
Solving the Black-Scholes PDE Numerically with Neural Networks: A Review of the Physics-Informed Neural Network Algorithm.

 

In these notes, I will provide numerical solutions to the Black-Scholes PDE:

eq1.jpg


The code is provided in Python (and, in the future, in C++) on the Ricard0000 GitHub page under Financial models.


The purpose:

The Black-Scholes PDE predicts the value of European-style options. The PDE relies on the lognormal random-walk and no-arbitrage assumptions. Here I discuss the equation mathematically. I would also like to teach whoever is reading this, in some detail, how to perform physics-informed neural network (PINN) calculations. This is a good place to start for people learning how to apply machine learning to solve PDEs; for more information, see Maziar Raissi's "Physics-Informed Neural Networks." Finally, I would like to review PINNs and offer some of my thoughts on the algorithm.


Exact Solution:

The exact solution to the Black-Scholes PDE is given by the integral operator:


eq2.png


For a derivation, see, for example, Chapter 7 of Paul Wilmott's Quantitative Finance. This gives the solution at each point (S,t) in terms of an integral with respect to S'. The integral can be evaluated by numerical quadrature, but one quickly finds this approach to be inefficient: unless the payoff function has compact support, the integral operator is difficult to use directly. Finite differences are therefore the most popular method for solving the PDE. Here I will instead introduce some algorithms based on machine learning.
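To make the quadrature approach concrete, here is a small sketch (pure Python, with illustrative parameter values S = 100, E = 100, r = 0.05, sigma = 0.2, tau = T − t = 1, all chosen for this example) that evaluates the integral operator for a call payoff with a truncated trapezoid rule and compares it to the closed-form Black-Scholes price. The truncation bound s_max is an assumption: since the call payoff does not have compact support, the integral must be cut off somewhere, which is exactly the difficulty noted above.

```python
from math import exp, log, sqrt, pi, erf

# Illustrative (assumed) parameters: spot S, strike E, rate r,
# volatility sigma, and time to expiry tau = T - t.
S, E, r, sigma, tau = 100.0, 100.0, 0.05, 0.2, 1.0

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_closed_form(S, E, r, sigma, tau):
    # Standard Black-Scholes call formula, used for comparison.
    d1 = (log(S / E) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - E * exp(-r * tau) * norm_cdf(d2)

def call_by_quadrature(S, E, r, sigma, tau, s_max=2000.0, n=40000):
    # Discounted expectation of the payoff under the lognormal
    # transition density, truncated at s_max and evaluated with
    # the trapezoid rule.  The call payoff is zero below E, so
    # the integration starts there.
    h = (s_max - E) / n
    total = 0.0
    for i in range(n + 1):
        sp = E + i * h
        payoff = sp - E
        z = log(sp / S) - (r - 0.5 * sigma**2) * tau
        density = exp(-z * z / (2.0 * sigma**2 * tau)) / (
            sp * sigma * sqrt(2.0 * pi * tau))
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        total += w * payoff * density
    return exp(-r * tau) * h * total

print(call_closed_form(S, E, r, sigma, tau))    # ~10.45
print(call_by_quadrature(S, E, r, sigma, tau))  # close, if s_max is large enough
```

Note that a reasonable accuracy here requires tens of thousands of quadrature points per evaluation point (S,t), which is what makes this route expensive compared with solving the PDE directly.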



Call Options:

The initial condition (or rather final condition, since the equation is solved backward in time from expiry t = T) for the call option is given by:

eq3.png


We need to specify this payoff, along with the parameters sigma, r, E, and T, before solving the PDE.


Boundary conditions for call options:


The next important step is to set up the boundary conditions. The boundary condition that I will use for the call option is given by:

eq4.png

This boundary condition can be derived from the integral operator mentioned earlier.
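As a minimal sketch, the final and boundary conditions for the call can be written as small Python functions (the parameter values E, r, T are assumptions chosen for illustration): the payoff max(S − E, 0) at t = T, V = 0 on the S = 0 boundary, and the large-S behavior V ≈ S − E e^{−r(T−t)}, i.e. the option behaves like a forward contract for large S.

```python
from math import exp

E, r, T = 100.0, 0.05, 1.0  # assumed strike, rate, and expiry

def final_condition(S):
    # Call payoff at expiry t = T.
    return max(S - E, 0.0)

def boundary_left(t):
    # At S = 0 the call is worthless for all t.
    return 0.0

def boundary_right(S, t):
    # Large-S asymptotic: spot minus the discounted strike.
    return S - E * exp(-r * (T - t))
```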


Feed Forward Neural Network Structure:


Now that we have set up our problem, we can specify an ansatz; that is, we represent the solution V(S,t) using a deep neural network (see Maziar Raissi's "Physics-Informed Neural Networks"). The structure I will use is given below:

eq5.png

To use the network structure, we take as input data the grid points (S_m, t_n) and as output data the corresponding values V(S_m, t_n). Notice that the grid is tiled (flattened) into one long list of points before being fed to the network; once training is finished, we will have to change the data structure back in order to visualize the solution on the grid.
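A sketch of this data handling (NumPy, with assumed grid sizes, an assumed hidden-layer width of 20, and random untrained weights): the (S_m, t_n) grid is tiled into a flat array of (S, t) pairs, passed through a small sigmoid feed-forward network, and the prediction is reshaped back onto the grid for plotting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed grid: M spatial points, N time points.
M, N = 50, 20
S = np.linspace(0.0, 200.0, M)
t = np.linspace(0.0, 1.0, N)

# Tile the grid into one long list of (S, t) input pairs.
Sg, tg = np.meshgrid(S, t, indexing="ij")       # each (M, N)
X = np.stack([Sg.ravel(), tg.ravel()], axis=1)  # (M*N, 2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden sigmoid layer with an assumed width of 20 units.
W1 = rng.normal(size=(2, 20)); b1 = np.zeros(20)
W2 = rng.normal(size=(20, 1)); b2 = np.zeros(1)

def predict(X):
    # Feed-forward pass: sigmoid hidden layer, linear output.
    return sigmoid(X @ W1 + b1) @ W2 + b2

V_flat = predict(X)             # (M*N, 1): one value per tiled point
V_grid = V_flat.reshape(M, N)   # back on the grid for visualization
```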


Defining the loss function:

Now we define a loss function which will be minimized using a conjugate gradient method.

eq6.png

Notice that this loss function seeks to satisfy the PDE together with its boundary conditions. The norms used for the various terms do not necessarily need to be the supremum or L2 norm; other norms could yield equally accurate solutions.
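To make the loss concrete, here is a sketch (NumPy, one hidden sigmoid layer with assumed random weights and illustrative parameters r, sigma, E, T) of its three terms: the PDE residual on interior collocation points, the final-condition misfit at t = T, and the boundary misfits. For a single hidden layer the derivatives dV/dt, dV/dS, d2V/dS2 can be written analytically using sigma' = sigma(1 − sigma) and sigma'' = sigma'(1 − 2 sigma); a full PINN implementation would obtain them by automatic differentiation instead, and would then minimize this quantity over the weights.

```python
import numpy as np

rng = np.random.default_rng(1)
r, sigma_vol = 0.05, 0.2   # assumed rate and volatility
E, T = 100.0, 1.0          # assumed strike and expiry

# One-hidden-layer network V(S, t) = w2 . sig(W1 [S, t] + b1) + b2.
H = 20
W1 = rng.normal(scale=0.1, size=(2, H)); b1 = np.zeros(H)
w2 = rng.normal(scale=0.1, size=H);      b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def V_and_derivs(S, t):
    # Analytic derivatives of the one-hidden-layer network.
    Z = np.stack([S, t], axis=1) @ W1 + b1   # (n, H) pre-activations
    a = sig(Z)
    d1 = a * (1.0 - a)                       # sigmoid'
    d2 = d1 * (1.0 - 2.0 * a)                # sigmoid''
    V    = a @ w2 + b2
    V_S  = (d1 * W1[0]) @ w2                 # dV/dS
    V_t  = (d1 * W1[1]) @ w2                 # dV/dt
    V_SS = (d2 * W1[0] ** 2) @ w2            # d2V/dS2
    return V, V_t, V_S, V_SS

def pinn_loss(S_int, t_int, S_fin, S_bnd, t_bnd):
    V, V_t, V_S, V_SS = V_and_derivs(S_int, t_int)
    # Black-Scholes residual on interior collocation points.
    res = V_t + 0.5 * sigma_vol**2 * S_int**2 * V_SS + r * S_int * V_S - r * V
    # Final condition at t = T: V(S, T) = max(S - E, 0).
    V_T = V_and_derivs(S_fin, np.full_like(S_fin, T))[0]
    fin = V_T - np.maximum(S_fin - E, 0.0)
    # Boundaries: V(0, t) = 0 and V(S_max, t) ~ S_max - E exp(-r (T - t)).
    V_lo = V_and_derivs(np.zeros_like(t_bnd), t_bnd)[0]
    V_hi = V_and_derivs(S_bnd, t_bnd)[0]
    hi = V_hi - (S_bnd - E * np.exp(-r * (T - t_bnd)))
    return (np.mean(res**2) + np.mean(fin**2)
            + np.mean(V_lo**2) + np.mean(hi**2))

# Assumed collocation points: random interior, uniform boundary/final grids.
S_int = rng.uniform(0.0, 200.0, 200)
t_int = rng.uniform(0.0, T, 200)
S_fin = np.linspace(0.0, 200.0, 50)
t_bnd = np.linspace(0.0, T, 50)
S_bnd = np.full_like(t_bnd, 200.0)
print(pinn_loss(S_int, t_int, S_fin, S_bnd, t_bnd))
```

With untrained random weights the loss is large; training drives all four mean-square terms toward zero simultaneously.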


Results from Physics Informed Neural Networks:

See my GitHub page for the code. Below I plot the neural-network solution for a call option, the exact solution, and the absolute value of their difference (the error) for comparison.

ExactBlackScholes.png
NNfigBlackScholes.png
ErrorBlackScholes.png

Final thoughts:


  • The neural network solution does resemble the exact solution. However, the error between the two could be bothersome; granted, better hyper-parameters may exist.

  • The solution could be improved by using more parameters and more training. This is the downside of the method: you need to tune the hyper-parameters and wait for training to complete.

  • An upside of this method: you don't necessarily need a uniform grid, so the algorithm can be easier to implement when you find yourself working with scattered data.

  • In this particular simulation I used a sigmoid activation function. Thus, the neural network solution tends to be smooth.

  • For this low-dimensional equation, conventional time-stepping algorithms are more accurate. Raissi does provide alternative algorithms (time-stepping schemes) which could work better.

Future Endeavor:


  • The neural network structure used here is a basic feed-forward one. There are opportunities to discover alternative, more accurate network structures.

  • Neural networks usually beat conventional PDE algorithms on high-dimensional problems. Thus the Black-Scholes equation with basket options is also worth researching with neural networks, and it is something I am actively working on.

