# Neural Network In Python: Introduction, Structure And Trading Strategies – Part III

Contributor: QuantInsti

See the first and second installments of this series, in which Devang introduces perceptrons and the structure of neural networks, before continuing with this tutorial.

## Training the Neural Network

To simplify things in this neural network tutorial, we can say that there are two ways to code a program for performing a specific task.

• Define all the rules required by the program to compute the result given some input to the program.
• Develop a framework within which the program learns to perform the task by training itself on a dataset, adjusting the results it computes to be as close as possible to the actual results that have been observed.

The second approach is called training the model, and it is the one we will focus on.

The neural network will be given the dataset, which consists of the OHLCV (Open, High, Low, Close, Volume) data as the input; the output the model must learn to predict is the Close price of the next day. The actual value of the output will be represented by y, and the estimated value will be represented by ŷ (y hat).
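As a sketch of this setup, the inputs and targets could be built from a pandas DataFrame of OHLCV data. The values below are made up for illustration; in practice the data would come from a market data source.

```python
import pandas as pd

# Hypothetical OHLCV data; in practice this would be loaded from a data vendor.
data = pd.DataFrame({
    "Open":   [100.0, 101.5, 102.0, 101.0],
    "High":   [102.0, 103.0, 103.5, 102.5],
    "Low":    [99.5, 100.5, 101.0, 100.0],
    "Close":  [101.5, 102.0, 101.0, 102.2],
    "Volume": [1_000_000, 1_100_000, 950_000, 1_050_000],
})

# Inputs X: today's OHLCV. Target y: the next day's Close.
X = data[["Open", "High", "Low", "Close", "Volume"]]
y = data["Close"].shift(-1)

# The last row has no "next day", so drop it from both.
X, y = X.iloc[:-1], y.iloc[:-1]
print(X.shape, y.tolist())  # (3, 5) [102.0, 101.0, 102.2]
```

Note that `shift(-1)` aligns each row of inputs with the following day's Close, which is exactly the actual value y the model is trained against.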

The training of the model involves adjusting the weights of the variables for all the different neurons present in the neural network. This is done by minimizing the ‘Cost Function’.

Many cost functions are used in practice; one of the most popular is computed as half of the sum of squared differences between the actual and estimated values over the training dataset.
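This cost function can be written as C = ½ Σᵢ (yᵢ − ŷᵢ)². A minimal implementation, using made-up actual and estimated values for illustration:

```python
import numpy as np

def cost(y, y_hat):
    """Half of the sum of squared differences between actual and estimated values."""
    return 0.5 * np.sum((y - y_hat) ** 2)

y = np.array([102.0, 101.0, 102.2])      # actual next-day close prices (made up)
y_hat = np.array([101.5, 101.4, 102.0])  # the network's estimates (made up)

print(cost(y, y_hat))  # 0.5 * (0.5**2 + 0.4**2 + 0.2**2) ≈ 0.225
```

The factor of one half is a convenience: it cancels the 2 that appears when this function is differentiated during training.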

The way the neural network trains itself is by first computing the cost function for the training dataset for a given set of weights for the neurons. Then it goes back and adjusts the weights, followed by computing the cost function for the training dataset based on the new weights. The process of sending the errors back to the network for adjusting the weights is called backpropagation.

This is repeated many times, until the cost function has been minimized. Next, we will look in more detail at how the weights are adjusted and the cost function is minimized.
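To make this compute-cost, adjust-weights, repeat cycle concrete, here is a toy sketch with a single linear neuron, ŷ = w·x, on made-up data. The weight update here simply probes the cost on either side of the current weight and moves downhill; gradient descent, covered in the next part, computes the adjustment direction analytically instead of by probing.

```python
import numpy as np

# Toy data generated by a "true" weight of 2, so y = 2 * x.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

def cost(w):
    """Half the sum of squared differences for the single-weight model y_hat = w * x."""
    return 0.5 * np.sum((y - w * x) ** 2)

w = 0.0      # initial weight
step = 0.01  # how much to adjust the weight each iteration

for _ in range(1000):
    # Evaluate the cost on either side of the current weight and move
    # in whichever direction lowers it; stop when neither direction helps.
    if cost(w + step) < cost(w):
        w += step
    elif cost(w - step) < cost(w):
        w -= step
    else:
        break

print(round(w, 2))  # ≈ 2.0, the weight that generated the data
```

Even in this one-weight toy, each adjustment requires re-evaluating the cost over the whole training set, which hints at why efficiency matters as networks grow.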

The weights are adjusted to minimize the cost function. One way to do this is through brute force. Suppose we take 1,000 candidate values for a weight and evaluate the cost function for each of them. Plotting the cost function against these candidate values gives a curve of cost versus weight.

The best value for the weight is the one at which this curve reaches its minimum.
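A sketch of this brute-force search, reusing the toy single-weight neuron ŷ = w·x on made-up data:

```python
import numpy as np

# Toy data generated by a "true" weight of 2, so y = 2 * x.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

# Try 1000 candidate weights between 0 and 4.
candidates = np.linspace(0, 4, 1000)

# Cost for each candidate: half the sum of squared errors over the dataset.
costs = np.array([0.5 * np.sum((y - w * x) ** 2) for w in candidates])

# The best weight is the candidate where the cost curve is lowest.
best_w = candidates[np.argmin(costs)]
print(best_w)  # close to 2, limited only by the grid spacing
```

The accuracy of this approach is limited by the grid spacing, and the cost of it is one full pass over the training data per candidate weight.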

This approach can work for a neural network with a single weight to be optimized. However, as the number of weights and hidden layers grows, the number of weight combinations to evaluate grows exponentially, and the required computation quickly becomes infeasible.

The time required to train such a model would be enormous, even on the world’s fastest supercomputer. For this reason, it is essential to use a better, faster method for computing the weights of a neural network. That method is called Gradient Descent, and we will look into it in the next part of this neural network tutorial.