A neural network is a machine learning model that mimics the biological neurons in our brain. This does not mean it thinks the same way the brain does. Neural networks are the core of deep learning.
Now we can discuss how a neural network works. The perceptron is the simplest neural network. It is based on the threshold logic unit (TLU): a perceptron is composed of a single layer of TLUs, with each TLU connected to all of the inputs.
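The TLU idea above can be sketched in a few lines. This is a minimal illustration, assuming a simple step activation; the names `tlu` and `perceptron_layer` are my own, not standard API:

```python
# A threshold logic unit: weighted sum of inputs, then a step function.
def tlu(inputs, weights, threshold):
    # Weighted sum of all inputs
    total = sum(x * w for x, w in zip(inputs, weights))
    # Step function: output 1 if the sum reaches the threshold, else 0
    return 1 if total >= threshold else 0

# A perceptron is a single layer of TLUs, each connected to all the inputs.
def perceptron_layer(inputs, weight_rows, threshold):
    return [tlu(inputs, row, threshold) for row in weight_rows]
```

For example, `tlu([1, 1], [0.5, 0.5], 1.0)` fires because the weighted sum 1.0 reaches the threshold.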
The inputs of the perceptron are passed through neurons called input neurons. This is how a simple neural network looks:
A neural network consists of an input layer, which is formed by all the input neurons.
The hidden layer can consist of multiple neurons. When an ANN consists of a deep stack of hidden layers, it is called a deep neural network (DNN).
Each connection from the input layer to the hidden layer carries a value called a weight.
Now let's name the weights from the first input (Wa, Wb, Wc) and from the second input (Wx, Wy, Wz). The weights from the hidden layer to the output can be called (W1, W2, W3). Every layer except the output layer also contains a bias neuron (b1, b2, b3).
Now we have to train the network. We can pass an input, such as an image, which is split into two input values (U1 and U2). We train the neural network by checking whether the prediction equals the expected output; if not, we backpropagate and retrain.
Each input is multiplied by its weight and passed to the hidden layer. So the first node of the hidden layer would be calculated as
Wa·U1 + Wx·U2 + b1
b1 is the bias value added in the hidden layer. Now an activation function is triggered. The activation function of a node defines the output of that node given an input or set of inputs.
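For context, here are two widely used activation functions; note that the example that follows in this article uses its own custom rule instead:

```python
import math

def sigmoid(x):
    # Squashes any input into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def relu(x):
    # Passes positive values through, clips negatives to zero
    return max(0.0, x)
```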
Here the activation function subtracts the node's sum from a given value. If we set that value to 10, then the output of the hidden node would be
V1 = 10 − f(Wa·U1 + Wx·U2 + b1)
We can now use the same formula for the other hidden nodes. The outputs of the hidden layer can be called (V1, V2, V3). The output layer formula would be
W1·V1 + W2·V2 + W3·V3 + B
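Putting the formulas above together, the whole forward pass can be sketched as follows. This is an illustration using the article's naming (U1, U2 for inputs, Wa..Wz and W1..W3 for weights, b1..b3 and B for biases), and it treats f as the identity, so the activation is simply "subtract the sum from 10"; in practice a sigmoid or ReLU would be used instead:

```python
def activation(x, value=10):
    # The article's custom activation: subtract the node's sum from a fixed value
    return value - x

def forward(U1, U2, Wa, Wb, Wc, Wx, Wy, Wz, b1, b2, b3, W1, W2, W3, B):
    # Hidden layer: each node sums its weighted inputs plus its bias,
    # then applies the activation function
    V1 = activation(Wa * U1 + Wx * U2 + b1)
    V2 = activation(Wb * U1 + Wy * U2 + b2)
    V3 = activation(Wc * U1 + Wz * U2 + b3)
    # Output layer: weighted sum of the hidden outputs plus the bias B
    return W1 * V1 + W2 * V2 + W3 * V3 + B
```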
Now we receive the output and check whether it matches the expected value. If it does not, the weights and biases are adjusted, and the same process repeats until the desired output is obtained.
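The adjust-and-retry loop described above can be sketched with a single weight and a basic error-driven update. This is a stand-in for full backpropagation, and the learning rate `lr` is a hypothetical parameter, not something from the article:

```python
def train(x, target, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        prediction = w * x           # forward pass
        error = target - prediction  # compare with the expected output
        if abs(error) < 1e-6:        # stop once the output matches
            break
        w += lr * error * x          # nudge the weight to reduce the error
    return w

# After training, w * x approximates the target
w = train(x=2.0, target=6.0)
```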
This is how a simple neural network works. In future blogs I will explain in more detail how to implement an algorithm for a neural network.