Artificial neural networks have been increasingly used to solve complex problems in fields such as computer vision, speech recognition, and natural language processing. But what makes them so effective? This article explores the biological analogies between neural networks and the human brain to help build a better understanding of how they work and how they learn.
What is a Neural Network?
A neural network is a computing system composed of many interconnected processing elements, known as neurons. Neurons are organized into layers, and each connection between neurons carries a weight. These weights are adjusted through a process of learning.
A neuron “fires” when an input signal passes through it, and its output is determined by the weights of its incoming connections. The output of one layer becomes the input to the next, and the output of the final layer is the result of the neural network’s processing.
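This layer-by-layer flow can be sketched in a few lines of Python. The layer sizes, weights, and input below are made up purely for illustration, assuming sigmoid activations:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """Each output neuron 'fires' on the weighted sum of its inputs."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two layers: 2 inputs -> 2 hidden neurons -> 1 output neuron.
hidden = layer_forward([1.0, 0.5],
                       weights=[[0.4, -0.2], [0.3, 0.8]],
                       biases=[0.0, -0.1])
output = layer_forward(hidden, weights=[[1.2, -0.7]], biases=[0.05])
print(output)  # the final layer's activation, a value between 0 and 1
```

Note how the hidden layer's output is passed straight in as the next layer's input: the same `layer_forward` function is reused at every depth.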
Biological Analogies
Excitability Threshold
The output of a logistic unit must be between 0 and 1. In a classifier, we must choose which class to predict (e.g. is this a picture of a cat or a dog?). If 1 = cat and 0 = dog, and the output is 0.7, then we say it is a cat. This is because our model is saying, “the probability that this is an image of a cat is 70%”. The 50% line acts as the “excitability threshold” of a neuron, i.e. the threshold at which an action potential would be generated.
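A minimal sketch of this thresholding, with illustrative (not trained) weights and features:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(features, weights, bias):
    """Return the predicted class and the cat-probability of a logistic unit."""
    p_cat = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
    # 0.5 plays the role of the neuron's "excitability threshold"
    return ("cat" if p_cat >= 0.5 else "dog"), p_cat

label, p = predict([2.0, -1.0], weights=[0.8, 0.3], bias=-0.2)
# weighted sum = 1.6 - 0.3 - 0.2 = 1.1; sigmoid(1.1) ≈ 0.75, above threshold
print(label, round(p, 2))  # prints: cat 0.75
```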
Excitatory and Inhibitory Connections
When sending signals to other neurons, a neuron can send either an “excitatory” or an “inhibitory” signal. Excitatory connections promote action potentials, while inhibitory connections suppress them. These are analogous to the weights of a logistic regression unit: a very positive weight is a very excitatory connection, and a very negative weight is a very inhibitory one.
Repetition and Familiarity
The old adage “practice makes perfect” can be applied to neural networks. When you practice something over and over again, you become better at it. Neural networks are the same way: if you train a neural network on the same or similar examples again and again, it gets better at classifying those examples. Your mind, by practicing a task, is lowering its internal error curve for that particular task. In code, this shows up in backpropagation, the training algorithm for a neural network. Essentially, a for-loop looks at the same samples again and again, and backpropagation is applied to them each time.
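That for-loop of repeated practice can be sketched as follows. The data set, learning rate, and epoch count are invented for illustration; the point is that each pass over the same samples shrinks the total error:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: the label simply equals the first feature.
samples = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
w, b, lr = [0.0, 0.0], 0.0, 0.5

for epoch in range(200):           # look at the same samples again and again
    total_error = 0.0
    for x, y in samples:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y                # how wrong the prediction was
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]  # nudge the weights
        b -= lr * err
        total_error += abs(err)
    # total_error shrinks as the unit grows "familiar" with the samples

print(round(total_error, 3))  # small after 200 passes of practice
```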
The Learning Process
When a neural network is trained on a task, it is adjusting the weights of the connections between neurons to optimize the output. The learning process involves making small changes to the weights based on the error of the output. This process is known as backpropagation. The error is calculated by comparing the output of the neural network to the desired output. The error is propagated back through the network, and the weights are adjusted accordingly.
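A single backpropagation step for a tiny 2-input, 1-output network can be written out by hand. Everything here is illustrative (made-up weights, sigmoid units, squared error), but it shows each stage the paragraph describes: compare the output to the target, push the error back through the layers, and adjust each weight accordingly:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = [1.0, 0.5]                     # input
w_h = [[0.4, -0.2], [0.3, 0.8]]    # hidden-layer weights (2 neurons)
w_o = [1.2, -0.7]                  # output-layer weights
target, lr = 1.0, 0.1

# Forward pass
h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_h]
y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)))

# Error at the output, then propagated back through the network
delta_o = (y - target) * y * (1 - y)             # output-layer error signal
delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j])  # share of error per hidden unit
           for j in range(2)]

# Weights adjusted in proportion to the error they contributed to
w_o = [w_o[j] - lr * delta_o * h[j] for j in range(2)]
w_h = [[w_h[j][i] - lr * delta_h[j] * x[i] for i in range(2)]
       for j in range(2)]

# Re-run the forward pass with the updated weights
h2 = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_h]
y2 = sigmoid(sum(w * hi for w, hi in zip(w_o, h2)))
print(y2 > y)  # True: one update moved the output toward the target
```

In a real training run this single step sits inside the epoch loop from the previous section, repeated over every sample in the data set.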
The learning process is an iterative process. The more data the neural network is exposed to, the better it gets at performing the task. This is why neural networks are so powerful. They can learn complex tasks by being exposed to large amounts of data.
Conclusion
In conclusion, artificial neural networks are powerful computing systems that are based on biological analogies. They are composed of interconnected neurons, and the weights of the connections between neurons are adjusted through a learning process. This process involves making small changes to the weights based on the error of the output, and is known as backpropagation. The more data the neural network is exposed to, the better it gets at performing the task. Through this process, neural networks can learn complex tasks with large amounts of data.