Neural Networks
A neural network loosely models the neurons in a brain. Each neuron takes inputs, processes them, and feeds its output forward to the next neurons. The network as a whole takes inputs and produces an output. It is a complex adaptive system - it can change its internal structure based on the data flowing through it. Each connection between neurons has its own weight, and adaptivity is achieved by adjusting these weights. The learning (or training) process consists of adjusting the weights.
Strategies for learning
- Supervised learning: needs a "teacher" that's smarter than the network to train it. The network is trained on problems with known answers; once trained, it can solve similar new problems.
- Unsupervised learning: used when there is no dataset with known answers. Good for discovering hidden patterns in data; a typical application is clustering.
- Reinforcement learning: built on observation. The network is rewarded for some results and punished for others.
Standard uses of neural networks
- Pattern recognition
- Time series prediction
- Signal processing
- Control: self-driving vehicles
- Soft sensors: combining readings from multiple sensors to produce a single result
- Anomaly detection
Perceptron
A perceptron is a computational model of a single neuron. It was invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory.
Here is an example using a single perceptron to decide whether a point is on one side of a line or the other.
NEW WEIGHT = WEIGHT + ERROR * INPUT * LEARNING CONSTANT
- I'm not happy with the result of the perceptron; it doesn't always converge to the correct line. (For linearly separable data like this, convergence is guaranteed in theory, so the learning constant or the number of training iterations may need tuning.)
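The weight-update rule above can be sketched as a small Python program. This is a minimal illustration, not code from the text: the line y = 2x + 1, the learning constant, and the epoch count are all my own choices.

```python
import random

def train_perceptron(points, labels, lr=0.01, epochs=200):
    """Train a single perceptron on 2D points with labels +1 / -1."""
    # Three weights: one for x, one for y, one for the bias input.
    w = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for (x, y), target in zip(points, labels):
            inputs = [x, y, 1.0]  # bias input is fixed at 1
            total = sum(wi * xi for wi, xi in zip(w, inputs))
            guess = 1 if total >= 0 else -1
            error = target - guess  # 0 when correct, otherwise +2 or -2
            # NEW WEIGHT = WEIGHT + ERROR * INPUT * LEARNING CONSTANT
            w = [wi + error * xi * lr for wi, xi in zip(w, inputs)]
    return w

def classify(w, x, y):
    """Which side of the learned boundary is the point on?"""
    return 1 if w[0] * x + w[1] * y + w[2] >= 0 else -1
```

For example, labeling random points as +1 when they lie above the (hypothetical) line y = 2x + 1 and -1 otherwise, then training on those points, should recover a boundary close to that line.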
Networks
A single perceptron can only solve linearly separable problems. For anything else, such as XOR, you need a multi-layer perceptron (a network with one or more hidden layers).
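To make this concrete, here is a sketch of a tiny two-layer network that computes XOR, something no single perceptron can do. The weights are hand-picked for illustration rather than learned: one hidden neuron computes OR, the other AND, and the output neuron fires when OR is true but AND is not.

```python
def step(x):
    """Step activation: fire (1) when the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def xor_net(a, b):
    # Hidden layer: hand-picked weights so the neurons compute OR and AND.
    h_or  = step(1.0 * a + 1.0 * b - 0.5)   # fires if a OR b
    h_and = step(1.0 * a + 1.0 * b - 1.5)   # fires if a AND b
    # Output layer: fires if h_or is on and h_and is off, i.e. XOR.
    return step(1.0 * h_or - 1.0 * h_and - 0.5)
```

The hidden layer is what buys the extra power: each hidden neuron draws its own line through the input space, and the output neuron combines the two half-planes into a region no single line could carve out.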