Neural Networks Intuition
1. Introduction to Neural Networks
Neural networks (and Deep Learning) are algorithms designed to mimic the biological brain to recognize patterns and make predictions. In modern machine learning, using a trained neural network to make predictions is formally called Inference.
Why the sudden growth in Neural Networks?
As shown in the figure below, traditional learning algorithms (like Linear or Logistic Regression) tend to plateau in performance as data volume increases. Neural networks, especially larger ones, continue to improve as more data is provided.

2. Biological vs. Artificial Neurons
The design of artificial neural networks is loosely inspired by the human brain.
- Biological Neuron: Receives electrical signals through dendrites, processes them in the cell body, and sends signals through the axon.
- Artificial Neuron: Receives numerical inputs ($x$), performs a mathematical calculation, and outputs an activation ($a$).

3. Demand Prediction Example
To understand how a neuron works, consider a "Demand Prediction" task for a T-shirt business.
Input ($x$): Price of the T-shirt.
Output ($a$): Probability of the shirt being a "top seller" (a binary yes/no outcome).
The Activation Function
A single neuron applies a function $g$ to the input to produce an activation ($a$). For binary classification, we use the Logistic (Sigmoid) Function:

$$a = g(z) = \frac{1}{1 + e^{-z}}, \qquad z = wx + b$$
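As a sketch, a single sigmoid neuron takes only a few lines of NumPy. The weight $w$ and bias $b$ below are made-up values purely for illustration; in practice they are learned from data:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes any real number into the interval (0, 1)
    return 1 / (1 + np.exp(-z))

# Hypothetical parameters for the price -> "top seller" neuron
w, b = -0.5, 10.0

x = 25.0        # price of the T-shirt
z = w * x + b   # linear part of the neuron
a = sigmoid(z)  # activation: estimated probability of being a top seller
```

Because the sigmoid output always lies between 0 and 1, `a` can be read directly as a probability.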
Complex Demand Prediction (Neural Network Layer)
In a real-world scenario, we use multiple features:
- Price
- Shipping Cost
- Marketing Cost
- Material Quality
Instead of one neuron, we group neurons into layers. These layers create "intermediate" features (activations) like:
- Affordability: Derived from Price and Shipping Cost.
- Awareness: Derived from Marketing.
- Perceived Quality: Derived from Price and Material.
The final output layer uses these activations to determine the probability of the product being a top seller.
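The layer structure above can be sketched as two matrix operations: one hidden layer that maps the four raw features to three activations, and an output layer that maps those activations to a single probability. The weights here are random placeholders, and the feature values and the "affordability / awareness / perceived quality" labels are only illustrative; a trained network learns its own intermediate features:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dense(a_in, W, b):
    # Each column of W holds the weights of one neuron in the layer
    return sigmoid(a_in @ W + b)

# 4 raw features: price, shipping cost, marketing cost, material quality
x = np.array([20.0, 5.0, 1.0, 0.8])

# Placeholder parameters: hidden layer of 3 neurons, then 1 output neuron
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

a1 = dense(x, W1, b1)   # intermediate activations (e.g. "affordability")
a2 = dense(a1, W2, b2)  # final activation: probability of a top seller
```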
4. Key Terminology
- Input Layer: The vector containing the raw features.
- Hidden Layer: The layers between the input and output. They are "hidden" because we don't see the specific values in the training set; the model determines them.
- Activations ($a$): The values passed from one layer to the next.
- Architecture: The specific layout of the network, including the number of hidden layers and the number of neurons per layer.
- Feature Engineering: Unlike traditional models where humans must manually create features (like "Affordability"), a Neural Network learns these features automatically.

5. Multi-Layer Perceptrons (MLP)
When we stack multiple hidden layers, we create a Multi-Layer Perceptron. This allows the network to learn increasingly complex hierarchies of features.
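Stacking layers is just repeated application of the same dense computation: the activations of one layer become the inputs of the next. A minimal sketch, with randomly initialized placeholder weights and an assumed layer layout of 4 inputs, two hidden layers, and 1 output:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(x, layers):
    # Propagate the input through a stack of (W, b) layers
    a = x
    for W, b in layers:
        a = sigmoid(a @ W + b)
    return a

rng = np.random.default_rng(1)
sizes = [4, 5, 3, 1]  # input, hidden layer 1, hidden layer 2, output
layers = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

out = forward(np.ones(4), layers)
```

Adding more entries to `sizes` deepens the network without changing `forward` at all, which is what makes the MLP architecture so easy to scale.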

Example: Face Recognition
Consider an image of size 1000 × 1000 pixels. This creates a vector of 1 million pixel values.
- Layer 1: Detects tiny edges and short lines.
- Layer 2: Combines edges to detect features like eyes, noses, or mouths.
- Layer 3: Combines facial features to identify coarser face shapes.
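Before any of these layers run, the 2-D pixel grid must be unrolled into a single feature vector for the input layer. In NumPy that is a one-line reshape (a 1000 × 1000 grayscale image is assumed here, matching the 1 million pixel values above):

```python
import numpy as np

# A grayscale image is a 2-D grid of pixel intensities
img = np.random.default_rng(0).random((1000, 1000))

# Unroll the grid into the 1-million-element input vector
x = img.reshape(-1)
```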

Example: Car Classification
Similarly, for cars:
- Layer 1: Detects edges.
- Layer 2: Detects parts (wheels, windows, bumpers).
- Layer 3: Detects full car shapes/models.

6. Practice Quiz
Question 1

Which of these are terms used to refer to components of an artificial neural network?
A) Activation function
B) Layers
C) Axon
D) Neurons
Answer:
A, B, D
- A (Correct): The activation function is the function a neuron applies to compute its output.
- B (Correct): A layer is a grouping of neurons.
- D (Correct): A neuron is the fundamental unit of the network.
- C (Incorrect): "Axon" is a biological term, not typically used in artificial network code/math.
Question 2
True/False? Neural networks take inspiration from, but do not very accurately mimic, how neurons in a biological brain learn.
A) True
B) False
Answer:
A) True
Artificial neural networks use a very simplified mathematical model compared to the biological complexity of the human brain.