Mathematics for Machine Learning: The Engine Under the Hood
TL;DR
This guide covers the "Big Three": Linear Algebra (handling data), Calculus (optimizing models), and Probability (handling uncertainty).

Introduction: Why Math Matters
If Machine Learning is a car, Code is the steering wheel, but Math is the engine. You can drive without knowing how a combustion engine works, but if you want to be a mechanic (or build your own car), you need to look under the hood.
Many developers are intimidated by the math notation ($\sum$, $\nabla$, $\int$), but the concepts themselves are often quite intuitive. We use math in ML for three main reasons:
Representation: How do we turn a picture of a cat into numbers? (Linear Algebra)
Optimization: How do we make the model smarter? (Calculus)
Uncertainty: How confident are we in the prediction? (Probability)
1. Linear Algebra: The Language of Data
Computers don't see images or text; they see giant grids of numbers. Linear Algebra is the math of manipulating these grids. It allows us to process massive amounts of data simultaneously (parallelism), which is why GPUs are so good at AI.
Vectors (Lists of Numbers)
Imagine a house. To a computer, it's just a list of features. This list is a Vector.
| Feature | Value |
| --- | --- |
| Size (sq ft) | 2000 |
| Bedrooms | 3 |
| Bathrooms | 2 |
Vector Representation: [2000, 3, 2]
Analogy: An arrow pointing to a specific spot in a 3D space (Size axis, Bedroom axis, Bathroom axis).
Why it matters: If two houses have similar vectors (arrows pointing to nearby spots), they are likely similar houses.
Matrices (Spreadsheets of Numbers)
If you have a list of 100 houses, you stack those vectors together to make a Matrix.
Analogy: An Excel spreadsheet. Rows are data points (houses), columns are features (size, rooms).
Operations: We can multiply a Matrix by a Vector to transform all the data points at once (e.g., converting square feet to square meters for all 100 houses in one step).
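Here's a minimal NumPy sketch of that idea (the house numbers below are made up for illustration): we stack a few house vectors into a matrix, and a single matrix multiplication rescales every row at once.

```python
import numpy as np

# Each row is one house: [size (sq ft), bedrooms, bathrooms]
houses = np.array([
    [2000, 3, 2],
    [1500, 2, 1],
    [3200, 4, 3],
])

# One matrix multiplication converts the size column to square meters
# for every house at once (1 sq ft is about 0.0929 sq m; other columns unchanged).
to_metric = np.diag([0.0929, 1.0, 1.0])
print(houses @ to_metric)
# rows become roughly [185.8, 3, 2], [139.35, 2, 1], [297.28, 4, 3]
```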
Deep Dive: The Dot Product (Measuring Similarity)
The Dot Product is the most common operation in ML. It multiplies two vectors to see how much they point in the same direction. This is how Netflix knows what you like.
Toy Example: Netflix Recommendations
Imagine we represent users and movies with just 2 numbers: [Action Score, Romance Score].
User A (Action Fan): [5, 1] (Loves action, hates romance)
Movie 1 (Die Hard): [4, 1] (High action, low romance)
Movie 2 (The Notebook): [1, 5] (Low action, high romance)
The Math: To see if User A will like a movie, we calculate the Dot Product: $(\text{User Action} \times \text{Movie Action}) + (\text{User Romance} \times \text{Movie Romance})$
1. User A vs. Die Hard:
Calculation: $(5 \times 4) + (1 \times 1)$
Result: $20 + 1 = \mathbf{21}$
Verdict: High Score! Recommend it.
2. User A vs. The Notebook:
Calculation: $(5 \times 1) + (1 \times 5)$
Result: $5 + 5 = \mathbf{10}$
Verdict: Low Score. Don't recommend.
What it means: The math successfully identified that the user's taste "aligned" with Die Hard. The vectors pointed in the same direction. If the vectors pointed in opposite directions (e.g., User likes Action, Movie is Romance), the score would be low.
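Here is the same calculation as a few lines of NumPy, using the scores from the toy example above:

```python
import numpy as np

user_a = np.array([5, 1])        # loves action, hates romance
die_hard = np.array([4, 1])      # high action, low romance
the_notebook = np.array([1, 5])  # low action, high romance

# Dot product = multiply element-wise, then sum: a measure of alignment
print(np.dot(user_a, die_hard))      # 21 -> high score, recommend
print(np.dot(user_a, the_notebook))  # 10 -> low score, skip
```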
2. Calculus: The Art of Optimization
ML models start out "stupid" (random guesses). Calculus tells them how to get smarter. It is the study of change.
Derivatives (The Slope)
Imagine you are standing on a mountain (the "Error Mountain") at night. You want to get to the bottom (Zero Error). You can't see the bottom, but you can feel the slope under your feet.
The Derivative tells you which way is "down."
If the slope is steep, you are far from the bottom.
If the slope is flat (zero), you are at the bottom (or a plateau).
Deep Dive: Gradient Descent (Walking Down the Mountain)
This is the algorithm used to train almost every Neural Network. Let's trace a single step with real numbers.
The Goal: Find the number $x$ that makes the Error lowest.
The Error Function: Let's say Error $= x^2$ (a simple bowl shape). The bottom is obviously at $x=0$.
Step-by-Step Walkthrough:
Start: We initialize randomly at $x = 3$.
- Current Error: $3^2 = 9$ (High error).
Calculate Gradient (Slope):
Calculus tells us the slope of $x^2$ is $2x$.
At $x=3$, the slope is $2 \times 3 = \mathbf{6}$.
Positive slope means we are on the right side of the bowl, so we need to go left (subtract).
Take a Step (Update):
We use a Learning Rate (step size) of 0.1.
Formula: $\text{New } x = \text{Old } x - (\text{Learning Rate} \times \text{Slope})$
Calculation: $3 - (0.1 \times 6)$
$3 - 0.6 = \mathbf{2.4}$
Check Result:
New Error: $2.4^2 = \mathbf{5.76}$.
Success! The error went down from 9 to 5.76.
If we repeat this 100 times, $x$ will eventually reach 0. This is exactly how a Neural Network learns, just with millions of $x$'s (weights) at once.
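Here is that entire loop in plain Python (no libraries needed); the first step reproduces the numbers from the walkthrough:

```python
# Gradient descent on Error(x) = x^2, starting at x = 3 with learning rate 0.1
x = 3.0
learning_rate = 0.1

for step in range(100):
    slope = 2 * x                    # derivative of x^2 is 2x
    x = x - learning_rate * slope    # New x = Old x - (learning rate * slope)
    if step == 0:
        print(round(x, 4), round(x ** 2, 4))   # 2.4  5.76, matching the walkthrough

print(x)   # about 6e-10: essentially at the bottom of the bowl
```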
Why Learning Rate Matters:
Too Big (e.g., 1.0): $3 - (1.0 \times 6) = -3$. You jump to the other side of the bowl. Next step: $-3 - (1.0 \times -6) = 3$. You bounce back and forth forever.
Too Small (e.g., 0.0001): You take tiny baby steps. It takes a million years to reach the bottom.
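A quick sketch makes both failure modes visible, reusing the same toy error function:

```python
def descend(learning_rate, x=3.0, steps=5):
    """Take a few gradient descent steps on Error(x) = x^2 and record x."""
    history = [x]
    for _ in range(steps):
        x = x - learning_rate * (2 * x)
        history.append(round(x, 4))
    return history

print(descend(0.1))     # [3.0, 2.4, 1.92, 1.536, 1.2288, 0.983]  -> converging
print(descend(1.0))     # [3.0, -3.0, 3.0, -3.0, 3.0, -3.0]       -> bouncing forever
print(descend(0.0001))  # [3.0, 2.9994, 2.9988, ...]              -> barely moving
```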
3. Probability & Statistics: Handling Uncertainty
The real world is messy. Data is noisy. Probability helps us quantify how sure we are.
Distributions (The Shape of Data)
Most data follows a pattern.
Normal Distribution (Bell Curve): Heights, IQ scores, errors. Most things are average; few are extreme.
Usage: We often assume data is "normally distributed" to make math easier. For example, if we know the average height is 5'9" and the standard deviation is 3 inches, we can calculate exactly what percentage of people are taller than 6 feet.
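That exact question is a one-line check with SciPy, assuming heights really do follow a normal distribution with those parameters:

```python
from scipy.stats import norm

mean, std = 69, 3   # average height 5'9" (69 inches), standard deviation 3 inches

# Probability of being taller than 6 feet (72 inches): the "survival function" (1 - CDF)
print(norm.sf(72, loc=mean, scale=std))   # about 0.159, i.e. roughly 16% of people
```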
Deep Dive: Conditional Probability (Bayes' Theorem)
This helps us update our beliefs based on new evidence. It's the logic behind spam filters and medical diagnosis.
Toy Example: The Spam Filter
Fact 1: 1% of all emails are Spam ($P(\text{Spam}) = 0.01$).
Fact 2: 80% of Spam emails contain the word "Free" ($P(\text{Free}|\text{Spam}) = 0.8$).
Fact 3: 10% of Normal emails contain the word "Free" ($P(\text{Free}|\text{Normal}) = 0.1$).
Question: If an email has the word "Free", what is the probability it is Spam?
The Logic (Bayes' Theorem): $P(\text{Spam}|\text{Free}) = \frac{P(\text{Free}|\text{Spam}) \times P(\text{Spam})}{P(\text{Free})}$, i.e., the chance an email is Spam AND contains "Free", divided by the total chance of seeing "Free".
Numerator (Spam match): $0.8 \times 0.01 = \mathbf{0.008}$
Denominator (Total match):
From Spam: $0.008$
From Normal: $0.1 \times 0.99 = 0.099$
Total: $0.008 + 0.099 = \mathbf{0.107}$
Result: $0.008 / 0.107 \approx \mathbf{7.5\%}$
Insight: Even though "Free" is very common in spam, because spam itself is rare (1%), seeing the word "Free" only raises the spam probability to 7.5%. It's not a guarantee! The filter needs more words (evidence) to be sure.
This is why you sometimes see "False Positives" (Normal emails marked as Spam). The probability crossed the threshold (e.g., 50%), but it was actually normal.
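The whole update, written out as plain Python with the numbers from the toy example:

```python
# Bayes' theorem with the spam-filter numbers above
p_spam = 0.01               # P(Spam): 1% of all email is spam
p_free_given_spam = 0.80    # P("Free" | Spam)
p_free_given_normal = 0.10  # P("Free" | Normal)

# Total probability that a random email contains "Free"
p_free = p_free_given_spam * p_spam + p_free_given_normal * (1 - p_spam)

# Posterior: probability the email is spam, given that it contains "Free"
p_spam_given_free = (p_free_given_spam * p_spam) / p_free
print(f"{p_spam_given_free:.1%}")   # 7.5%
```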
Summary & Key Takeaways
Linear Algebra (Dot Product): Calculates similarity between data points (Vectors). Used for recommendations.
Calculus (Gradient Descent): Iteratively reduces error by following the slope. Used for training models.
Probability (Bayes): Updates predictions based on new evidence. Used for handling uncertainty.
Practice Quiz: Test Your Math Intuition
1. Scenario: You have two vectors: A = [1, 1] and B = [-1, -1]. What does their Dot Product tell you?
A) They are identical.
B) They are opposites.
C) They are unrelated.
2. Scenario: You are training a model using Gradient Descent. The error is bouncing back and forth between 10 and 100 and never going down. What is likely the problem?
A) Learning Rate is too small.
B) Learning Rate is too big.
C) The data is bad.
3. Scenario: A medical test is 99% accurate. You test positive for a very rare disease (only 1 in a million people have it). What is the probability you actually have the disease?
A) 99%
B) 1%
C) Very low (much less than 1%)
(Answers: 1-B, 2-B, 3-C)
What's Next?
Now that we speak the language of the engine, let's take it for a spin. In the next post, we'll dive back into algorithms with Supervised Learning, applying this math to predict the future.
Did you find this guide helpful? Share it with a friend who wants to learn AI!

Written by Abstract Algorithms (@abstractalgorithms)
