Diffstat (limited to 'README.md'):
 README.md | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 4d6833d..47b3583 100644
--- a/README.md
+++ b/README.md
@@ -6,11 +6,17 @@ A simple linear regression implementation using gradient descent to predict car
- Python 3
- matplotlib (for visualization only)
+- pandoc (to generate HTML documentation)
```
pip install matplotlib
```
+To generate the HTML version of this README (to see the equations):
+```
+pandoc README.md --mathml -s -o README.html
+```
+
## Usage
### Train the model
@@ -51,4 +57,22 @@ The model fits a linear function:
estimatePrice(mileage) = θ0 + θ1 * mileage
```
-Parameters are found via gradient descent with min-max normalization on the input data. After training, thetas are denormalized so they work directly on raw mileage values.
+Parameters are found via gradient descent. The input data is normalized before training using min-max normalization, and the resulting thetas are denormalized afterward so they work directly on raw mileage values.
+
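+As a rough illustration, one iteration of the update loop might look like the
+sketch below (hypothetical code, not the project's actual implementation; it
+assumes the data has already been normalized to [0, 1]):
+
+```
+def train(km, price, learning_rate=0.1, iterations=1000):
+    theta0, theta1 = 0.0, 0.0
+    m = len(km)
+    for _ in range(iterations):
+        # Simultaneous update: both gradients use the current thetas
+        errors = [theta0 + theta1 * x - y for x, y in zip(km, price)]
+        theta0 -= learning_rate * sum(errors) / m
+        theta1 -= learning_rate * sum(e * x for e, x in zip(errors, km)) / m
+    return theta0, theta1
+```
+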
+### Why normalization?
+
+The two variables have very different scales: mileage ranges from ~22,000 to ~240,000 while prices range from ~3,600 to ~8,300. This makes the gradient for $\theta_1$ (each of whose error terms is multiplied by a mileage value) orders of magnitude larger than the gradient for $\theta_0$. No single learning rate can work well for both parameters simultaneously.
+
+Min-max normalization scales each variable to $[0, 1]$:
+
+$$x_{\text{norm}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$
+
+Without normalization, if you pick a learning rate small enough to prevent $\theta_1$ from overshooting, $\theta_0$ barely moves and needs millions of iterations. If you pick a larger learning rate so $\theta_0$ converges in a reasonable time, $\theta_1$ overshoots, oscillates, and diverges, eventually overflowing to inf/NaN in floating point.
+
+Normalization brings both gradients to the same scale, allowing gradient descent to converge efficiently with a single learning rate.
+
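+A minimal sketch of min-max normalization matching the formula above
+(hypothetical helper, not the project's actual code):
+
+```
+def normalize(values):
+    lo, hi = min(values), max(values)
+    return [(v - lo) / (hi - lo) for v in values]
+```
+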
+After training on normalized data, the thetas are converted back to work on raw values:
+
+$$\theta_1' = \theta_1 \cdot \frac{p_{\max} - p_{\min}}{km_{\max} - km_{\min}}$$
+
+$$\theta_0' = \theta_0 \cdot (p_{\max} - p_{\min}) + p_{\min} - \theta_1' \cdot km_{\min}$$
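+
+For example, a denormalization step implementing these two formulas
+(hypothetical names, not the project's actual code):
+
+```
+def denormalize(theta0, theta1, km, price):
+    km_min, km_max = min(km), max(km)
+    p_min, p_max = min(price), max(price)
+    t1 = theta1 * (p_max - p_min) / (km_max - km_min)
+    t0 = theta0 * (p_max - p_min) + p_min - t1 * km_min
+    return t0, t1
+```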