# Noisy Gaussian NN: Robustness to Label Noise

## Overview
This project explores how a simple 1-hidden-layer neural network handles increasing label noise when fitting a Gaussian curve.
We test three noise levels (σ = 0.05, 0.1, 0.2) to see when the network smooths effectively and when it starts to underfit.
## Dataset
- Synthetic dataset: Gaussian curve `y = exp(-x^2)`
- Noise added directly to the labels using `torch.normal` (see the sketch below)
- 200 evenly spaced `x` points in [-2, 2]
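A minimal data-generation sketch based on the description above; names like `y_noisy` are illustrative assumptions, not taken from the notebook:

```python
import torch

sigma = 0.1                                  # one of the tested noise levels (0.05, 0.1, 0.2)
x = torch.linspace(-2, 2, 200).unsqueeze(1)  # 200 evenly spaced points in [-2, 2]
y_clean = torch.exp(-x ** 2)                 # noise-free Gaussian curve y = exp(-x^2)
y_noisy = y_clean + torch.normal(mean=torch.zeros_like(y_clean), std=sigma)  # noisy labels
```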
## Model
- Architecture: 1 hidden layer, 50 neurons, `ReLU` activation
- Loss: `MSELoss`
- Optimizer: Adam (`lr=0.01`)
- Training: 2000 epochs (a full training sketch follows this list)
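A minimal training sketch matching the stated configuration, assuming `x` and `y_noisy` from the dataset sketch above; the notebook's actual code may differ in details:

```python
import torch
import torch.nn as nn

# 1-hidden-layer network: 1 input feature -> 50 ReLU neurons -> 1 output
model = nn.Sequential(
    nn.Linear(1, 50),
    nn.ReLU(),
    nn.Linear(50, 1),
)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Plain full-batch training loop for 2000 epochs
for epoch in range(2000):
    optimizer.zero_grad()
    loss = criterion(model(x), y_noisy)
    loss.backward()
    optimizer.step()
```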
## Results
- Low noise (σ = 0.05): the network fits the curve smoothly.
- Medium noise (σ = 0.1): slight underfitting.
- High noise (σ = 0.2): the curve shape is lost; noise dominates.
## Key Insight
More noise ≠ better regularization.
Too much noise can destroy the signal beyond recovery.
## Files
- `GaussianApproximation.ipynb`: full experiment, plots, and analysis
- `README.md`: this file
## License
MIT License: free to use, modify, and distribute with attribution.