Pre 1.5 Normal distribution
Course subject(s): Pre-knowledge Mathematics
Normal distribution
The normal distribution is the probability distribution most commonly used to model measurements. Its PDF is given by the following function:
\[ f_{\underline{y}}(y) = \frac{1}{\sqrt{2\pi\sigma^2_y}} \exp (-\frac{(y-\bar{y})^2}{2\sigma^2_y})\]
from which it can be seen that the function depends on the mean \( \bar{y} \) and variance \( \sigma_y^2\) of the random variable. The figure below shows the normal PDF with its mean and standard deviation \( \sigma_y\). The PDF is symmetric around the mean, and its width (or peakedness) is determined by the standard deviation: the inflection points of the PDF are located at a distance \( \sigma_y\) from the mean.
Notation: \( \underline{y} \sim N(\bar{y}, \sigma^2_y)\) means that random variable \(\underline{y}\) is normally distributed with mean \( \bar{y} \) and variance \( \sigma_y^2\).
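As a quick numerical check, the short Python sketch below evaluates the PDF directly from the formula above and compares it with SciPy's built-in normal PDF; the chosen mean and standard deviation are merely illustrative.

```python
import numpy as np
from scipy.stats import norm

def normal_pdf(y, y_mean, sigma_y):
    """Evaluate the normal PDF directly from the formula above."""
    return np.exp(-(y - y_mean) ** 2 / (2 * sigma_y ** 2)) / np.sqrt(2 * np.pi * sigma_y ** 2)

# Illustrative values: y ~ N(2, 0.5^2), evaluated on a small grid
y = np.linspace(0.0, 4.0, 5)
print(normal_pdf(y, y_mean=2.0, sigma_y=0.5))
print(norm.pdf(y, loc=2.0, scale=0.5))  # should give the same numbers
```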
An important property of normally distributed random variables is that if you apply a linear transformation \( \underline{t} = a \underline{y} + b \) (note that \(a\) and \(b\) are deterministic), the resulting random variable will also be normally distributed:
\[ \underline{y} \sim N( \bar{y}, \sigma^{2}_{y} ) \quad \longrightarrow \quad \underline{t} = a \underline{y} + b \sim N(a \bar{y}+b, a^{2} \sigma^{2}_{y} ) \]
It can be seen that the mean and variance are transformed as well; for the mean the transformation is straightforward. How the variance is transformed will not be explained at this point; it will be discussed in Module 5 of the course.
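To make this property concrete, the following Python sketch (a minimal simulation with arbitrary illustrative values for \( \bar{y}\), \( \sigma_y\), \(a\) and \(b\)) draws samples of \(\underline{y}\), applies the linear transformation, and checks that the sample mean and standard deviation of \(\underline{t}\) are close to \(a\bar{y}+b\) and \(|a|\sigma_y\), respectively.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative values: y ~ N(ybar, sigma_y^2); a and b are deterministic
ybar, sigma_y = 3.0, 2.0
a, b = 1.5, -4.0

y = rng.normal(loc=ybar, scale=sigma_y, size=1_000_000)
t = a * y + b

# The sample mean and standard deviation of t should be close to
# a*ybar + b = 0.5 and |a|*sigma_y = 3.0, respectively
print(t.mean(), t.std())
```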
From this, with \(a = 1\) and \(b = -\bar{y}\), it follows that:
\[\underline{y}-\bar{y} \sim N(0,\sigma^2_y)\]
The figure below shows some important probabilities for intervals defined by multiples of the standard deviation. For instance, the probability that the random variable \(\underline{y}\) deviates less than \(\sigma_y\) from its mean is given by:
\[P(\underline{y}-\bar{y} \in [-\sigma_y,\sigma_y]) = P(|\underline{y}-\bar{y} | <\sigma_y) = 0.683\]
Hence, there is a 68.3% chance that \(|\underline{y}-\bar{y}|<\sigma_y \).
Similarly, it follows that:
\[P(|\underline{y}-\bar{y} | <2\sigma_y) = 0.954\]
\[P(|\underline{y}-\bar{y} | <3\sigma_y) = 0.997\]
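These probabilities follow from the standard normal CDF \(\Phi\), since \(P(|\underline{y}-\bar{y}| < k\sigma_y) = 2\Phi(k) - 1\). The short Python sketch below (purely illustrative, using the error function from the standard library) reproduces the three values.

```python
from math import erf, sqrt

# P(|y - ybar| < k*sigma_y) = 2*Phi(k) - 1 = erf(k / sqrt(2)),
# with Phi the standard normal CDF
for k in (1, 2, 3):
    print(k, round(erf(k / sqrt(2)), 3))
# prints approximately 0.683, 0.954 and 0.997
```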
Not shown in the figure, but often used, are the 95% and 99% probabilities, for which the following holds:
\[P(|\underline{y}-\bar{y} | <1.96\sigma_y) = 0.95\]
\[P(|\underline{y}-\bar{y} | <2.58\sigma_y) = 0.99\]
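Conversely, the factors 1.96 and 2.58 are the \((1+p)/2\) quantiles of the standard normal distribution for \(p = 0.95\) and \(p = 0.99\). A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import norm

# The factor k with P(|y - ybar| < k*sigma_y) = p is the (1 + p)/2
# quantile of the standard normal distribution
for p in (0.95, 0.99):
    print(p, norm.ppf((1 + p) / 2))
# prints approximately 1.96 and 2.58 (more precisely 2.576)
```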
Observation Theory: Estimating the Unknown by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://ocw.tudelft.nl/courses/observation-theory-estimating-unknown.