5.1 Summary
5. How precise is the estimate?
Precision and Error Propagation
Measurement errors inevitably propagate into our estimation results, but how they do so depends on the type of error.
Random errors are stochastic and affect every observation. They determine the precision of the results; the covariance matrix \(Q_{yy}\) describes the uncertainty in the observations due to random errors.
The impact of systematic biases and outliers is not captured by \(Q_{yy}\): these errors are deterministic and will propagate into the estimated parameters (see the next module).
Propagation laws
If \( E\{\underline{v}\}\) and \(D\{\underline{v}\}=Q_{vv}\) are given and \(\underline{u} = L\underline{v}+ l \), the linear propagation laws state that:
\[ E\{\underline{u}\} = L\cdot E\{\underline{v}\}+ l \]
\[ D\{\underline{u}\} =Q_{uu}= L\cdot Q_{vv} \cdot L^T \]
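The two propagation laws can be applied directly with NumPy. The following is a minimal sketch with hypothetical numbers (the vectors \(E\{\underline{v}\}\), \(l\) and the matrices \(Q_{vv}\), \(L\) are illustrative choices, not from the course material):

```python
import numpy as np

# Hypothetical input: a 2-vector v with given mean and covariance.
E_v = np.array([1.0, 2.0])            # E{v}
Q_vv = np.array([[0.04, 0.01],
                 [0.01, 0.09]])       # D{v} = Q_vv

# Hypothetical linear transformation u = L v + l (3 outputs).
L = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
l = np.array([0.5, 0.0, -0.5])

# Linear propagation laws:
E_u = L @ E_v + l                     # E{u} = L E{v} + l
Q_uu = L @ Q_vv @ L.T                 # Q_uu = L Q_vv L^T

print(E_u)    # propagated mean
print(Q_uu)   # propagated covariance matrix (symmetric by construction)
```

Note that the shift \(l\) only affects the mean, not the covariance, since \(Q_{uu}\) describes deviations from the mean.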
Recall that the Best Linear Unbiased estimators are given by:
\[\begin{align}\underline{\hat{x}} &= (A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1}\underline{y}=L\underline{y}\\ \underline{\hat{y}}&= A\underline{\hat{x}}\\ \underline{\hat{e}}&= \underline{y}-\underline{\hat{y}}= (I_m - A (A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1})\underline{y} \end{align}\]
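These three estimators can be computed in a few lines of NumPy. The sketch below uses a hypothetical straight-line model with four observations and unequal variances (all numbers are illustrative assumptions):

```python
import numpy as np

# Hypothetical model: y = x0 + x1*t, observed at four epochs t.
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])    # design matrix A (m x n)
Q_yy = np.diag([0.25, 0.25, 1.0, 1.0])       # unequal observation variances
y = np.array([0.1, 1.2, 1.9, 3.2])           # hypothetical observations

W = np.linalg.inv(Q_yy)                      # weight matrix Q_yy^{-1}
N = A.T @ W @ A                              # normal matrix A^T Q_yy^{-1} A
x_hat = np.linalg.solve(N, A.T @ W @ y)      # BLUE: x̂ = N^{-1} A^T Q_yy^{-1} y
y_hat = A @ x_hat                            # adjusted observations ŷ = A x̂
e_hat = y - y_hat                            # estimated residuals ê = y - ŷ
```

A quick sanity check: the residuals satisfy \(A^T Q_{yy}^{-1}\underline{\hat{e}} = 0\), which follows directly from the normal equations.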
We can now derive the associated covariance matrices by application of the covariance propagation laws.
This yields for the expectation of the estimators:
\[\begin{align}E\{\underline{\hat{x}} \}&= (A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1} E\{\underline{y}\}\\ &=(A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1} Ax\\ &= x \\ E\{\underline{\hat{y}} \} &=A E\{\underline{\hat{x}} \} \\ &=Ax \\ E\{\underline{\hat{e}} \} &= E\{\underline{y}\}-E\{\underline{\hat{y}}\}\\ &= Ax-Ax\\ &= 0 \end{align}\]
These results confirm that all three estimators are unbiased.
For the covariance matrices of the estimators we obtain:
\[\begin{align}Q_{\hat{x}\hat{x}} &= L Q_{yy} L^T \\ &= (A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1} \cdot Q_{yy} \cdot Q_{yy}^{-1} A (A^T Q_{yy}^{-1} A)^{-1}\\ &=(A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1} A (A^T Q_{yy}^{-1} A)^{-1}\\ &= (A^T Q_{yy}^{-1} A)^{-1}\\ Q_{\hat{y}\hat{y}} &=AQ_{\hat{x}\hat{x}} A^T \\ &=A (A^T Q_{yy}^{-1} A)^{-1} A^T \\ Q_{\hat{e}\hat{e}} &= (I_m - A (A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1}) \cdot Q_{yy} \cdot (I_m - A (A^T Q_{yy}^{-1} A)^{-1}A^T Q_{yy}^{-1})^T \\ &= (Q_{yy} - A (A^T Q_{yy}^{-1} A)^{-1}A^T) (I_m -Q_{yy}^{-1}A (A^T Q_{yy}^{-1} A)^{-1}A^T)\\ &= Q_{yy} - Q_{\hat{y}\hat{y}}-Q_{\hat{y}\hat{y}}+Q_{\hat{y}\hat{y}}\\ &= Q_{yy} - Q_{\hat{y}\hat{y}}\end{align}\]
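The closed-form expressions above can be checked numerically against the general propagation law \(Q_{uu}=LQ_{vv}L^T\). The sketch below uses a hypothetical design matrix and observation covariance (illustrative values only) and verifies that the shortcut \(Q_{\hat{e}\hat{e}} = Q_{yy} - Q_{\hat{y}\hat{y}}\) agrees with the full propagation:

```python
import numpy as np

# Hypothetical 4-observation, 2-parameter model.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
Q_yy = np.diag([0.25, 0.25, 1.0, 1.0])

W = np.linalg.inv(Q_yy)
Q_xx = np.linalg.inv(A.T @ W @ A)        # Q_x̂x̂ = (A^T Q_yy^{-1} A)^{-1}
Q_yhat = A @ Q_xx @ A.T                  # Q_ŷŷ = A Q_x̂x̂ A^T
Q_ee = Q_yy - Q_yhat                     # Q_êê = Q_yy - Q_ŷŷ

# Full propagation through ê = (I - A Q_x̂x̂ A^T Q_yy^{-1}) y for comparison.
P = np.eye(4) - A @ Q_xx @ A.T @ W
Q_ee_full = P @ Q_yy @ P.T

print(np.allclose(Q_ee, Q_ee_full))      # the two routes agree
```

The shortcut also makes explicit that the adjustment reduces uncertainty: the diagonal of \(Q_{\hat{e}\hat{e}}\) is never larger than that of \(Q_{yy}\).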
Since the estimators are all linear functions of \(\underline{y}\) and we assume that \(\underline{y}\sim N(Ax,Q_{yy})\), we have that the estimators are also normally distributed with:
\[\begin{align}\underline{\hat{x}} &\sim N(x,Q_{\hat{x}\hat{x}})\\ \underline{\hat{y}}&\sim N(Ax,Q_{\hat{y}\hat{y}}) \\ \underline{\hat{e}}&\sim N(0,Q_{\hat{e}\hat{e}}) \end{align}\]
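The distributional result can be illustrated with a Monte Carlo sketch: draw many samples \(\underline{y}\sim N(Ax,Q_{yy})\), estimate \(\underline{\hat{x}}\) for each, and compare the empirical mean and covariance with \(x\) and \(Q_{\hat{x}\hat{x}}\). The model and true parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model: m = 4 observations, n = 2 parameters.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
x_true = np.array([1.0, 0.5])
Q_yy = np.diag([0.25, 0.25, 1.0, 1.0])

W = np.linalg.inv(Q_yy)
Q_xx = np.linalg.inv(A.T @ W @ A)
L = Q_xx @ A.T @ W                       # x̂ = L y

# Simulate many realizations y ~ N(Ax, Q_yy) and estimate x for each.
n_sim = 20000
ys = rng.multivariate_normal(A @ x_true, Q_yy, size=n_sim)
x_hats = ys @ L.T

print(x_hats.mean(axis=0))               # ≈ x_true (unbiased)
print(np.cov(x_hats.T))                  # ≈ Q_x̂x̂
```

With enough samples, the empirical mean approaches \(x\) and the empirical covariance approaches \(Q_{\hat{x}\hat{x}}\), consistent with \(\underline{\hat{x}}\sim N(x,Q_{\hat{x}\hat{x}})\).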
Observation Theory: Estimating the Unknown by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://ocw.tudelft.nl/courses/observation-theory-estimating-unknown.