4.1 Summary
Course subject(s)
4. Best Linear Unbiased Estimation (BLUE)
Estimate vs. Estimator:
The estimate of the unknowns in a vector \(x\) is always a function of the observation vector \(y\). In generic form, \[ \hat{x}=G(y), \] where \(G\) is some function. If \(G\) is linear, we can write the estimate as
\[ \hat{x}=Ly. \] Note that the observation vector \(y\) is one draw from the random vector of observables \(\underline{y}\). The observation vector is deterministic, whereas the observable vector is stochastic. Applying the estimation function \(G\) (or, in the linear case, the matrix \(L\)) to the observable vector yields a random vector \(\hat{\underline{x}}\): \[\hat{\underline{x}}=G(\underline{y}), \quad \text{or} \quad \hat{\underline{x}}=L\underline{y}.\] The random vector \(\hat{\underline{x}}\) is called the estimator of \(x\). The estimate \(\hat{x}\) is one draw (one realization) of the estimator \(\hat{\underline{x}}\).
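The distinction between the deterministic estimate and the stochastic estimator can be made concrete with a small numerical sketch. The design matrix `A`, the noise level, and the choice \(L=(A^TA)^{-1}A^T\) below are illustrative assumptions, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linear model y = A x + e (values chosen for illustration only).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
x_true = np.array([2.0, 0.5])        # "true" unknowns, known here only because we simulate
L = np.linalg.inv(A.T @ A) @ A.T     # one possible linear estimation matrix

# One draw of the observables gives one estimate: a deterministic vector.
y = A @ x_true + rng.normal(0.0, 0.1, size=3)
x_hat = L @ y
print(x_hat)

# Repeating the experiment shows the estimator L(y_underline) is itself random:
# each new draw of the observables produces a different realization.
estimates = np.array([L @ (A @ x_true + rng.normal(0.0, 0.1, size=3))
                      for _ in range(1000)])
print(estimates.mean(axis=0))        # scatters around x_true
```

Each row of `estimates` is one realization \(\hat{x}\); the ensemble of all rows approximates the distribution of the estimator \(\hat{\underline{x}}\).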
Weighted Least Squares Estimator:
Applying weighted least-squares (WLS) estimation to the observable vector \(\underline{y}\) gives the WLS estimator \[\hat{\underline{x}}_{\text{WLS}}= (A^TWA)^{-1}A^TW\underline{y}.\] The WLS estimator \(\hat{\underline{x}}_{\text{WLS}}\) is a random vector, and therefore has all the properties of a random vector: it follows a certain probability distribution, and it has an expectation (mean) and a dispersion (variance). For WLS these properties are:
1. The distribution of \(\hat{\underline{x}}_{\text{WLS}}\) depends on the distribution of the observables \(\underline{y}\). If the observables are normally distributed, the WLS estimator \(\hat{\underline{x}}_{\text{WLS}}\) is also normally distributed, since a linear function of normally distributed variables is again normally distributed.
2. The expectation of \(\hat{\underline{x}}_{\text{WLS}}\) depends on the expectation of the observable vector \(\underline{y}\). In linear models with \(E\{\underline{y}\}=Ax\), the expectation of the WLS estimator follows as \[E\{\hat{\underline{x}}_{\text{WLS}}\}= (A^TWA)^{-1}A^TW E\{\underline{y}\}=(A^TWA)^{-1}A^TWAx=x. \] This is an important property: it shows that the WLS estimator of a linear model is unbiased, because its expectation equals the true (but unknown) value of \(x\).
3. The dispersion of \(\hat{\underline{x}}_{\text{WLS}}\) depends on the dispersion of the observable vector \(\underline{y}\). If \(D\{\underline{y}\}=Q_{yy}\), the linear error propagation law gives the dispersion of the WLS estimator as \[D\{\hat{\underline{x}}_{\text{WLS}}\}=Q_{\hat{x}\hat{x}} =(A^TWA)^{-1}A^TWQ_{yy}WA(A^TWA)^{-1}. \] The matrix \(Q_{\hat{x}\hat{x}}\) describes the precision of the WLS estimator. A detailed explanation of the linear error propagation law, the derivation of \(Q_{\hat{x}\hat{x}}\), and its interpretation are given in the next module of this MOOC (module 5, "How precise is the estimate?").
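The three properties above can be checked numerically. The sketch below assumes a small hypothetical model (the matrices `A`, `Q_yy`, and `W` are illustrative choices, not from the text): it computes the WLS estimator, its analytical dispersion \(Q_{\hat{x}\hat{x}}\), and verifies unbiasedness and the dispersion formula by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model y = A x + e with E{y} = A x and D{y} = Q_yy.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
x_true = np.array([1.0, 0.5])
Q_yy = np.diag([0.04, 0.09, 0.04, 0.16])   # dispersion of the observables
W = np.diag([1.0, 2.0, 1.0, 0.5])          # an arbitrary weight matrix

N_inv = np.linalg.inv(A.T @ W @ A)
L = N_inv @ A.T @ W                         # WLS: x_hat = (A^T W A)^{-1} A^T W y

# Unbiasedness holds because L A = I, so E{x_hat} = L A x = x.
print(L @ A)

# Analytical dispersion via the error propagation law:
# Q_xhat = L Q_yy L^T = (A^T W A)^{-1} A^T W Q_yy W A (A^T W A)^{-1}.
Q_xhat = L @ Q_yy @ L.T

# Monte-Carlo check: the sample mean approaches x, the sample covariance Q_xhat.
chol = np.linalg.cholesky(Q_yy)
samples = np.array([L @ (A @ x_true + chol @ rng.normal(size=4))
                    for _ in range(20000)])
print(samples.mean(axis=0))                 # close to x_true  -> unbiased
print(np.cov(samples.T))                    # close to Q_xhat
```

The simulation only confirms what the formulas already state; its value is in seeing that the two expressions for \(Q_{\hat{x}\hat{x}}\) (the sandwich formula and \(LQ_{yy}L^T\)) agree.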
Estimators are thus random vectors with statistical properties, and it is natural to ask which estimator is optimal with respect to those properties. For example, it is desirable to have an unbiased estimator, and it is desirable to have the most precise (or best) estimator, i.e. the one with the smallest \(Q_{\hat{x}\hat{x}}\). The rest of this module discusses what the best linear unbiased estimator is.
Observation Theory: Estimating the Unknown by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://ocw.tudelft.nl/courses/observation-theory-estimating-unknown.