Pre 2.3 Linear Combinations, Rank and Inverse

Course subject(s) Pre-knowledge Mathematics

Linear Combination

Assume \(x_{1}, ~ x_{2}, \dots, x_{n}\) are \(n\) vectors of size \(m \times 1\) each, and \(\alpha_{1}, ~ \alpha_{2}, \dots, \alpha_{n}\) are \(n\) scalars. Then \( \sum_{i=1}^{n} \alpha_{i} x_{i} \) is called a linear combination of \( x_{1}, \ldots, x_{n} \). In other words, if we multiply vectors by scalars and add or subtract the results, the outcome is a linear combination of the vectors.
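
As a minimal numerical sketch (the vectors and scalars below are arbitrary example values, not from the course), a linear combination can be computed directly with NumPy:

```python
import numpy as np

# Two 3x1 example vectors and two example scalars
x1 = np.array([1.0, 0.0, 2.0])
x2 = np.array([0.0, 1.0, -1.0])
alpha1, alpha2 = 3.0, -2.0

# The linear combination sum_i alpha_i * x_i
combination = alpha1 * x1 + alpha2 * x2
print(combination)  # [ 3. -2.  8.]
```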


Linear Dependency

Vectors \( x_{i} \), \( i=1, \ldots, n \), are said to be linearly dependent if there exist scalars \( \alpha_{i} \), not all equal to zero, such that \( \alpha_{1}x_{1}+\alpha_{2}x_{2}+\ldots + \alpha_{n}x_{n} = 0 \). An alternative but equivalent definition is: a set of vectors is linearly dependent if at least one of the vectors can be written as a linear combination of the others. If not, we say that the vectors \( x_{i} \) are linearly independent. The vectors \( x_{1}, \ldots, x_{n} \) are linearly independent if and only if: \[ \alpha_{1}x_{1}+\alpha_{2}x_{2}+\ldots + \alpha_{n}x_{n} = 0 \Rightarrow \alpha_{1}= \ldots = \alpha_{n}=0. \]
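
A simple way to test this numerically (a sketch with made-up example vectors, where \( x_{3} \) is constructed as \( x_{1} + 2x_{2} \) so the set is dependent by design): stack the vectors as the columns of a matrix; the columns are linearly independent exactly when the rank of that matrix equals the number of columns.

```python
import numpy as np

# Example vectors; x3 = x1 + 2*x2, so the set is linearly dependent
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = x1 + 2.0 * x2

X = np.column_stack([x1, x2, x3])  # columns are the vectors

# Independent iff rank equals the number of columns
print(np.linalg.matrix_rank(X) < X.shape[1])  # True -> dependent
```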

Rank of a matrix

The maximum number of linearly independent column vectors of a matrix \(A\) is called the rank of \(A\), denoted \(\text{rank}(A)\). The maximum number of linearly independent column vectors of a matrix always equals the maximum number of linearly independent row vectors. A matrix \(A\) of size \(m \times n\) is said to have full row rank if \(\text{rank}(A)=m\) and full column rank if \(\text{rank}(A)=n\). The matrix is said to be rank deficient if \( \text{rank}(A) < \min(m,n) \).
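
For instance (a sketch with an assumed example matrix whose second row is twice the first, so only one row is independent), the rank and a rank-deficiency check can be computed as follows:

```python
import numpy as np

# 2x3 example matrix; row 2 = 2 * row 1, so rank(A) = 1
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

m, n = A.shape
r = np.linalg.matrix_rank(A)
print(r)               # 1
print(r < min(m, n))   # True -> rank deficient
```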


Singular matrices 

Square matrices with a rank deficiency are called singular. Conversely, if a square matrix of size \(m \times m\) has rank equal to \(m\), the matrix is called nonsingular.
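
A quick singularity check along these lines (again with an assumed example matrix, whose second column is twice the first and which is therefore singular):

```python
import numpy as np

# 2x2 example matrix with rank 1 < 2, hence singular
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

m = A.shape[0]
print(np.linalg.matrix_rank(A) == m)  # False -> singular
# np.linalg.inv(A) would raise LinAlgError("Singular matrix") here.
```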


Inverse of a matrix

Let \(A\) be a square, nonsingular \(m \times m\) matrix. Then there exists a unique matrix \(A^{-1}\), called the inverse of \(A\), such that \[AA^{-1}=A^{-1}A=I_{m}.\] It can be shown that \( (A^{-1})^{-1}=A\). If \(\alpha\) is a nonzero scalar, then \( (\alpha A)^{-1}=\frac{1}{\alpha}A^{-1} \). Note that singular matrices (i.e., matrices with a rank deficiency) are not invertible.
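
These identities can be verified numerically; the sketch below uses an assumed nonsingular \(2 \times 2\) example matrix and checks the defining property and the scalar rule up to floating-point rounding:

```python
import numpy as np

# Nonsingular example matrix (determinant = 1)
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

A_inv = np.linalg.inv(A)
I = np.eye(2)

# Defining identities A A^{-1} = A^{-1} A = I_m
print(np.allclose(A @ A_inv, I))  # True
print(np.allclose(A_inv @ A, I))  # True

# Scalar rule: (alpha * A)^{-1} = (1/alpha) * A^{-1}
alpha = 5.0
print(np.allclose(np.linalg.inv(alpha * A), A_inv / alpha))  # True
```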

Creative Commons License
Observation Theory: Estimating the Unknown by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://ocw.tudelft.nl/courses/observation-theory-estimating-unknown.