## Section 8.8 Linear Independence

### Subsection 8.8.1 Motivation and Analogy

We know from the theorem quoted in Subsection 8.6.2 on \(n\)th order linear homogeneous differential equations \(\LL y=0\) that the general solution is a linear combination of \(n\) linearly independent solutions.

What does *linearly independent* mean, and how do we find out whether a set of particular solutions *is* linearly independent?

Let's examine a close geometric analogy. Consider the set of three vectors in the plane
\[
\vec{v}_1=\hat{x}, \qquad \vec{v}_2=\hat{y}, \qquad \vec{v}_3=3\hat{x}-2\hat{y}.
\]

Notice that the third vector is a linear combination of the first two:
\[
\vec{v}_3 = 3\vec{v}_1 - 2\vec{v}_2,
\]

or

\[
3\vec{v}_1 - 2\vec{v}_2 - \vec{v}_3 = 0.
\]

We say that these three vectors are *linearly dependent* (alternatively, *NOT linearly independent*). Geometrically, this is equivalent to the statement that these three vectors lie in a two-dimensional plane.
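As a concrete sketch (the specific components are taken from the basis discussion in Subsection 8.8.4, where \(\vec{v}_1=\hat{x}\text{,}\) \(\vec{v}_2=\hat{y}\text{,}\) and \(\vec{v}_3=3\hat{x}-2\hat{y}\)), we can verify the dependence relation numerically:

```python
# Numeric check of the dependence relation v3 = 3*v1 - 2*v2 for the
# plane vectors v1 = x-hat, v2 = y-hat, v3 = 3 x-hat - 2 y-hat.
v1 = (1.0, 0.0)   # x-hat
v2 = (0.0, 1.0)   # y-hat
v3 = (3.0, -2.0)  # 3 x-hat - 2 y-hat

combo = tuple(3*a - 2*b for a, b in zip(v1, v2))
print(combo == v3)  # True: v3 is a linear combination of v1 and v2,
                    # so the set {v1, v2, v3} is linearly dependent
```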

Why is linear independence important? If we wanted to expand another vector \(\vec{v}_4\text{,}\) it is sufficient to expand it in terms of the linearly independent vectors \(\vec{v}_1\) and \(\vec{v}_2\text{:}\)
\[
\vec{v}_4 = D_1\,\vec{v}_1 + D_2\,\vec{v}_2,
\]

and the coefficients \(D_1\) and \(D_2\) are unique. We say that \(\vec{v}_1\) and \(\vec{v}_2\) form a *basis* for the two-dimensional vector space. We do not need to include the vector \(\vec{v}_3\) in the expansion,
\[
\vec{v}_4 = D_1\,\vec{v}_1 + D_2\,\vec{v}_2 + D_3\,\vec{v}_3,
\]
but if we did include it, the coefficients \(D_1\text{,}\) \(D_2\text{,}\) and \(D_3\) would not be uniquely specified; many combinations of \(D\)s would work.
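To see the uniqueness of the coefficients concretely, here is a small sketch (the sample vector \(\vec{v}_4=(5,7)\) is a hypothetical choice, not from the text) that expands \(\vec{v}_4\) in two different linearly independent bases by solving the \(2\times 2\) system with Cramer's rule:

```python
# Sketch: expand a sample vector v4 = (5, 7) in two different bases,
# the orthonormal basis {v1, v2} and the non-orthogonal basis {v1, v3}.
# In each case the expansion coefficients are unique because the basis
# vectors are linearly independent (nonzero determinant).
v1 = (1.0, 0.0)    # x-hat
v2 = (0.0, 1.0)    # y-hat
v3 = (3.0, -2.0)   # 3 x-hat - 2 y-hat
v4 = (5.0, 7.0)    # hypothetical vector to expand

def expand(b1, b2, v):
    """Solve D1*b1 + D2*b2 = v by Cramer's rule."""
    det = b1[0]*b2[1] - b2[0]*b1[1]
    D1 = (v[0]*b2[1] - b2[0]*v[1]) / det
    D2 = (b1[0]*v[1] - v[0]*b1[1]) / det
    return D1, D2

print(expand(v1, v2, v4))  # (5.0, 7.0): coefficients in the orthonormal basis
print(expand(v1, v3, v4))  # (15.5, -3.5): different, but equally unique
```

Either basis works; the orthonormal one simply makes the algebra (and the answer) easier to read, which foreshadows the remark on orthonormality in Subsection 8.8.4.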

We will now extend this idea of linear independence from vectors that are arrows in space to functions that are solutions of a linear ODE. There is deep mathematics underlying the analogy: the solutions of a linear ODE themselves form a vector space (see Section 5.1).

### Subsection 8.8.2 Linear Independence of Functions

*Definition:* A set of \(n\) functions \(\{y_1,\dots ,y_n\}\) on an interval \(I\) is *linearly dependent* if there exist constants \(C_1, C_2, \dots , C_n\text{,}\) not all zero, such that
\[
C_1 y_1(x) + C_2 y_2(x) + \dots + C_n y_n(x) = 0
\]
for all \(x\) in \(I\text{.}\)

Otherwise the functions are *linearly independent*.
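As an illustrative example (not from the text), the set \(\{\sin^2 x, \cos^2 x, 1\}\) is linearly dependent on any interval, since \(\sin^2 x + \cos^2 x - 1 = 0\) for all \(x\text{.}\) A quick numerical sanity check:

```python
import math

# Sketch: the constants C = (1, 1, -1), not all zero, annihilate the
# combination C1*sin^2(x) + C2*cos^2(x) + C3*1 at every x, so the set
# {sin^2 x, cos^2 x, 1} is linearly dependent.
C = (1.0, 1.0, -1.0)
funcs = (lambda x: math.sin(x)**2,
         lambda x: math.cos(x)**2,
         lambda x: 1.0)

# The combination vanishes (to roundoff) at many sample points.
ok = all(abs(sum(c*f(x) for c, f in zip(C, funcs))) < 1e-12
         for x in [0.1*k for k in range(-30, 31)])
print(ok)  # True
```

Of course, sampling points cannot *prove* dependence; it only illustrates the definition. The Wronskian test below gives a systematic criterion for solutions of a linear ODE.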

### Subsection 8.8.3 Testing for Linear Independence: Wronskians

It is cumbersome to use the definition above to find out whether a set of functions is linearly independent. If the functions are all solutions of the same linear ODE, then there is a much quicker method, using a mathematical object called a Wronskian.

*Definition:* If each of the \(n\) functions \(\{y_1,\dots ,y_n\}\) on an interval \(I\) has \(n-1\) derivatives, then the determinant
\[
W(y_1,\dots ,y_n)
= \begin{vmatrix}
y_1 & y_2 & \cdots & y_n \\
y_1' & y_2' & \cdots & y_n' \\
\vdots & \vdots & & \vdots \\
y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)}
\end{vmatrix}
\]
is called the *Wronskian* of the set of functions.

*Theorem:* If \(\{y_1,\dots ,y_n\}\) are solutions of \(\LL(y)=0\) on \(I\text{,}\) then they are linearly independent \(\Longleftrightarrow\) \(W(y_1,\dots ,y_n)\) is *not* identically zero on \(I\text{.}\)

*Note:* This theorem is only valid if the functions \(\{y_1,\dots ,y_n\}\) are all solutions of the *same* \(n^{th}\) order linear ODE.
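Here is a small sketch of the Wronskian test in action, using the hypothetical example \(y_1 = e^{x}\) and \(y_2 = e^{2x}\text{,}\) which are both solutions of the same linear ODE \(y'' - 3y' + 2y = 0\) (this example is not from the text):

```python
import math

# Sketch: 2x2 Wronskian  W = | y1   y2  |
#                            | y1'  y2' |
# for y1 = e^x and y2 = e^(2x), solutions of y'' - 3y' + 2y = 0.
def wronskian2(y1, dy1, y2, dy2, x):
    """Evaluate the 2x2 Wronskian determinant at x."""
    return y1(x)*dy2(x) - y2(x)*dy1(x)

y1, dy1 = math.exp, math.exp                # y1 = e^x, y1' = e^x
y2  = lambda x: math.exp(2*x)               # y2 = e^(2x)
dy2 = lambda x: 2*math.exp(2*x)             # y2' = 2 e^(2x)

x0 = 0.5
W = wronskian2(y1, dy1, y2, dy2, x0)
print(math.isclose(W, math.exp(3*x0)))  # True: W = e^(3x) analytically,
                                        # which is never zero, so y1 and y2
                                        # are linearly independent
```

Since \(W = e^{x}\cdot 2e^{2x} - e^{2x}\cdot e^{x} = e^{3x}\) is nowhere zero, the theorem says \(e^{x}\) and \(e^{2x}\) are linearly independent, and the general solution of the ODE is \(y = C_1 e^{x} + C_2 e^{2x}\text{.}\)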

### Subsection 8.8.4 A Note on Orthonormality

Just as with vectors that are arrows in space, it is often convenient, but not necessary, to choose the linearly independent basis functions to be orthonormal, i.e. orthogonal and normalized. In the motivation section above, I chose to focus on \(\vec{v}_1=\hat{x}\) and \(\vec{v}_2=\hat{y}\) as the basis because that is the conventional orthonormal basis, but everything I said would have worked perfectly well if I had chosen \(\vec{v}_1=\hat{x}\) and \(\vec{v}_3=3\hat{x}-2\hat{y}\) as the basis instead. It just would have been a little harder for you to follow the algebra. In the same way, it will often simplify the algebra for us if we choose the linearly independent basis functions to be orthonormal, but we'll need to generalize the idea of the dot product to these functions. See Section 9.3 of the book.
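As a preview of that generalized dot product (a common choice, anticipating Section 9.3, is the integral inner product \(\langle f,g\rangle = \int_a^b f(x)\,g(x)\,dx\)), here is a numerical sketch, using the hypothetical example functions \(\sin x\) and \(\cos x\) on \([0, 2\pi]\text{,}\) showing that they are orthogonal but not normalized in this sense:

```python
import math

# Sketch (anticipating Section 9.3): an inner product of functions,
#     <f, g> = integral from a to b of f(x) g(x) dx,
# approximated with the midpoint rule.
def inner(f, g, a=0.0, b=2*math.pi, n=10_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5)*h) * g(a + (k + 0.5)*h) for k in range(n))

# sin and cos are orthogonal over a full period, but not normalized:
print(abs(inner(math.sin, math.cos)) < 1e-9)                           # True
print(math.isclose(inner(math.sin, math.sin), math.pi, rel_tol=1e-6))  # True
```

The second line shows \(\langle \sin,\sin\rangle = \pi \neq 1\text{,}\) so \(\sin x\) would need to be rescaled by \(1/\sqrt{\pi}\) to make the basis orthonormal on this interval.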