A Guide to Diagonalizing a Matrix

Khadija Ech
5 min read · Jan 11, 2021


In mathematics, diagonalization is a process in linear algebra that makes it possible to simplify the description of certain endomorphisms of a vector space, in particular of certain square matrices. It consists of finding and exhibiting a basis of the vector space made up of eigenvectors, when such a basis exists. In finite dimension, diagonalization amounts to describing the endomorphism by a diagonal matrix.

A square matrix M is called diagonalizable (or non-defective) if it is similar to a diagonal matrix, i.e., if there exist an invertible matrix S and a diagonal matrix D such that S^-1MS = D, or equivalently M = SDS^-1. (Such S and D are not unique.) For a finite-dimensional vector space V, a linear map T : V → V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T. These definitions are equivalent: if T has a matrix representation M = SDS^-1 as above, then the column vectors of S form a basis of eigenvectors of T, and the diagonal entries of D are the corresponding eigenvalues of T; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding such S and D.
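
If you want to see this definition in action numerically, here is a minimal Python sketch (using a made-up 2×2 matrix, just for illustration): np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors, which play the roles of D and S.

```python
# A quick numerical illustration of the definition M = S D S^{-1}.
# The 2x2 matrix below is a made-up example, not one from this article.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, S = np.linalg.eig(M)
D = np.diag(eigenvalues)

# If M is diagonalizable, S is invertible and S^{-1} M S = D (up to rounding).
print(np.allclose(np.linalg.inv(S) @ M @ S, D))  # True
```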

What is the importance of eigenvalues/eigenvectors?

Eigenvectors make understanding linear transformations easy. They are the “axes” (directions) along which a linear transformation acts simply by “stretching/compressing” and/or “flipping”; the eigenvalues give you the factors by which this stretching or compression occurs.

The more directions along which you understand the behavior of a linear transformation, the easier it is to understand the transformation as a whole; so you want to have as many linearly independent eigenvectors as possible associated with a single linear transformation.
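
Here is a tiny made-up 2×2 example of this stretching picture, just to make it concrete:

```python
# Tiny illustration of "stretching along an eigenvector" (made-up 2x2 example).
import numpy as np

M = np.array([[3.0, 0.0],
              [0.0, 0.5]])   # stretches the x-direction by 3, compresses y by 0.5

v = np.array([1.0, 0.0])     # an eigenvector of M (the x-axis direction)
print(M @ v)                 # [3. 0.] == 3 * v, so the eigenvalue is the stretch factor 3
```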

Here I will explain how to diagonalize a matrix. I only describe the procedure of diagonalization; no justification will be given.

Let M be the n×n matrix that you want to diagonalize (if possible).

Step 1: Find the eigenvalues λi of the matrix M.

To determine the eigenvalues and the associated eigenvectors and eigenspaces, we compute the characteristic polynomial of the matrix.

The characteristic polynomial is

p(X) = det(M − X·In),

where X is the indeterminate and In is the identity matrix of Mn(k).

The eigenvalues λi are the roots of this characteristic polynomial; there are therefore at most n of them, each with some algebraic multiplicity mi.

Example 1:

Now let us examine these steps with an example.
Let us consider the following 3×3 matrix:

A =
[  4  −3  −3 ]
[  3  −2  −3 ]
[ −1   1   2 ]

The characteristic polynomial p(t) of A is

p(t) = det(A − tI) =
| 4−t   −3    −3  |
|  3   −2−t   −3  |
| −1     1    2−t |

Using the cofactor expansion, we get p(t) = −(t−1)^2(t−2).

From the characteristic polynomial, we see that the eigenvalues are λ=1 with algebraic multiplicity 2 and λ=2 with algebraic multiplicity 1.
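
If you like to check such computations by machine, a short SymPy sketch (one possible way, using the matrix A above) reproduces the characteristic polynomial and the eigenvalues with their algebraic multiplicities:

```python
# A small SymPy check of Step 1 for the matrix A of Example 1.
from sympy import Matrix, eye, symbols, factor

t = symbols('t')
A = Matrix([[ 4, -3, -3],
            [ 3, -2, -3],
            [-1,  1,  2]])

# Characteristic polynomial p(t) = det(A - t*I).
p = (A - t * eye(3)).det()
print(factor(p))       # should factor as -(t - 1)**2 * (t - 2)

# Eigenvalues with their algebraic multiplicities.
print(A.eigenvals())   # {1: 2, 2: 1}
```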

Step 2: Find the eigenspaces

We need to determine, for each eigenvalue, the eigenspace associated with it:

Eλ = ker(M − λ·In) = { v : M v = λ v }

Let's take the same matrix A from Example 1. Let us first find the eigenspace E1 corresponding to the eigenvalue λ=1.
By definition, E1 is the null space of the matrix

A − I =
[  3  −3  −3 ]
[  3  −3  −3 ]
[ −1   1   1 ]

which reduces to

[ 1  −1  −1 ]
[ 0   0   0 ]
[ 0   0   0 ]

by elementary row operations.
Hence if (A−I)X = 0 for X = [x1, x2, x3] ∈ R3, we have

x1 = x2 + x3.

Therefore, we have

X = x2 [1, 1, 0] + x3 [1, 0, 1],

so the null space is N(A−I) = span{ [1, 1, 0], [1, 0, 1] }, where N(A−I) denotes the null space of the matrix A−I (which is ker(A−I)).

From this, we see that the set

{ [1, 1, 0], [1, 0, 1] }

is a basis for the eigenspace E1.
Thus, the dimension of E1, which is the geometric multiplicity of λ=1, is 2.

Similarly, we find a basis of the eigenspace E2=ker(A−2I) for the eigenvalue λ=2.
We have

A − 2I =
[  2  −3  −3 ]
[  3  −4  −3 ]
[ −1   1   0 ]

which reduces to

[ 1  0  3 ]
[ 0  1  3 ]
[ 0  0  0 ]

by elementary row operations.
Then if (A−2I)X = 0 for X = [x1, x2, x3] ∈ R3, we have

x1 = −3x3 and x2 = −3x3.

Therefore we obtain

X = x3 [−3, −3, 1].

From this we see that the set

{ [−3, −3, 1] }

is a basis for the eigenspace E2, and the geometric multiplicity of λ=2 is 1.

Since for both eigenvalues, the geometric multiplicity is equal to the algebraic multiplicity, the matrix A is not defective, and hence diagonalizable.
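
The eigenspace computations can also be checked with SymPy's nullspace method; the sketch below (again, just one possible way) recovers bases of E1 and E2 and hence the geometric multiplicities:

```python
# SymPy check of Step 2: eigenspaces as null spaces of A - lambda*I.
from sympy import Matrix, eye

A = Matrix([[ 4, -3, -3],
            [ 3, -2, -3],
            [-1,  1,  2]])

E1_basis = (A - 1 * eye(3)).nullspace()   # ker(A - I)
E2_basis = (A - 2 * eye(3)).nullspace()   # ker(A - 2I)

print(len(E1_basis), E1_basis)   # 2 basis vectors -> geometric multiplicity 2
print(len(E2_basis), E2_basis)   # 1 basis vector  -> geometric multiplicity 1
```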

Step 3: Determine linearly independent eigenvectors

From Step 2, the vectors

v1 = [1, 1, 0], v2 = [1, 0, 1], v3 = [−3, −3, 1]

are linearly independent eigenvectors.

Step 4: Define the invertible matrix S

Define the matrix S = [v1 v2 v3]. Thus we have

S =
[ 1  1  −3 ]
[ 1  0  −3 ]
[ 0  1   1 ]

and the matrix S is nonsingular (since its column vectors are linearly independent).
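
A quick SymPy sketch (optional, just a sanity check) confirms that the matrix S built from v1, v2, v3 has nonzero determinant and is therefore invertible:

```python
# SymPy check of Steps 3-4: the eigenvectors stacked as columns give an invertible S.
from sympy import Matrix

v1 = Matrix([1, 1, 0])
v2 = Matrix([1, 0, 1])
v3 = Matrix([-3, -3, 1])

S = Matrix.hstack(v1, v2, v3)
print(S)
print(S.det())   # -1, which is nonzero, so S is invertible
```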

Step 5: Define the diagonal matrix D

Define the diagonal matrix D whose diagonal entries are the eigenvalues, in the order of the corresponding eigenvectors in S (remember that we found λ=1 with algebraic multiplicity 2 and λ=2 with algebraic multiplicity 1):

D =
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  2 ]

Note that the (1,1)-entry of D is 1 because the first column vector v1 = [1, 1, 0] of S is in the eigenspace E1, that is, v1 is an eigenvector corresponding to the eigenvalue λ=1.

Similarly, the (2,2)-entry of D is 1 because the second column v2=[1,0,1] of S is in E1.

The (3,3)-entry of D is 2 because the third column vector v3=[−3,−3,1] of S is in E2.

(The order in which you arrange the vectors v1, v2, v3 to form S does not matter, but once S is fixed, the order of the diagonal entries of D is determined by S, that is, by the order of the eigenvectors in S.)

Step 6: Finish the diagonalization

Finally, we can diagonalize the matrix A as

S^−1 A S = D, or equivalently A = S D S^−1,

where

S =
[ 1  1  −3 ]
[ 1  0  −3 ]
[ 0  1   1 ]

and

D =
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  2 ]

(Here you don’t have to find the inverse matrix S^−1 unless you are asked to do so.)
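
To close the loop, here is a short SymPy sketch (a machine check of the hand computation above) verifying that S^−1AS = D, or equivalently A = SDS^−1:

```python
# SymPy check of Step 6: A = S D S^{-1} (equivalently S^{-1} A S = D).
from sympy import Matrix, diag

A = Matrix([[ 4, -3, -3],
            [ 3, -2, -3],
            [-1,  1,  2]])
S = Matrix([[1, 1, -3],
            [1, 0, -3],
            [0, 1,  1]])
D = diag(1, 1, 2)

print(S.inv() * A * S == D)   # True
print(S * D * S.inv() == A)   # True
```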

REFERENCES:

[1] https://en.wikipedia.org/wiki/Diagonalizable_matrix

[2] https://yutsumura.com/diagonalize-a-2-by-2-matrix-a-and-calculate-the-power-a100/

[3] https://math.stackexchange.com/questions/23312/what-is-the-importance-of-eigenvalues-eigenvectors


Written by Khadija Ech

Studied Mathematics. Interested in algorithms, probability theory, quantum computers, and machine learning. Python user.
