Section 3.1 Lie Groups and Lie Algebras

Subsection 3.1.1 Lie Groups

Now that we have studied several examples of Lie groups, it's time for a definition. A Lie group is a group \(G\) that is also a smooth manifold. In other words, one can use coordinates to describe the group elements, and one can differentiate with respect to these coordinates. Furthermore, these structures must be compatible, in the sense that the group operations

\begin{equation} \begin{aligned} (P,Q) \amp\longmapsto PQ ,\\ P \amp\longmapsto P^{-1} , \end{aligned}\tag{3.1.1} \end{equation}

are smooth maps on \(G\text{.}\)
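
To make this concrete, here is a minimal numerical sketch (in Python with NumPy; an illustration, not part of the text). The rotation group SO(2) consists of matrices parameterized smoothly by an angle, and both group operations are smooth functions of that coordinate:

import numpy as np

def rotation(theta):
    """An element of SO(2), parameterized smoothly by the coordinate theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# The group operations are smooth in the coordinate:
# multiplication adds the angles, and inversion negates the angle.
P, Q = rotation(0.3), rotation(0.5)
assert np.allclose(P @ Q, rotation(0.3 + 0.5))        # (P,Q) -> PQ
assert np.allclose(np.linalg.inv(P), rotation(-0.3))  # P -> P^{-1}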

We are interested here only in the structure of Lie groups near the identity element. We will therefore usually assume that a Lie group is connected, or equivalently that we are studying the connected component containing the identity. All finite-dimensional Lie groups can be regarded locally as matrix groups, and we will usually do so.

A representation of a Lie group \(G\) on a vector space \(V\) is a group homomorphism

\begin{equation} \rho : G \longrightarrow \textrm{End}(V)\tag{3.1.2} \end{equation}

that takes elements of \(G\) to linear maps (“endomorphisms”) on \(V\text{.}\) Thus, a representation of \(G\) is an explicit identification of \(G\) with certain matrices acting on \(V\text{.}\) For this reason, the term “representation” is often used to refer to the matrices \(\rho(G)\text{,}\) and occasionally used to refer to \(V\text{.}\)
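
For example, the unit complex numbers form a group under multiplication, and each such number acts on the plane by rotation. Here is a minimal sketch (Python with NumPy; an illustration, not part of the text) of the resulting representation on \(\RR^2\text{,}\) checking the homomorphism property:

import numpy as np

def rho(z):
    """Represent the unit complex number z = e^{i theta}
    as a rotation of the real vector space R^2."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

g, h = np.exp(0.4j), np.exp(1.1j)
# Homomorphism property: rho(gh) = rho(g) rho(h).
assert np.allclose(rho(g * h), rho(g) @ rho(h))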

Subsection 3.1.2 Lie Algebras I

The simplest definition of a Lie algebra is that it is the tangent space at the identity of a Lie group. This tangent space is a real vector space; thus, Lie algebras are vector spaces. Since Lie groups are locally matrix groups, we can always regard the elements of Lie algebras as matrices. However, all we have so far is the vector space structure, which allows us to add, but not (yet) multiply, Lie algebra elements.
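
For example, differentiating the curve of plane rotations at the identity produces a tangent vector, which is an element of the Lie algebra of SO(2). A minimal numerical sketch (Python with NumPy; an illustration, not part of the text):

import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Differentiate the curve at the identity (theta = 0) by a central
# difference; the result is a Lie algebra element (a matrix).
h = 1e-6
A = (rotation(h) - rotation(-h)) / (2 * h)
assert np.allclose(A, [[0, -1], [1, 0]], atol=1e-8)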

From this point of view, a representation of a Lie algebra on a vector space is simply the result of differentiating a representation of the corresponding Lie group.

Subsection 3.1.3 Matrix Exponentiation

Given a curve through the identity element of a Lie group, the corresponding Lie algebra element is just the tangent vector to this curve at the identity. How do we go the other way?

The key idea is that there are nice curves through the identity, called 1-parameter families of group elements, with the property that

\begin{align} M:\amp\RR\longrightarrow G ,\notag\\ M(0) \amp= \one ,\tag{3.1.3}\\ M(\alpha+\beta) \amp= M(\alpha)M(\beta) .\tag{3.1.4} \end{align}

In other words, \(M\) is a group homomorphism from the additive group of the real numbers into \(G\text{.}\) Such curves are in 1–1 correspondence with the tangent vectors at the identity. Given a tangent vector \(A\in\gg\text{,}\) how do we find the corresponding 1-parameter family \(M(\alpha)\text{,}\) that is, the one that satisfies

\begin{equation} A = \dot{M} = M'(0) ?\tag{3.1.5} \end{equation}

Differentiating (3.1.4) with respect to \(\beta\) and setting \(\beta=0\text{,}\) then using (3.1.5), yields the differential equation

\begin{equation} M'(\alpha) = M(\alpha) A\tag{3.1.6} \end{equation}

whose solution is

\begin{equation} M(\alpha) = \exp(A\alpha) .\tag{3.1.7} \end{equation}
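
Taking matrix exponentiation (defined below) as given, we can check numerically that (3.1.7) has the required properties. A minimal sketch (Python, using SciPy's expm; an illustration, not part of the text):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])  # a sample tangent vector
M = lambda alpha: expm(A * alpha)

# One-parameter family: M(0) = 1 and M(a + b) = M(a) M(b).
assert np.allclose(M(0.0), np.eye(2))
assert np.allclose(M(0.7 + 0.2), M(0.7) @ M(0.2))

# Differential equation (3.1.6): M'(alpha) = M(alpha) A.
h, a = 1e-6, 0.3
Mprime = (M(a + h) - M(a - h)) / (2 * h)  # central difference
assert np.allclose(Mprime, M(a) @ A, atol=1e-8)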

But what does it mean to exponentiate a matrix?

We can define the matrix exponential by its power series, so that

\begin{equation} \exp(A\alpha) = \one + A\alpha + \frac12 A^2\alpha^2 + \frac16 A^3\alpha^3 + \cdots\tag{3.1.8} \end{equation}

which turns out to converge for any \(A\text{.}\) An important special case is when \(A^2=-\one\text{,}\) in which case the series splits into two sums, only one of which involves \(A\text{.}\) Explicitly, we have

\begin{equation} A^2=-\one \Longrightarrow \exp(A\alpha)=\one\,\cos\alpha+A\,\sin\alpha\tag{3.1.9} \end{equation}

where as usual \(\one\) denotes the identity matrix. If instead \(A^2=\one\text{,}\) only the signs change, and we have

\begin{equation} A^2=+\one \Longrightarrow \exp(A\alpha)=\one\,\cosh\alpha+A\,\sinh\alpha .\tag{3.1.10} \end{equation}

Finally, if \(A^2=0\text{,}\) we have

\begin{equation} A^2=0 \Longrightarrow \exp(A\alpha) = \one + A\alpha .\tag{3.1.11} \end{equation}
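
These closed forms are easy to verify numerically. A minimal sketch (Python with SciPy), using standard \(2\times2\) matrices satisfying each condition:

import numpy as np
from scipy.linalg import expm

alpha = 0.8
I = np.eye(2)

# A^2 = -1: exp(A alpha) = cos(alpha) 1 + sin(alpha) A  (rotation).
A = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(expm(A * alpha), np.cos(alpha) * I + np.sin(alpha) * A)

# A^2 = +1: exp(A alpha) = cosh(alpha) 1 + sinh(alpha) A  (boost).
B = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(expm(B * alpha), np.cosh(alpha) * I + np.sinh(alpha) * B)

# A^2 = 0: the power series terminates after the linear term.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(expm(N * alpha), I + N * alpha)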

In practice, even if \(A\) itself does not satisfy any of these conditions, it can usually be broken up into blocks that do.
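
For example, a block-diagonal matrix exponentiates block by block, so each block can be handled with the closed forms above. A minimal sketch (Python, using scipy.linalg.block_diag):

import numpy as np
from scipy.linalg import expm, block_diag

alpha = 0.5
R = np.array([[0.0, -1.0], [1.0, 0.0]])  # R^2 = -1
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # N^2 = 0
A = block_diag(R, N)

# Exponentiate each block separately using (3.1.9) and (3.1.11).
expected = block_diag(np.cos(alpha) * np.eye(2) + np.sin(alpha) * R,
                      np.eye(2) + N * alpha)
assert np.allclose(expm(A * alpha), expected)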

An important property of matrix exponentiation is that

\begin{equation} \det(e^A) = e^{\tr A}\tag{3.1.12} \end{equation}

which holds for any square matrix and is a special case of Jacobi's formula. This result is obvious if \(A\) is diagonal, using elementary properties of the exponential function. This argument extends to the case where \(A\) is diagonalizable, so that \(A=PDP^{-1}\text{,}\) with \(D\) diagonal; the diagonal elements of \(D\) are the eigenvalues of \(A\text{.}\) We now have

\begin{align} \det(e^A) \amp= \det(e^{PDP^{-1}}) = \det(Pe^DP^{-1}) = \det(e^D)\notag\\ \amp= e^{\tr D} = e^{\tr(PDP^{-1})} = e^{\tr A} .\tag{3.1.13} \end{align}

The general case follows by continuity: diagonalizable matrices are dense among all (complex) square matrices, and both sides of (3.1.12) depend continuously on \(A\text{.}\)
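
Both cases are easy to test numerically. A minimal sketch (Python with SciPy), including a non-diagonalizable Jordan block:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # a generic square matrix
assert np.allclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))

# The identity also holds for non-diagonalizable matrices,
# such as a nontrivial Jordan block.
J = np.array([[2.0, 1.0], [0.0, 2.0]])
assert np.allclose(np.linalg.det(expm(J)), np.exp(np.trace(J)))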

Subsection 3.1.4 Lie Algebras II

Consider the action of \(G\) on itself defined by

\begin{equation} P \longmapsto MPM^{-1} ,\tag{3.1.14} \end{equation}

where \(M,P\in G\text{.}\) If \(P=P(\beta)\) is a 1-parameter family, then we can differentiate this action with respect to the parameter, resulting in an action of \(M\) on \(\dot{P}=P'(0)\text{.}\) Thus, there is an action of \(G\) on its Lie algebra \(\gg\text{,}\) given by

\begin{equation} X \longmapsto MXM^{-1} ,\tag{3.1.15} \end{equation}

where now \(X\in\gg\text{.}\) If we now think of \(M=M(\alpha)\) in turn as a 1-parameter family, we can again differentiate, obtaining an action of \(\gg\) on itself. But

\begin{equation} \frac{d}{d\alpha}\left( M(\alpha)XM(\alpha)^{-1} \right) = \frac{dM}{d\alpha}XM(\alpha)^{-1} - M(\alpha)XM(\alpha)^{-1}\frac{dM}{d\alpha}M(\alpha)^{-1} ,\tag{3.1.16} \end{equation}

where we have used \(\left(M^{-1}\right)' = -M^{-1}M'M^{-1}\text{,}\) obtained by differentiating \(MM^{-1}=\one\text{.}\) Evaluating at \(\alpha=0\) and using (3.1.3) and (3.1.5), we obtain

\begin{equation} \left( M(\alpha)XM(\alpha)^{-1} \right)^\bullet = AX - XA = [A,X]\tag{3.1.17} \end{equation}

Thus, a Lie algebra always acts on itself by commutators. We can use this structure to define Lie algebras directly, without starting with a Lie group. A Lie algebra is a vector space \(V\text{,}\) together with an operation

\begin{equation} \begin{aligned} V\times V \amp\longrightarrow V \\ (X,Y) \amp\longmapsto [X,Y] \end{aligned}\tag{3.1.18} \end{equation}

where the Lie bracket \([X,Y]\) is bilinear, antisymmetric, and satisfies the Jacobi identity

\begin{equation} \bigl[X,[Y,Z]\bigr] + \bigl[Y,[Z,X]\bigr] + \bigl[Z,[X,Y]\bigr] = 0\tag{3.1.19} \end{equation}

(which is identically true for matrices, as shown in Section 3.3).
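
Here is a minimal numerical sketch (Python with SciPy; an illustration, not part of the text) verifying (3.1.17) by a finite-difference derivative and checking the Jacobi identity for matrix commutators:

import numpy as np
from scipy.linalg import expm

def bracket(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(1)
A, X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(4))

# (3.1.17): with M(alpha) = exp(A alpha), the derivative of
# M(alpha) X M(alpha)^{-1} at alpha = 0 is the commutator [A, X].
h = 1e-6
conj = lambda a: expm(A * a) @ X @ expm(-A * a)
deriv = (conj(h) - conj(-h)) / (2 * h)
assert np.allclose(deriv, bracket(A, X), atol=1e-6)

# Jacobi identity (3.1.19), identically true for matrix commutators.
jacobi = (bracket(X, bracket(Y, Z)) + bracket(Y, bracket(Z, X))
          + bracket(Z, bracket(X, Y)))
assert np.allclose(jacobi, 0)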

A representation of a Lie algebra \(\gg\) on a vector space \(V\) is therefore a Lie algebra homomorphism

\begin{equation} \rho : \gg \longrightarrow \textrm{End}(V)\tag{3.1.20} \end{equation}

that takes elements of \(\gg\) to linear maps on \(V\text{.}\) A representation of \(\gg\) is again an explicit identification of \(\gg\) with certain matrices acting on \(V\text{,}\) but in this case the homomorphism preserves commutators. As with Lie groups, the term “representation” is often used to refer to the matrices \(\rho(\gg)\text{,}\) and occasionally used to refer to \(V\text{.}\)
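
For example, (3.1.17) shows that \(\gg\) acts on itself by commutators; this adjoint action is itself a representation of \(\gg\) on the vector space \(\gg\text{.}\) The sketch below (Python with NumPy; the helper ad is our own construction, not from the text) builds each \(\rho(X)\) as an explicit matrix acting on flattened matrices and checks that commutators are preserved:

import numpy as np

def bracket(X, Y):
    return X @ Y - Y @ X

def ad(X, n):
    """The adjoint representation: the linear map Y -> [X, Y] on n x n
    matrices, written as an n^2 x n^2 matrix acting on flattened Y
    (row-major flattening, so vec(XY) = kron(X, I) vec(Y))."""
    I = np.eye(n)
    return np.kron(X, I) - np.kron(I, X.T)

rng = np.random.default_rng(2)
n = 3
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Lie algebra homomorphism: ad([X,Y]) = [ad(X), ad(Y)].
assert np.allclose(ad(bracket(X, Y), n), bracket(ad(X, n), ad(Y, n)))

That the matrices \(\mathrm{ad}(X)\) preserve commutators is precisely the Jacobi identity (3.1.19) in disguise.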

Strictly speaking, we have constructed the commutator as the derivative of a curve in the Lie algebra, so it properly lives in the tangent space to the Lie algebra. However, vector spaces are their own tangent spaces, so the result can be regarded as an element of the Lie algebra itself.