Section 5.2 Roots
All simple Lie algebras have the same basic pieces that we have just constructed for \(\su(2)\) and \(\su(3)\text{.}\) But what are those pieces?
We work throughout in the adjoint representation. The classification of simple Lie algebras normally uses the complex form of the Lie algebra, but can also be done using the split real form, which, as we will see, is in fact constructed along the way. Most of our presentation is valid in either setting, although we use the standard (complex or compact) names throughout, for instance “\(\su(3)\)” even when working with the split real form \(\sl(3,\RR)\text{.}\)
Subsection 5.2.1 Cartan subalgebra
First of all, there is a maximal subalgebra generated by non-null commuting elements. For \(\su(2)\text{,}\) this subalgebra is generated by a single element; we chose \(\sigma_0\text{.}\) For \(\su(3)\text{,}\) this subalgebra has dimension two; we chose the Gell-Mann matrices \(\lambda_3\) and \(\lambda_8\) as generators.
A Cartan subalgebra \(\hh\) of a Lie algebra \(\gg\) is a maximal subalgebra of simultaneously diagonalizable elements. Equivalently, the Cartan subalgebra of a simple Lie algebra is a maximal subalgebra generated by non-null commuting elements. The dimension of the Cartan subalgebra is independent of which of the many possible subalgebras is chosen.
The requirement that the elements be diagonalizable (in the adjoint representation) is nontrivial, as can be seen by considering the element
\begin{equation*}
\frac12(\lambda_1+\mu_2) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\end{equation*}
of \(\su(3)\) (really \(\sl(3,\RR)\)), which is null—and not diagonalizable. In higher dimensions, the inadvertent use of such elements can cause one to miscount the size of the Cartan subalgebra. For instance, in \(\su(4)\text{,}\) it is possible to find four mutually commuting elements that are linearly independent, although all of them are null, and none are diagonalizable. At most three elements of \(\su(4)\) can be simultaneously diagonalized; its Cartan subalgebra is 3-dimensional.
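Both claims about this element can be verified directly in the \(3\times3\) defining representation, using the trace form \(\tr(XY)\text{,}\) which is proportional to the Killing form. Writing \(N = \frac12(\lambda_1+\mu_2)\text{,}\) we have
\begin{equation*}
N^2 = 0 \quad\Longrightarrow\quad \tr(N^2) = 0\text{,}
\end{equation*}
so \(N\) is null; and since \(N^2=0\) forces every eigenvalue of \(N\) to vanish, a diagonalizable \(N\) would have to be zero. The same argument applies to \(\ad N\) in the adjoint representation, since \(\ad N\) is also nilpotent.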
For \(\su(2)\text{,}\) the Cartan subalgebra is given by all multiples of \(\sigma_0\text{,}\) and for \(\su(3)\) we have
\begin{equation*}
\hh = \langle \lambda_3, \lambda_8 \rangle
\end{equation*}
where the angled brackets denote the span of the given elements.
Subsection 5.2.2 Basis of Eigenvectors
Since the elements of \(\hh\) can be simultaneously diagonalized, by definition there is a basis consisting entirely of simultaneous eigenvectors of the elements of \(\hh\text{.}\) For \(\su(3)\text{,}\) this basis consists of \(\{\lambda_3, \lambda_8, \frac12(\lambda_1\mp\mu_2), \frac12(\lambda_4\mp\mu_5), \frac12(\lambda_6\mp\mu_7)\}\text{.}\) For instance,
\begin{equation*}
[\lambda_3, \lambda_1\mp\mu_2] = \mp2\,(\lambda_1\mp\mu_2)\text{.}
\end{equation*}
Notice that any element of \(\hh\) has each basis element as an eigenvector, not just \(\lambda_3\) and \(\lambda_8\text{.}\) So, for instance,
\begin{equation*}
[\lambda_3+\lambda_8, \lambda_4\mp\mu_5] = \mp(1+\sqrt3)\,(\lambda_4\mp\mu_5)\text{.}
\end{equation*}
Rather than identify the eigenspaces containing \(\lambda_4\mp\mu_5\) by their eigenvalues under \(\lambda_3\) and \(\lambda_8\text{,}\) we can instead label these eigenspaces by operators that give the eigenvalue for any element of \(\hh\text{.}\)
What are the properties of these operators? Let's call them \(\pm\alpha_2\text{,}\) since the eigenvalues for these two subspaces are equal and opposite for any element of \(\hh\text{.}\) We know that
\begin{equation*}
[H, \lambda_4\mp\mu_5] = \mp\alpha_2(H)\,(\lambda_4\mp\mu_5)
\end{equation*}
for any \(H\in\hh\text{,}\) and in fact
\begin{equation*}
\alpha_2(aH+bK) = a\,\alpha_2(H) + b\,\alpha_2(K)
\end{equation*}
for any \(H,K\in\hh\) and any constants \(a,b\text{.}\)
Thus, \(\alpha_2\) is a linear operator on \(\hh\text{,}\) that is, \(\alpha_2\) is in the dual space \(\hh^*\text{.}\) We can repeat this process with the remaining eigenspaces, resulting in a total of six elements of \(\hh^*\)—for this purpose, we don't count \(\hh\) itself as an eigenspace, although of course it is, with all eigenvalues zero.
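Linearity can be checked in one line, using only the bilinearity of the commutator (the numerical values of \(\alpha_2\) will appear in Subsection 5.2.6):
\begin{equation*}
[a\lambda_3+b\lambda_8, \lambda_4\mp\mu_5]
= a\,[\lambda_3, \lambda_4\mp\mu_5] + b\,[\lambda_8, \lambda_4\mp\mu_5]
= \mp\bigl(a\,\alpha_2(\lambda_3)+b\,\alpha_2(\lambda_8)\bigr)\,(\lambda_4\mp\mu_5)\text{.}
\end{equation*}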
Subsection 5.2.3 Roots
More generally, we can decompose any simple Lie algebra as
\begin{equation*}
\gg = \hh \oplus \bigoplus_{\alpha\in R} \gg_\alpha
\end{equation*}
for some finite collection \(R\) of nonzero \(\alpha\in\hh^*\text{,}\) where
\begin{equation*}
\gg_\alpha = \{ X\in\gg : [H,X] = \alpha(H)\,X \text{ for all } H\in\hh \}\text{.}
\end{equation*}
The \(\alpha\) are called the roots of \(\gg\text{.}\)
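For \(\su(2)\) (really \(\sl(2,\RR)\)), the decomposition is small enough to write out in full. As a sketch, take a triple \(\{H,X,Y\}\) satisfying \([H,X]=2X\text{,}\) \([H,Y]=-2Y\text{,}\) \([X,Y]=H\) (one common normalization; the normalization used elsewhere in the text may differ by constant factors). Then
\begin{equation*}
\gg = \underbrace{\langle H \rangle}_{\hh} \oplus \underbrace{\langle X \rangle}_{\gg_\alpha} \oplus \underbrace{\langle Y \rangle}_{\gg_{-\alpha}}\text{,}
\qquad \alpha(H) = 2\text{,}
\end{equation*}
so that \(R = \{\alpha,-\alpha\}\text{.}\)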
Subsection 5.2.4 Orthogonality
Suppose that \(X\in\gg_\alpha\) and \(Y\in\gg_\beta\text{.}\) Then
\begin{equation*}
[H,X] = \alpha(H)\,X\text{,} \qquad [H,Y] = \beta(H)\,Y
\end{equation*}
for any \(H\in\hh\text{.}\) We compute, using the Jacobi identity,
\begin{equation*}
[H,[X,Y]] = [[H,X],Y] + [X,[H,Y]] = \bigl(\alpha(H)+\beta(H)\bigr)\,[X,Y]
\end{equation*}
so that
\begin{equation*}
[X,Y] \in \gg_{\alpha+\beta}\text{.}
\end{equation*}
Thus, \(X\in\gg_\alpha\) maps \(\gg_\beta\) to \(\gg_{\alpha+\beta}\)—although the latter might be the trivial vector space \(\{0\}\text{.}\) In other words, each \(X\not\in\hh\) changes at least some eigenvalues. Similarly, \(X\) followed by \(Y\) (or vice versa) changes at least some eigenvalues—unless \(\alpha+\beta=0\text{.}\) Since our basis consists of eigenvectors, we have shown that
\begin{equation*}
B(X,Y) = \tr(\ad X\,\ad Y) = 0 \qquad (X\in\gg_\alpha,\ Y\in\gg_\beta,\ \beta\ne-\alpha)
\end{equation*}
since a nonzero trace requires there to be elements whose eigenvalues do not change. Since we do not count \(\beta=0\) as a root, we write that case separately, namely
\begin{equation*}
B(X,H) = 0 \qquad (X\in\gg_\alpha,\ H\in\hh)
\end{equation*}
but the argument is the same. A more formal argument is given in Section 5.3.
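This orthogonality can be made completely explicit for the \(\sl(2,\RR)\) triple \(\{H,X,Y\}\) sketched above (same assumed normalization: \([H,X]=2X\text{,}\) \([H,Y]=-2Y\text{,}\) \([X,Y]=H\)). In the ordered basis \((H,X,Y)\text{,}\)
\begin{equation*}
\ad X = \begin{pmatrix} 0 & 0 & 1 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\text{,}
\qquad
\ad Y = \begin{pmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & 0 & 0 \end{pmatrix}\text{,}
\end{equation*}
so that \(B(X,X) = \tr(\ad X\,\ad X) = 0\) while \(B(X,Y) = \tr(\ad X\,\ad Y) = 4 \ne 0\text{:}\) \(\gg_\alpha\) is null, and pairs nontrivially only with \(\gg_{-\alpha}\text{.}\)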
What about \(\hh\text{?}\) If \(H,K\in\hh\) and \(X\in\gg_\alpha\text{,}\) then
\begin{equation*}
[H,[K,X]] = \alpha(H)\,\alpha(K)\,X\text{.}
\end{equation*}
Thus, the inner product on \(\hh\) is
\begin{equation}
B(H,K) = \tr(\ad H\,\ad K) = \sum_{\alpha\in R} \alpha(H)\,\alpha(K)\tag{5.2.15}
\end{equation}
as can also be seen by writing \(H\) and \(K\) as diagonal matrices and taking the trace of the product. This inner product is nondegenerate, and so \(\hh\) admits an orthonormal basis.
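Continuing the \(\sl(2,\RR)\) sketch above, both sides of (5.2.15) can be checked directly: \(\ad H\) is diagonal in the basis \((H,X,Y)\) with eigenvalues \((0,2,-2)\text{,}\) so
\begin{equation*}
B(H,H) = \tr\bigl((\ad H)^2\bigr) = 0 + 2^2 + (-2)^2 = 8 = \sum_{\alpha\in R} \alpha(H)^2\text{.}
\end{equation*}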
Subsection 5.2.5 Symmetry about the Origin
We now claim that if \(\alpha\in R\) then also \(-\alpha\in R\text{.}\) We have already seen that the roots of \(\su(3)\) have this property; we now prove it in general.
Suppose \(\alpha\) is a root but \(-\alpha\) is not. Then \(\gg_\alpha\perp\gg\text{,}\) since we showed above that \(\gg_\alpha\) is perpendicular to everything except \(\gg_{-\alpha}\text{!}\) By the assumed non-degeneracy of \(B\text{,}\) this can't happen, so \(\gg_{-\alpha}\) must contain a nonzero vector.
Thus, the roots always come in pairs, symmetric about the origin.
Subsection 5.2.6 \(\su(2)\) Subalgebras
Continuing along these lines, it turns out that each \(\{\gg_{\pm\alpha}\}\) pair generates an \(\su(2)\) subalgebra of \(\gg\text{.}\) We outline this construction here, but save most of the details for Section 5.3.
For \(X_\alpha\in\gg_\alpha\) and \(Y_\alpha\in\gg_{-\alpha}\text{,}\) the argument used above to show orthogonality now shows that \([X_\alpha,Y_\alpha]\in\hh\text{.}\) Using non-degeneracy, we can show that \(H_\alpha=[X_\alpha,Y_\alpha]\ne0\text{.}\) Thus, \(\langle H_\alpha,X_\alpha,Y_\alpha \rangle\) is a subalgebra of \(\gg\text{;}\) by rescaling if necessary we can bring the commutators into the standard form for \(\su(2)\) (really \(\sl(2,\RR)\)). In particular, we can choose roots satisfying
\begin{equation}
\alpha(H_\alpha) = 1\text{.}\tag{5.2.16}
\end{equation}
But we have already analyzed the representations of \(\su(2)\text{!}\) In particular, all eigenvalues of (standard) \(\su(2)\) in any representation must be in \(\frac12\ZZ\text{!}\) Thus, \(\alpha(H_\beta)\) must be a half-integer, and in particular must be real, for any combination of our preferred elements \(H_\beta\in\hh\) and preferred roots \(\alpha\in R\text{.}\)
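Recall what this means concretely: in the spin-\(j\) representation of \(\su(2)\) (in the normalization used here, where the Cartan eigenvalues are half-integers), the eigenvalues of the Cartan generator are
\begin{equation*}
-j,\ -j+1,\ \ldots,\ j-1,\ j \qquad \text{with } 2j \text{ a nonnegative integer,}
\end{equation*}
all of which lie in \(\frac12\ZZ\text{.}\)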
From this point on, we work with the split real form of \(\gg\text{,}\) restricting if necessary to real linear combinations of the \(H_\alpha\text{,}\) \(X_\alpha\text{,}\) and \(Y_\alpha\text{,}\) and of the preferred roots satisfying (5.2.16). In particular, the Killing form (5.2.15) on \(\hh\) is now positive definite.
We illustrate this construction with \(\su(3)\text{.}\) An orthonormal basis for \(\hh\) is given by \(\{\lambda_3,\lambda_8\}\text{,}\) and direct computation shows that
\begin{align*}
\bigl[\lambda_3, \tfrac12(\lambda_1\pm\mu_2)\bigr] &= \pm2\cdot\tfrac12(\lambda_1\pm\mu_2)\text{,}\\
\bigl[\tfrac12(\lambda_1+\mu_2), \tfrac12(\lambda_1-\mu_2)\bigr] &= \lambda_3\text{,}
\end{align*}
so these three elements (with the factors of \(\frac12\)) form a standard basis for \(\sl(2,\RR)\text{.}\) Thus, we can choose \(X_{\alpha_1}\) and \(Y_{\alpha_1}\) proportional to \(\frac12(\lambda_1+\mu_2)\) and \(\frac12(\lambda_1-\mu_2)\text{,}\) respectively, with \(H_{\alpha_1}\) the corresponding multiple of \(\lambda_3\text{,}\) and define \(\alpha_1\in\hh^*\) by
\begin{equation*}
\alpha_1(\lambda_3) = 2\text{,} \qquad \alpha_1(\lambda_8) = 0\text{.}
\end{equation*}
Similarly,
\begin{align*}
\bigl[\lambda_3, \tfrac12(\lambda_4\pm\mu_5)\bigr] &= \pm\tfrac12(\lambda_4\pm\mu_5)\text{,}\\
\bigl[\lambda_8, \tfrac12(\lambda_4\pm\mu_5)\bigr] &= \pm\sqrt3\cdot\tfrac12(\lambda_4\pm\mu_5)\text{,}
\end{align*}
yielding
\begin{equation*}
\alpha_2(\lambda_3) = 1\text{,} \qquad \alpha_2(\lambda_8) = \sqrt3\text{,}
\end{equation*}
and leading to
\begin{equation*}
\alpha_3(\lambda_3) = 1\text{,} \qquad \alpha_3(\lambda_8) = -\sqrt3
\end{equation*}
for the eigenspaces spanned by \(\frac12(\lambda_6\mp\mu_7)\text{.}\)
The roots of \(\su(3)\) are then \(\{\pm\alpha_1,\pm\alpha_2,\pm\alpha_3\}\text{.}\)
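These values provide a consistency check on the bracket relation \([\gg_\alpha,\gg_\beta]\subseteq\gg_{\alpha+\beta}\) from Subsection 5.2.4: for instance,
\begin{equation*}
\alpha_2 + \alpha_3 = \alpha_1\text{,}
\end{equation*}
so the commutator of elements of \(\gg_{\alpha_2}\) and \(\gg_{\alpha_3}\) can only land in \(\gg_{\alpha_1}\) (or vanish).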
Subsection 5.2.7 Root Angles
The final piece of this construction is to determine the angles between the roots. Since \(\alpha\in\hh^*\text{,}\) there are unique elements \(T_\alpha\in\hh\) such that
\begin{equation*}
B(T_\alpha, H) = \alpha(H)
\end{equation*}
for any \(H\in\hh\text{.}\) It is not hard to show that
\begin{equation*}
T_\alpha = \frac{H_\alpha}{B(H_\alpha,H_\alpha)}\text{.}
\end{equation*}
Putting this all together, we have shown that
\begin{equation*}
\frac{B(T_\alpha,T_\beta)}{B(T_\beta,T_\beta)} = \alpha(H_\beta) \in \tfrac12\ZZ
\end{equation*}
since, as argued above, \(\alpha(H_\beta)\) must be a half-integer.
Table 5.2.1. Possible root angles and length ratios.

| \(\cos\theta\) | \(-1\) | \(-\frac{\sqrt3}{2}\) | \(-\frac{1}{\sqrt2}\) | \(-\frac12\) | \(0\) | \(\frac12\) | \(\frac{1}{\sqrt2}\) | \(\frac{\sqrt3}{2}\) | \(1\) |
| \(\theta\) | \(\pi\) | \(\frac{5\pi}{6}\) | \(\frac{3\pi}{4}\) | \(\frac{2\pi}{3}\) | \(\frac{\pi}{2}\) | \(\frac{\pi}{3}\) | \(\frac{\pi}{4}\) | \(\frac{\pi}{6}\) | \(0\) |
| \(\frac{|\beta|}{|\alpha|}\) | \(1, 2\) | \(\sqrt3\) | \(\sqrt2\) | \(1\) | – | \(1\) | \(\sqrt2\) | \(\sqrt3\) | \(1, 2\) |
There is a natural inner product on \(\hh^*\text{,}\) given by
\begin{equation*}
\langle\alpha,\beta\rangle = B(T_\alpha,T_\beta)\text{.}
\end{equation*}
The angle \(\theta\) between two roots \(\alpha\) and \(\beta\) is thus given by
\begin{equation*}
\cos\theta = \frac{\langle\alpha,\beta\rangle}{|\alpha|\,|\beta|}
\end{equation*}
and we also have
\begin{equation*}
\cos^2\theta = \alpha(H_\beta)\,\beta(H_\alpha)\text{,}
\qquad
\frac{|\beta|^2}{|\alpha|^2} = \frac{\beta(H_\alpha)}{\alpha(H_\beta)}\text{.}
\end{equation*}
From these two relations we can work out not only the possible angles between two roots \(\alpha\) and \(\beta\text{,}\) but also the ratio of their lengths. The results are collected in Table 5.2.1. 2
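As a sketch of where the entries of Table 5.2.1 come from (using the half-integrality and the relations just displayed): \(\alpha(H_\beta)\) and \(\beta(H_\alpha)\) are both half-integers, so \(\cos^2\theta = \alpha(H_\beta)\,\beta(H_\alpha) \in \{0,\frac14,\frac12,\frac34,1\}\text{.}\) For instance, if \(\cos\theta = -\frac{\sqrt3}{2}\text{,}\) then
\begin{equation*}
\alpha(H_\beta)\,\beta(H_\alpha) = \tfrac34 = \left(-\tfrac12\right)\left(-\tfrac32\right)
\quad\Longrightarrow\quad
\frac{|\beta|^2}{|\alpha|^2} = \frac{-\tfrac32}{-\tfrac12} = 3\text{,}
\end{equation*}
giving \(\frac{|\beta|}{|\alpha|} = \sqrt3\) as in the table (or \(\frac{1}{\sqrt3}\text{,}\) if the roles of \(\alpha\) and \(\beta\) are interchanged).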
Returning to \(\su(3)\text{,}\) we compute the Killing product in the adjoint representation. Since the \(\{X_\alpha,Y_\alpha\}\) pairs are clearly related to each other by cyclic transformations, so are the \(H_\alpha\) (despite the asymmetric form of our basis element \(\lambda_8\)). Thus,
so that
which in turn forces
Similarly, all the cross terms will be equal, so it is enough to compute
(since \(\lambda_3\) and \(\lambda_8\) are orthogonal and have the same magnitude), so that \(\theta=\frac{2\pi}{3}\text{.}\)
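The parenthetical claim can itself be checked with (5.2.15), using the root values found above (the overall factor of \(2\) counts each \(\pm\alpha\) pair):
\begin{align*}
B(\lambda_3,\lambda_8) &= \sum_{\alpha\in R}\alpha(\lambda_3)\,\alpha(\lambda_8)
= 2\bigl[(2)(0) + (1)(\sqrt3) + (1)(-\sqrt3)\bigr] = 0\text{,}\\
B(\lambda_3,\lambda_3) &= 2\bigl[2^2+1^2+1^2\bigr] = 12
= 2\bigl[0^2+(\sqrt3)^2+(-\sqrt3)^2\bigr] = B(\lambda_8,\lambda_8)\text{.}
\end{align*}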
Remarkably, one obtains the same answer in this case by treating the pairs of eigenvalues
\begin{equation*}
(\pm2, 0)\text{,} \qquad (\pm1, \pm\sqrt3)\text{,} \qquad (\pm1, \mp\sqrt3)
\end{equation*}
as Euclidean coordinates. This alternative derivation works only when the roots all have the same magnitude, which, as can be seen from Table 5.2.1, is not always the case. Nonetheless, we do have
\begin{equation*}
\cos\theta = \frac{(1,\sqrt3)\cdot(1,-\sqrt3)}{(2)(2)} = -\frac12
\end{equation*}
for the angle between \(\alpha_2\) and \(\alpha_3\text{,}\) in agreement with the Killing form computation above.