
Section 5.5 Dynkin Diagrams

We have seen that any simple Lie algebra can be decomposed into a Cartan subalgebra \(\hh\) and its eigenspaces \(\gg_\alpha\text{.}\) Furthermore, the commutators of elements in \(\gg_{\pm\alpha}\) determine a collection of preferred elements \(H_\alpha\in\hh\text{,}\) and the angles between the \(H_\alpha\) are tightly constrained. We show in this section that these constraints in turn impose additional constraints on the possible types of simple Lie algebras.

The symmetry of the roots discussed in Section 5.4 allows us to divide them in half using any hyperplane through the origin that does not itself contain any roots. Call one half positive roots, and the other half negative roots. A positive root is a simple root if it cannot be written as the sum of two positive roots. It turns out that, first, the simple roots form a basis (called a base) for the vector space of roots and, second, all positive roots are linear combinations of simple roots with nonnegative integer coefficients. 1

It is not hard to see that the positivity property means that the angle between any two non-orthogonal simple roots must be obtuse, rather than acute. Let \(\alpha\text{,}\) \(\beta\) be simple roots, and recall from Section 5.4 that \(r_\alpha(\beta)=\beta\mp\alpha\) must also be a root, with the sign opposite to that of \(\alpha\cdot\beta\text{.}\) If the sign is negative, that is, if \(\alpha\cdot\beta\gt0\text{,}\) then \(\beta-\alpha\) is a root, so one of \(\beta-\alpha\) and \(\alpha-\beta\) is a positive root; in the first case, \(\beta=\alpha+(\beta-\alpha)\) is not simple, and in the second case \(\alpha=\beta+(\alpha-\beta)\) is not simple. Thus, we cannot have \(\alpha\cdot\beta\gt0\text{;}\) that is, \(\alpha\cdot\beta\le0\text{,}\) as claimed.
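
For instance (a minimal numeric sketch; the coordinates below are an assumed choice for illustration), the two simple roots of \(\aa_2=\su(3)\) have equal length and meet at \(120^\circ\text{,}\) so their dot product is negative:

import numpy as np

# Two simple roots of a_2 = su(3) in assumed coordinates:
# equal length, 120 degrees apart, so their dot product is negative.
a = np.array([1.0, 0.0])
b = np.array([-0.5, np.sqrt(3) / 2])
print(a @ b)   # -0.5, which is <= 0
angle = np.degrees(np.arccos((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))))
print(angle)   # 120.0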

Recall that all pairs of roots satisfy

\begin{equation} \frac{\alpha\cdot\beta}{\alpha\cdot\alpha} \in \frac12\ZZ\tag{5.5.1} \end{equation}

so that the angle \(\theta\) between the roots must satisfy

\begin{equation} 4 \cos^2\theta \in \{0,1,2,3,4\} .\tag{5.5.2} \end{equation}

Since the simple roots \(\alpha_i\in\RR^n\) are linearly independent, and distinct simple roots meet at obtuse or right angles, they satisfy

\begin{equation} 0 \ge \frac{\alpha_i\cdot\alpha_j}{\alpha_i\cdot\alpha_i} \in \frac12\ZZ \qquad (i\ne j)\tag{5.5.3} \end{equation}

and the angles \(\theta_{ij}\) between roots must satisfy

\begin{equation} 4 \cos^2\theta_{ij} \in \{0,1,2,3\} .\tag{5.5.4} \end{equation}

A Coxeter graph is a graph with one dot representing each simple root, and with each pair of dots connected by \(4\cos^2\theta_{ij}\) lines. Equation (5.5.3) also constrains the relative magnitudes of (non-orthogonal) roots; a Dynkin diagram is a Coxeter graph with the addition of arrows on the double and triple connections pointing from the longer root to the shorter.
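
As a computational aside (a minimal sketch; the function below and the example coordinates are assumptions for illustration, not part of the text), the Coxeter/Dynkin data can be read off directly from a list of simple roots:

import numpy as np

def dynkin_data(roots):
    """For each pair of simple roots, return the number of lines
    4 cos^2(theta) joining them, and the index of the longer root (if any)."""
    roots = [np.asarray(r, dtype=float) for r in roots]
    data = {}
    for i in range(len(roots)):
        for j in range(i + 1, len(roots)):
            a, b = roots[i], roots[j]
            lines = round(4 * (a @ b) ** 2 / ((a @ a) * (b @ b)))
            if lines:
                longer = i if a @ a > b @ b else j if b @ b > a @ a else None
                data[(i, j)] = (lines, longer)
    return data

# Simple roots of b_2 = so(5) (assumed coordinates): a double line, with the
# arrow pointing from root 0 (long) to root 1 (short).
print(dynkin_data([(1, -1), (0, 1)]))   # {(0, 1): (2, 0)}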

We now derive several restrictions on the possible Dynkin diagrams. To do so, the only assumptions we need are that the \(\alpha_i\in\RR^n\) satisfy the conditions above. Most of these restrictions apply to both Coxeter graphs and Dynkin diagrams.

Subsection 5.5.1 There are at most \(k-1\) connections between \(k\) simple roots.

For the purpose of this assertion, a connection refers to a pair of roots with at least one line connecting them; multiple lines between a given pair of roots still count as a single connection.

We let

\begin{equation} \alpha = \sum_1^k \frac{\alpha_i}{|\alpha_i|} \ne 0 ,\tag{5.5.5} \end{equation}

where the inequality follows from the independence of the simple roots. The magnitude of \(\alpha\) is given by

\begin{equation} \begin{aligned} 0 \lt \alpha\cdot\alpha \amp= \sum_{i,j} \frac{\alpha_i\cdot\alpha_j}{|\alpha_i| |\alpha_j|} \nonumber\\ \amp= k + 2 \sum_{i\lt j} \frac{\alpha_i\cdot\alpha_j}{|\alpha_i| |\alpha_j|} \nonumber\\ \amp= k + \sum_{i\lt j} 2\cos\theta_{ij} .\end{aligned}\tag{5.5.6} \end{equation}

But for each pair of roots, either \(\alpha_i\cdot\alpha_j=0\text{,}\) in which case \(\alpha_i\) and \(\alpha_j\) are not connected, or

\begin{equation} 4\cos^2\theta_{ij} \in \{1,2,3\} ,\tag{5.5.7} \end{equation}

in which case

\begin{equation} 2\cos\theta_{ij} \le -1 .\tag{5.5.8} \end{equation}

Thus,

\begin{equation} 0 \lt k - \hbox{# of connections}\tag{5.5.9} \end{equation}

which is what we were trying to show.

An immediate corollary is that there are no closed loops in a Coxeter graph.
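
As a quick numeric check of the counting argument above (a sketch; the Python code and the standard coordinates for the simple roots of \(\aa_4=\su(5)\) are assumptions for illustration, not taken from the text), four simple roots connected in a single chain have three connections, and indeed \(\alpha\cdot\alpha = 4-3 = 1 \gt 0\text{:}\)

import numpy as np

# Simple roots of a_4 = su(5) in standard coordinates (an assumed choice for
# illustration): four roots in a single chain, hence three single-line connections.
roots = [np.array([1, -1, 0, 0, 0]),
         np.array([0, 1, -1, 0, 0]),
         np.array([0, 0, 1, -1, 0]),
         np.array([0, 0, 0, 1, -1])]
alpha = sum(r / np.linalg.norm(r) for r in roots)
# Each single-line connection contributes exactly 2 cos(theta) = -1, so
# alpha . alpha = k - (# of connections) = 4 - 3 = 1.
print(alpha @ alpha)   # 1.0 (up to rounding)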

Subsection 5.5.2 There are at most three lines at each point.

Suppose that a simple root \(\alpha\) is connected to \(k\) other simple roots \(\alpha_i\text{.}\) Since there are no cycles, none of the \(\alpha_i\) can be connected to each other. Thus,

\begin{equation} i\ne j\Longrightarrow \alpha_i\cdot\alpha_j = 0 .\tag{5.5.10} \end{equation}

Since the simple roots are independent, we can extend \(\{\alpha_i\}\) to an orthogonal basis \(\{\alpha_0,\alpha_i\}\) of the span \(\langle\alpha,\alpha_i\rangle\text{.}\) We can expand \(\alpha\) in this basis, yielding

\begin{equation} \alpha = \sum_0^k \frac{\alpha\cdot\alpha_i}{\alpha_i\cdot\alpha_i}\alpha_i .\tag{5.5.11} \end{equation}

Since \(\alpha\) is independent of the \(\alpha_i\text{,}\) we must have

\begin{equation} \alpha\cdot\alpha_0 \ne 0\tag{5.5.12} \end{equation}

so that

\begin{equation} |\alpha|^2 = \sum_0^k \frac{(\alpha\cdot\alpha_i)^2}{\alpha_i\cdot\alpha_i} \gt \sum_1^k \frac{(\alpha\cdot\alpha_i)^2}{|\alpha_i|^2} .\tag{5.5.13} \end{equation}

Thus,

\begin{equation} \hbox{# of lines} = 4 \sum_1^k \cos^2\theta_i = 4 \sum_1^k \frac{(\alpha\cdot\alpha_i)^2}{|\alpha|^2|\alpha_i|^2} \lt 4\tag{5.5.14} \end{equation}

so that the number of lines is strictly less than \(4\text{,}\) as claimed.
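
As an illustration of the bound (again a numeric sketch; the standard coordinates for the simple roots of \(\dd_4=\so(8)\) are an assumed choice), the central root of \(\dd_4\) is connected to three other roots by single lines, so exactly three lines, the maximum allowed, meet there:

import numpy as np

# Simple roots of d_4 = so(8) in standard coordinates (an assumed choice):
# the central root e2 - e3 is connected to the other three by single lines.
center = np.array([0, 1, -1, 0])
others = [np.array([1, -1, 0, 0]),
          np.array([0, 0, 1, -1]),
          np.array([0, 0, 1, 1])]
lines = sum(4 * (center @ a) ** 2 / ((center @ center) * (a @ a)) for a in others)
print(lines)   # 3.0, which is strictly less than 4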

Subsection 5.5.3 Simple chains of roots can be replaced by a single root.

Suppose that there is a chain of single lines connecting \(\alpha_i\) to \(\alpha_{i+1}\) for \(1\le i\le k-1\text{,}\) in which case

\begin{equation} \begin{aligned} \frac{\alpha_i\cdot\alpha_{i+1}}{\alpha_i\cdot\alpha_i} \amp= -\frac12 , \\ \alpha_1\cdot\alpha_1 \amp= ... = \alpha_k\cdot\alpha_k = Q^2 .\end{aligned}\tag{5.5.15} \end{equation}

We claim that the entire chain can be replaced by

\begin{equation} \alpha = \sum_1^k \alpha_i\tag{5.5.16} \end{equation}

with the result still being a valid Coxeter graph.

We first compute, using the fact that non-adjacent roots in the chain are orthogonal (they are not connected),

\begin{equation} \begin{aligned} \alpha\cdot\alpha \amp= \sum_{i,j} \alpha_i\cdot\alpha_j \nonumber\\ \amp= \sum_i \alpha_i\cdot\alpha_i + 2 \sum_{i\lt j} \alpha_i\cdot\alpha_j \nonumber\\ \amp= \sum_1^k \alpha_i\cdot\alpha_i + 2 \sum_1^{k-1} \alpha_i\cdot\alpha_{i+1} \nonumber\\ \amp= k Q^2 - (k-1) Q^2 = Q^2\end{aligned}\tag{5.5.17} \end{equation}

so that \(\alpha\) has the same magnitude as each of the \(\alpha_i\text{.}\) Furthermore, if \(\beta\) is any other root, then \(\beta\) can be connected to at most one of the \(\alpha_i\text{,}\) since otherwise the graph would contain a closed loop. If \(\beta\) is connected to some \(\alpha_i\text{,}\) then

\begin{equation} \beta\cdot\alpha = \beta\cdot\alpha_i ;\tag{5.5.18} \end{equation}

if not, \(\beta\cdot\alpha=0\text{.}\) In either case, all of the conditions on the original roots continue to hold if the \(k\) roots \(\alpha_i\) are replaced by the single root \(\alpha\text{,}\) as claimed.
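
A quick numerical check of this claim (a sketch; the coordinates for the simple roots of \(\aa_4=\su(5)\) are an assumed choice): collapsing the chain \(\alpha_1,\alpha_2,\alpha_3\) to the single root \(\alpha=\alpha_1+\alpha_2+\alpha_3\) preserves both the magnitude and the coupling to the remaining root \(\beta\text{:}\)

import numpy as np

# Simple roots of a_4 = su(5) (assumed coordinates): a chain alpha_1, alpha_2,
# alpha_3 of single links, plus one further root beta attached to alpha_3.
a1 = np.array([1, -1, 0, 0, 0])
a2 = np.array([0, 1, -1, 0, 0])
a3 = np.array([0, 0, 1, -1, 0])
beta = np.array([0, 0, 0, 1, -1])

alpha = a1 + a2 + a3                 # collapse the chain to a single root
print(alpha @ alpha, a1 @ a1)        # 2 2   -- alpha has the same magnitude Q^2
print(beta @ alpha, beta @ a3)       # -1 -1 -- beta couples to alpha exactly as it did to alpha_3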

Subsection 5.5.4 Allowed Diagrams

We can use these three properties to rule out several Coxeter graphs. First of all, we consider only connected graphs, as only such graphs correspond to simple Lie algebras. The simplest graphs consist of \(n\) roots connected in a single chain by single lines. These diagrams are of type \(A\text{;}\) since there are \(n\) roots, \(\hh\) is \(n\)-dimensional. The corresponding Lie algebras are therefore called \(\aa_n\text{.}\) All of the roots have the same magnitude.

Consider instead a single chain, but with at least one double line. If there is more than one, then collapsing the roots in between must yield an allowable graph. But such a graph has four lines at a single point, which is not allowed. Thus, there is at most one double line, and hence exactly one. In this case, the roots have one of two magnitudes, depending on which side of the double connection they are on. Now suppose that there is a branch point in the graph. Again, there cannot be more than one, since otherwise the chain between them could be collapsed, again resulting in a graph with four lines. Similarly, there cannot be both a branch point and a double line. We will discuss these cases in further detail below; the corresponding Lie algebras are of types \(B\) through \(F\text{.}\)

Finally, if two roots are connected by three lines, then no other lines are possible. Thus, there is only one Coxeter graph with three lines. This diagram is of type \(G\text{;}\) since there are two roots, \(\hh\) is 2-dimensional. The corresponding Lie algebra is therefore called \(\gg_2\text{.}\)

Subsection 5.5.5 Diagrams with a double link.

Table 5.5.1. A Dynkin diagram with a double link.
\(\begin{alignat}{1} \amp\bullet\!-\dotsb-\!\bullet=\!\!\Leftarrow\amp\bullet\!-\dotsb-\!\bullet\\[-1ex] \amp\alpha_1\qquad~~~\alpha_p\amp\beta_q\qquad~~\beta_1 \end{alignat}\)

Consider the two simple chains shown in Table 5.5.1, with \(p\) roots \(\alpha_m\) on one side of a double link, and \(q\) roots \(\beta_k\) on the other, and with the double link connecting \(\alpha_p\) and \(\beta_q\text{.}\) Since all but one of the links are single lines, we know that

\begin{equation} \alpha_m\cdot\alpha_m = P^2 , \qquad \beta_k\cdot\beta_k = Q^2\tag{5.5.19} \end{equation}

and we can assume without loss of generality that there are \(p\) “short” roots and \(q\) “long” roots, so that \(Q^2=2P^2\text{.}\) We also know that

\begin{equation} \frac{\alpha_m\cdot\alpha_{m+1}}{\alpha_m\cdot\alpha_m} = -\frac12 ,\tag{5.5.20} \end{equation}

with a similar relation for the roots \(\beta_k\text{.}\) Setting

\begin{equation} \alpha = \sum_1^p m \frac{\alpha_m}{|\alpha_m|} , \qquad \beta = \sum_1^q k \frac{\beta_k}{|\beta_k|} ,\tag{5.5.21} \end{equation}

and using techniques similar to those in the previous calculations, we can compute

\begin{equation} \begin{aligned} \alpha\cdot\alpha \amp= \sum_1^p m^2 - \sum_1^{p-1} m(m+1) \nonumber\\ \amp= p^2 + \sum_1^{p-1} \bigl(m^2-m(m+1)\bigr) \nonumber\\ \amp= p^2 - \sum_1^{p-1} m \nonumber\\ \amp= p^2 - \frac{p(p-1)}{2} = \frac{p(p+1)}{2}\end{aligned}\tag{5.5.22} \end{equation}

and similarly

\begin{equation} \beta\cdot\beta = \frac{q(q+1)}{2} .\tag{5.5.23} \end{equation}

We also know that

\begin{equation} 4\cos^2\theta = 4\frac{(\alpha_p\cdot\beta_q)^2}{P^2Q^2} = 2\tag{5.5.24} \end{equation}

so that

\begin{equation} (\alpha\cdot\beta)^2 = \left( \frac{p\alpha_p}{P}\cdot\frac{q\beta_q}{Q} \right)^2 = {p^2q^2}\frac{(\alpha_p\cdot\beta_q)^2}{P^2Q^2} = \frac12 p^2q^2 .\tag{5.5.25} \end{equation}

But, by the Cauchy-Schwarz inequality (which is strict here, since \(\alpha\) and \(\beta\) are linearly independent),

\begin{equation} (\alpha\cdot\beta)^2 \lt (\alpha\cdot\alpha)(\beta\cdot\beta)\tag{5.5.26} \end{equation}

or, in other words,

\begin{equation} \begin{aligned} \frac12 p^2q^2 \amp\lt \frac14 p(p+1)q(q+1) \\ \amp\Longrightarrow 2pq \lt pq + p + q + 1 \\ \amp\Longrightarrow pq \lt p + q + 1 \\ \amp\Longrightarrow (p-1)(q-1) \lt 2 . \end{aligned}\tag{5.5.27} \end{equation}

There are three ways to satisfy (5.5.27). If \(p=1\text{,}\) then there is only one short root; these Lie algebras are of type \(B\text{,}\) and denoted \(\bb_{q+1}\text{.}\) If \(q=1\text{,}\) then there is only one long root; these Lie algebras are of type \(C\text{,}\) and denoted \(\cc_{p+1}\text{.}\) Finally, if \(p=q=2\text{,}\) we get a single, exceptional case, of type \(F\text{,}\) and denoted \(\ff_4\text{.}\)
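
The reduction from (5.5.26) to (5.5.27), and the resulting list of allowed pairs \((p,q)\text{,}\) can be checked by brute force (a small Python sketch; the cutoff on the range of values is arbitrary):

# Check that (1/2) p^2 q^2 < (1/4) p (p+1) q (q+1) is equivalent to
# (p-1)(q-1) < 2, and list the allowed pairs (p, q) for small values.
allowed = []
for p in range(1, 12):
    for q in range(1, 12):
        cauchy_schwarz = 2 * p * p * q * q < p * (p + 1) * q * (q + 1)
        assert cauchy_schwarz == ((p - 1) * (q - 1) < 2)
        if cauchy_schwarz:
            allowed.append((p, q))
print(allowed)   # (1, q): type B;  (p, 1): type C;  (2, 2): f_4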

Subsection 5.5.6 Diagrams with a branch point.

Table 5.5.2. A Dynkin diagram with a branch point.
\(\begin{alignat}{2} \amp\qquad\qquad\quad\,\,\,\,\vdots \\[-1ex] \amp\qquad\qquad\quad\,\,\,\bullet\>\gamma_{r-1} \\[-1ex] \amp\qquad\qquad\quad\,\,\,\,\mid \\[-1ex] \amp\bullet\!-\dotsb-\!\bullet\!-\bullet-\!\bullet\!-\dotsb-\!\bullet\\[-1ex] \amp\alpha_1\qquad~~~\alpha_{p-1}\quad\beta_{q-1}\qquad~~\beta_1 \end{alignat}\)

Consider now three simple chains meeting at a branch point \(\psi\) as shown in Table 5.5.2, with \(p-1\) roots \(\alpha_m\) in one chain, \(q-1\) roots \(\beta_k\) in the second, and \(r-1\) roots \(\gamma_\ell\) in the third, where in each case we do not count \(\psi\text{.}\) Since all of the links are single lines, we know that

\begin{equation} \alpha_m\cdot\alpha_m = \beta_k\cdot\beta_k = \gamma_\ell\cdot\gamma_\ell = \psi\cdot\psi = Q^2 .\tag{5.5.28} \end{equation}

As above, set

\begin{equation} \alpha = \sum_1^{p-1} m \frac{\alpha_m}{|\alpha_m|} , \qquad \beta = \sum_1^{q-1} k \frac{\beta_k}{|\beta_k|} , \qquad \gamma = \sum_1^{r-1} \ell \frac{\gamma_\ell}{|\gamma_\ell|} .\tag{5.5.29} \end{equation}

The magnitudes of \(\alpha\text{,}\) \(\beta\text{,}\) and \(\gamma\) can be computed as in the preceding case, taking into account that the chains (excluding \(\psi\)) now contain \(p-1\text{,}\) \(q-1\text{,}\) and \(r-1\) roots, respectively, yielding

\begin{equation} \alpha\cdot\alpha = \frac{p(p-1)}{2} , \qquad \beta\cdot\beta = \frac{q(q-1)}{2} , \qquad \gamma\cdot\gamma = \frac{r(r-1)}{2} .\tag{5.5.30} \end{equation}

Since roots belonging to different chains are mutually orthogonal (they are not connected), so are the vectors \(\alpha\text{,}\) \(\beta\text{,}\) \(\gamma\text{.}\) Since \(\psi\) is linearly independent of the other roots, it is independent of \(\alpha\text{,}\) \(\beta\text{,}\) \(\gamma\text{.}\) We can therefore extend these vectors to an orthogonal basis for \(\langle\alpha,\beta,\gamma,\psi\rangle\) by adding a vector \(\psi_0\text{,}\) with \(\psi\cdot\psi_0\ne0\text{.}\) Expanding \(\psi\) in terms of this basis, we get

\begin{equation} \psi = \frac{\psi\cdot\alpha}{\alpha\cdot\alpha}\alpha + \frac{\psi\cdot\beta}{\beta\cdot\beta}\beta + \frac{\psi\cdot\gamma}{\gamma\cdot\gamma}\gamma + \frac{\psi\cdot\psi_0}{\psi_0\cdot\psi_0}\psi_0\tag{5.5.31} \end{equation}

and therefore

\begin{equation} \psi\cdot\psi \gt \frac{(\psi\cdot\alpha)^2}{\alpha\cdot\alpha} + \frac{(\psi\cdot\beta)^2}{\beta\cdot\beta} + \frac{(\psi\cdot\gamma)^2}{\gamma\cdot\gamma} .\tag{5.5.32} \end{equation}

In other words, if \(\theta_\alpha\text{,}\) \(\theta_\beta\text{,}\) \(\theta_\gamma\) are the angles between \(\psi\) and \(\alpha\text{,}\) \(\beta\text{,}\) \(\gamma\text{,}\) respectively, then

\begin{equation} \cos^2\theta_\alpha + \cos^2\theta_\beta + \cos^2\theta_\gamma \lt 1 .\tag{5.5.33} \end{equation}

But

\begin{equation} (\alpha\cdot\psi)^2 = \frac{(p-1)^2(\alpha_{p-1}\cdot\psi)^2}{|\alpha_{p-1}|^2} = \frac14 (p-1)^2 |\psi|^2\tag{5.5.34} \end{equation}

since the angle between \(\psi\) and \(\alpha_{p-1}\) is \(\frac{2\pi}3\text{.}\) Thus,

\begin{equation} \cos^2\theta_\alpha = \frac{(\alpha\cdot\psi)^2}{|\alpha|^2 |\psi|^2} = \frac{(p-1)^2/4}{p(p-1)/2} = \frac{p-1}{2p} = \frac12 \left(1-\frac1p\right)\tag{5.5.35} \end{equation}

and similarly for \(\theta_\beta\) and \(\theta_\gamma\text{,}\) which, using (5.5.33), yields

\begin{equation} \frac32 - \frac12\left(\frac1p+\frac1q+\frac1r\right) \lt 1\tag{5.5.36} \end{equation}

or in other words

\begin{equation} \frac1p + \frac1q + \frac1r \gt 1 .\tag{5.5.37} \end{equation}

There are several ways to satisfy (5.5.37). If \(q=r=2\text{,}\) we get the diagrams of type \(D\text{,}\) and the Lie algebras denoted \(\dd_{p+2}\text{.}\) The only other distinct possibilities are \(r=2\text{,}\) \(q=3\text{,}\) and \(p\in\{3,4,5\}\text{.}\) These three exceptional cases belong to class \(E\text{,}\) and are denoted \(\ee_6\text{,}\) \(\ee_7\text{,}\) and \(\ee_8\text{.}\)
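
The solutions of (5.5.37) can likewise be enumerated directly (a small sketch; the cutoff on \(p\) is arbitrary, since the \(q=r=2\) series continues for all \(p\)):

from fractions import Fraction

# Enumerate p >= q >= r >= 2 with 1/p + 1/q + 1/r > 1 (exact arithmetic).
solutions = [(p, q, r)
             for p in range(2, 12)
             for q in range(2, p + 1)
             for r in range(2, q + 1)
             if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1]
print(solutions)
# (p, 2, 2): type D, rank p + 2;  (3, 3, 2), (4, 3, 2), (5, 3, 2): e_6, e_7, e_8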

The allowed Dynkin diagrams appear in Table 5.5.3.

Table 5.5.3. The Dynkin diagrams for simple Lie algebras.
\(\aa_n\) \(\begin{aligned} \bullet\!-\!\bullet\!-\dotsb-\!\bullet \end{aligned}\)
\(\bb_n\) \(\begin{aligned} \bullet=\!\!\Leftarrow\bullet\!-\dotsb-\!\bullet \end{aligned}\)
\(\cc_n\) \(\begin{aligned} \bullet\Rightarrow\!\!=\amp\bullet\!-\dotsb-\!\bullet \end{aligned}\)
\(\dd_n\) \(\begin{aligned} \amp\>\bullet \\[-1ex] \amp\,\,\mid \\[-1ex] \bullet\,\!-\!\amp\bullet\!-\!\bullet\!-\dotsb-\!\bullet \end{aligned}\)
\(\ee_6\) \(\begin{aligned} \amp\>\bullet \\[-1ex] \amp\,\,\mid \\[-1ex] \bullet\!-\!\bullet\!-\!\amp\bullet\!-\!\bullet\!-\!\>\bullet \end{aligned}\) \(\ee_7\) \(\begin{aligned} \amp\>\bullet \\[-1ex] \amp\,\,\mid \\[-1ex] \bullet\!-\!\bullet\!-\!\amp\bullet\!-\!\bullet\!-\!\bullet\!-\!\>\bullet \end{aligned}\) \(\quad\ee_8\) \(\begin{aligned} \amp\>\bullet \\[-1ex] \amp\,\,\mid \\[-1ex] \bullet\!-\!\bullet\!-\!\amp\bullet\!-\!\bullet\!-\!\bullet\!-\!\bullet\!-\!\>\bullet \end{aligned}\)
\(\ff_4\) \(\begin{aligned} \bullet\!-\!\bullet\Rightarrow\!\!=\bullet\!-\!\bullet \end{aligned}\)
\(\gg_2\) \(\begin{aligned} \bullet\Rrightarrow\!\!\equiv\bullet \end{aligned}\)

Subsection 5.5.7 Special Cases

The four infinite families correspond to known symmetry groups. In terms of their Lie algebras, we have

\begin{equation} \begin{aligned} \aa_n \amp= \su(n+1) = \su(n+1,\CC) , \\ \bb_n \amp= \so(2n+1) = \su(2n+1,\RR) , \\ \cc_n \amp= \sp(2n) = \su(n,\HH) , \\ \dd_n \amp= \so(2n) = \su(2n,\RR) .\end{aligned}\tag{5.5.38} \end{equation}

There are similar correspondences for four of the five exceptional cases, namely

\begin{equation} \begin{aligned} \ff_4 \amp= \su(3,\OO) , \\ \ee_6 \amp= \sl(3,\OO) = \su(3,\CC\otimes\OO) , \\ \ee_7 \amp= \sp(6,\OO) = \su(3,\HH\otimes\OO) , \\ \ee_8 \amp= \su(3,\OO\otimes\OO) ,\end{aligned}\tag{5.5.39} \end{equation}

but the last case is somewhat different, namely

\begin{equation} \gg_2 = \mathrm{Der}(\OO) ,\tag{5.5.40} \end{equation}

which at the group level says that

\begin{equation} G_2 = \mathrm{Aut}(\OO) ,\tag{5.5.41} \end{equation}

that is, \(G_2\) is the group of automorphisms of the octonions. We will discuss all of these correspondences further below.

Writing out the first few cases in Table 5.5.3, there are several that overlap. For instance,

\begin{equation} \begin{aligned} \su(2) \amp\cong \aa_1 \cong \bb_1 \cong \so(3) ,\\ \so(5) \amp\cong \bb_2 \cong \cc_2 \cong \su(2,\HH) . \end{aligned}\tag{5.5.42} \end{equation}

Somewhat more unexpectedly, we have

\begin{equation} \su(4) \cong \aa_3 \cong \dd_3 \cong \so(6) ,\tag{5.5.43} \end{equation}

as well as

\begin{equation} \so(4) \cong \dd_2 \cong \aa_1\oplus\aa_1 \cong \su(2)\oplus\su(2)\tag{5.5.44} \end{equation}

showing explicitly that \(\dd_2\text{,}\) whose Dynkin diagram consists of two disconnected points, is not simple. To avoid these repetitions, one normally assumes that \(n\gt1\) for \(\bb_n\text{,}\) that \(n\gt2\) for \(\cc_n\text{,}\) and that \(n\gt3\) for \(\dd_n\text{.}\)
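
These low-rank coincidences can also be checked by comparing dimensions (a quick sketch; the standard dimension formulas used below are well known but are not derived in this section):

# Standard dimension formulas: dim su(n) = n^2 - 1, dim so(n) = n(n-1)/2,
# dim sp(2n) = n(2n+1).
def dim_su(n): return n * n - 1
def dim_so(n): return n * (n - 1) // 2
def dim_sp(two_n): return (two_n // 2) * (two_n + 1)

print(dim_su(2), dim_so(3))              # 3 3    a_1 = b_1
print(dim_so(5), dim_sp(4))              # 10 10  b_2 = c_2
print(dim_su(4), dim_so(6))              # 15 15  a_3 = d_3
print(dim_so(4), dim_su(2) + dim_su(2))  # 6 6    d_2 = a_1 (+) a_1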

Finally, the Lie algebras \(\ee_n\) correspond to the cases described above, with parameter values \((p,q,r)=(n-3,3,2)\) and \(n=6,7,8\text{.}\) We could in principle consider the cases \(n=4,5\) as (also) being of type \(E\text{.}\) We would then have the duplications

\begin{equation} \begin{aligned} \ee_5 \amp\cong \dd_5 \cong \so(10) ,\\ \ee_4 \amp\cong \aa_4 \cong \su(5) , \end{aligned}\tag{5.5.45} \end{equation}

so normally this is not done. However, it is worth pointing out that the nested sequence of Lie algebras

\begin{equation} \su(5) \subset \so(10) \subset \ee_6 \subset \ee_7 \subset \ee_8\tag{5.5.46} \end{equation}

has a long history in physics, with each of the corresponding Lie groups being considered as candidates for the symmetry group of a so-called Grand Unified Theory, combining three of the four fundamental forces of nature, namely electromagnetism and the strong and weak nuclear forces.

1 For instance, the positive root “closest” to the hyperplane must be simple. The orthogonal projections of the remaining roots are still positive (why?), so this process can be repeated, resulting in a basis of simple roots. It remains to check that there can be no other simple roots, and that the remaining roots can be expanded as claimed.