\(\newcommand{\vf}[1]{\mathbf{\boldsymbol{\vec{#1}}}}
\renewcommand{\Hat}[1]{\mathbf{\boldsymbol{\hat{#1}}}}
\let\VF=\vf
\let\HAT=\Hat
\newcommand{\Prime}{{}\kern0.5pt'}
\newcommand{\PARTIAL}[2]{{\partial^2#1\over\partial#2^2}}
\newcommand{\Partial}[2]{{\partial#1\over\partial#2}}
\newcommand{\tr}{{\mathrm tr}}
\newcommand{\CC}{{\mathbb C}}
\newcommand{\HH}{{\mathbb H}}
\newcommand{\KK}{{\mathbb K}}
\newcommand{\RR}{{\mathbb R}}
\newcommand{\HR}{{}^*{\mathbb R}}
\renewcommand{\AA}{\vf A}
\newcommand{\BB}{\vf B}
\newcommand{\CCv}{\vf C}
\newcommand{\EE}{\vf E}
\newcommand{\FF}{\vf F}
\newcommand{\GG}{\vf G}
\newcommand{\HHv}{\vf H}
\newcommand{\II}{\vf I}
\newcommand{\JJ}{\vf J}
\newcommand{\KKv}{\vf K}
\renewcommand{\SS}{\vf S}
\renewcommand{\aa}{\VF a}
\newcommand{\bb}{\VF b}
\newcommand{\ee}{\VF e}
\newcommand{\gv}{\VF g}
\newcommand{\iv}{\vf\imath}
\newcommand{\rr}{\VF r}
\newcommand{\rrp}{\rr\Prime}
\newcommand{\uu}{\VF u}
\newcommand{\vv}{\VF v}
\newcommand{\ww}{\VF w}
\newcommand{\grad}{\vf\nabla}
\newcommand{\zero}{\vf 0}
\newcommand{\Ihat}{\Hat I}
\newcommand{\Jhat}{\Hat J}
\newcommand{\nn}{\Hat n}
\newcommand{\NN}{\Hat N}
\newcommand{\TT}{\Hat T}
\newcommand{\ihat}{\Hat\imath}
\newcommand{\jhat}{\Hat\jmath}
\newcommand{\khat}{\Hat k}
\newcommand{\nhat}{\Hat n}
\newcommand{\rhat}{\HAT r}
\newcommand{\shat}{\HAT s}
\newcommand{\xhat}{\Hat x}
\newcommand{\yhat}{\Hat y}
\newcommand{\zhat}{\Hat z}
\newcommand{\that}{\Hat\theta}
\newcommand{\phat}{\Hat\phi}
\newcommand{\LL}{\mathcal{L}}
\newcommand{\DD}[1]{D_{\textrm{$#1$}}}
\newcommand{\bra}[1]{\langle#1|}
\newcommand{\ket}[1]{|#1\rangle}
\newcommand{\braket}[2]{\langle#1|#2\rangle}
\newcommand{\LargeMath}[1]{\hbox{\large$#1$}}
\newcommand{\INT}{\LargeMath{\int}}
\newcommand{\OINT}{\LargeMath{\oint}}
\newcommand{\LINT}{\mathop{\INT}\limits_C}
\newcommand{\Int}{\int\limits}
\newcommand{\dint}{\mathchoice{\int\!\!\!\int}{\int\!\!\int}{}{}}
\newcommand{\tint}{\int\!\!\!\int\!\!\!\int}
\newcommand{\DInt}[1]{\int\!\!\!\!\int\limits_{#1~~}}
\newcommand{\TInt}[1]{\int\!\!\!\int\limits_{#1}\!\!\!\int}
\newcommand{\Bint}{\TInt{B}}
\newcommand{\Dint}{\DInt{D}}
\newcommand{\Eint}{\TInt{E}}
\newcommand{\Lint}{\int\limits_C}
\newcommand{\Oint}{\oint\limits_C}
\newcommand{\Rint}{\DInt{R}}
\newcommand{\Sint}{\int\limits_S}
\newcommand{\Item}{\smallskip\item{$\bullet$}}
\newcommand{\LeftB}{\vector(-1,-2){25}}
\newcommand{\RightB}{\vector(1,-2){25}}
\newcommand{\DownB}{\vector(0,-1){60}}
\newcommand{\DLeft}{\vector(-1,-1){60}}
\newcommand{\DRight}{\vector(1,-1){60}}
\newcommand{\Left}{\vector(-1,-1){50}}
\newcommand{\Down}{\vector(0,-1){50}}
\newcommand{\Right}{\vector(1,-1){50}}
\newcommand{\ILeft}{\vector(1,1){50}}
\newcommand{\IRight}{\vector(-1,1){50}}
\newcommand{\Partials}[3]
{\displaystyle{\partial^2#1\over\partial#2\,\partial#3}}
\newcommand{\Jacobian}[4]{\frac{\partial(#1,#2)}{\partial(#3,#4)}}
\newcommand{\JACOBIAN}[6]{\frac{\partial(#1,#2,#3)}{\partial(#4,#5,#6)}}
\newcommand{\ii}{\ihat}
\newcommand{\jj}{\jhat}
\newcommand{\kk}{\khat}
\newcommand{\dS}{dS}
\newcommand{\dA}{dA}
\newcommand{\dV}{d\tau}
\renewcommand{\ii}{\xhat}
\renewcommand{\jj}{\yhat}
\renewcommand{\kk}{\zhat}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
\definecolor{fillinmathshade}{gray}{0.9}
\newcommand{\fillinmath}[1]{\mathchoice{\colorbox{fillinmathshade}{$\displaystyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\textstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptscriptstyle\phantom{\,#1\,}$}}}
\)
Section 4.4 Normalization of Eigenvectors
In Section 3.7, we defined the inner product operation on abstract vectors with complex components, such as
\begin{equation}
\vert v\rangle
\doteq\begin{pmatrix}a\\b\\ \vdots
\end{pmatrix} \text{.}\tag{4.4.1}
\end{equation}
If we take the inner product of this vector with itself,
\begin{align}
\langle v\vert v\rangle
\amp = \begin{pmatrix}a^* \amp b^* \amp \dots
\end{pmatrix} \begin{pmatrix}a\\b\\ \vdots
\end{pmatrix}\notag\\
\amp = \vert a\vert^2 + \vert b\vert^2 +\dots\text{,}\tag{4.4.2}
\end{align}
the operation always yields a real, non-negative number, which is positive for any nonzero vector. Thus, we can use the square root of this operation, \(\left\vert\,\vert v\rangle\right\vert\text{,}\)
\begin{equation}
\left\vert \vert v\rangle\right\vert
=\left\{\langle v\vert v\rangle\right\}^{\frac{1}{2}}\tag{4.4.3}
\end{equation}
to define the norm (also called the magnitude or length ) of the vector. This definition is a natural generalization of the dot product of a real vector with itself (see Section 1.7 ).
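For example, applying these definitions to the two-component vector (introduced here only as an illustration)
\begin{equation*}
\vert w\rangle \doteq\begin{pmatrix}1\\i\end{pmatrix}\text{,}
\end{equation*}
we find
\begin{equation*}
\langle w\vert w\rangle
= \vert 1\vert^2 + \vert i\vert^2
= 2
\qquad\Longrightarrow\qquad
\left\vert\,\vert w\rangle\right\vert = \sqrt{2}\text{.}
\end{equation*}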
Definition 4.4. Normalized Vector.
From the eigenvalue/eigenvector equation (4.1.1):
\begin{equation}
A \left|v\right> = \lambda \left|v\right>\tag{4.4.4}
\end{equation}
it is straightforward to show that if \(\vert v\rangle\) is an eigenvector of \(A\text{,}\) then any multiple \(N\vert v\rangle\) of \(\vert v\rangle\) is also an eigenvector, since the constant \(N\) can be pulled through to the left on both sides of the equation.
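Explicitly, since the constant \(N\) commutes with the operator \(A\text{,}\)
\begin{equation*}
A\left(N\vert v\rangle\right)
= N A\vert v\rangle
= N\lambda\vert v\rangle
= \lambda\left(N\vert v\rangle\right)\text{,}
\end{equation*}
so \(N\vert v\rangle\) satisfies the same equation, with the same eigenvalue \(\lambda\text{.}\)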
It is always possible to choose the number \(N\) so that the rescaled eigenvector has length \(1\text{.}\) Such an eigenvector is called normalized. If a choice of \(N\) normalizes a particular vector, then so does \(N e^{i\alpha}\) for any real \(\alpha\text{,}\) that is, the original number multiplied by an arbitrary complex phase. Usually, we choose \(N\) to be real, for simplicity.
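Explicitly, since \(\langle v\vert v\rangle \gt 0\) for any nonzero vector \(\vert v\rangle\text{,}\) one such choice is
\begin{equation*}
N = \frac{1}{\sqrt{\langle v\vert v\rangle}}
\qquad\Longrightarrow\qquad
\left\vert\, N\vert v\rangle\right\vert
= \vert N\vert\,\left\vert\,\vert v\rangle\right\vert
= \frac{\sqrt{\langle v\vert v\rangle}}{\sqrt{\langle v\vert v\rangle}}
= 1\text{,}
\end{equation*}
and multiplying this \(N\) by a phase \(e^{i\alpha}\) leaves the result normalized, since \(\vert e^{i\alpha}\vert = 1\text{.}\)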