Section 5.13 Discrete Translation Operations
We’d like to explore a class of matrices with ones on the upper off-diagonal and in the lower left corner and zeros everywhere else. What do they do to a column vector? How can we understand the operation physically/geometrically? What are the eigenvalues and eigenvectors and what do they mean, geometrically? Consider the \(n\times n\) matrix of the form
\begin{equation}
S_n^{\uparrow}=
\begin{pmatrix}
0\amp 1\amp 0\amp 0\amp\cdots\amp 0\\
0\amp 0\amp 1\amp 0\amp\cdots\amp 0\\
0\amp 0\amp 0\amp 1\amp\cdots\amp 0\\
\vdots\amp \vdots\amp \vdots\amp \vdots\amp\ddots\amp \vdots\\
1\amp 0\amp 0\amp 0\amp\cdots\amp 0
\end{pmatrix}\tag{5.13.1}
\end{equation}
If we let the matrix \(S_n^{\uparrow}\) act on a state \(\ket{X}\) represented as a column vector \(X\) whose entries are labeled \(x_i\) for \(i \in \{1, \dots, n\}\)
\begin{equation}
S_n^{\uparrow}\, X=
\begin{pmatrix}
0\amp 1\amp 0\amp 0\amp\cdots\amp 0\\
0\amp 0\amp 1\amp 0\amp\cdots\amp 0\\
0\amp 0\amp 0\amp 1\amp\cdots\amp 0\\
\vdots\amp \vdots\amp \vdots\amp \vdots\amp\ddots\amp \vdots\\
1\amp 0\amp 0\amp 0\amp\cdots\amp 0
\end{pmatrix}
\begin{pmatrix}
x_1\\
x_2\\
x_3\\
\vdots\\
x_n
\end{pmatrix}
=
\begin{pmatrix}
x_2\\
x_3\\
x_4\\
\vdots\\
x_1
\end{pmatrix}\tag{5.13.2}
\end{equation}
we see that it cycles the entries of the column vector, moving each entry up one position and moving the top entry to the bottom of the column. If, for example, the \(x_i\)’s represent the positions of \(n\) distinguishable beads arranged clockwise on a ring, then the operator \(S_n^{\uparrow}\) moves each bead one position counterclockwise.
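The cyclic shift in equation (5.13.2) is easy to check numerically. Here is a minimal sketch using NumPy; the helper name `shift_matrix` is ours, not standard notation.

```python
import numpy as np

def shift_matrix(n):
    """Build the n x n matrix of Eq. (5.13.1): ones on the upper
    off-diagonal and a one in the lower-left corner."""
    S = np.eye(n, k=1)   # ones on the superdiagonal
    S[n - 1, 0] = 1      # one in the lower-left corner
    return S

n = 5
S = shift_matrix(n)
x = np.arange(1, n + 1)   # the column (x_1, ..., x_5) = (1, 2, 3, 4, 5)
print(S @ x)              # each entry moves up one; x_1 wraps to the bottom
```

Applying `S` is the same as NumPy's `np.roll(x, -1)`: every entry shifts up one slot and the top entry cycles to the bottom.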
Eigenvalues.
To find the eigenvalues of \(S_n^{\uparrow}\text{,}\) we could solve the characteristic equation, but as \(n\) becomes large, this becomes hard to do. Instead, let's explore what happens when we repeat the translation operation on an eigenvector multiple times. If \(\ket{\lambda}\) is an eigenvector of \(S_n^{\uparrow}\) with eigenvalue \(\lambda\text{,}\) then each repetition of the operation multiplies the state by another factor of \(\lambda\text{.}\)
\begin{align}
\textrm{If} \quad S_n^{\uparrow} \ket{\lambda}
\amp =\lambda \ket{\lambda}\tag{5.13.3}\\
\textrm{Then} \quad S_n^{\uparrow}(S_n^{\uparrow} \ket{\lambda})
\amp =\lambda S_n^{\uparrow}\ket{\lambda} =\lambda^2 \ket{\lambda}\tag{5.13.4}\\
\vdots\tag{5.13.5}\\
\quad S_n^{\uparrow}\left((S_n^{\uparrow})^{n-1} \ket{\lambda}\right)
\amp =\lambda^{n-1} S_n^{\uparrow}\ket{\lambda} =\lambda^n \ket{\lambda}\tag{5.13.6}
\end{align}
Notice that if we repeat this translation operation \(n\) times, the beads will get back to where they started so \(S_n^{\uparrow}\) raised to the \(n\)th power must be the identity, \((S_n^{\uparrow})^n=I\text{.}\)
\begin{equation}
I \ket{\lambda}=(S_n^{\uparrow})^n \ket{\lambda}
=\lambda^n \ket{\lambda}= 1 \ket{\lambda}\tag{5.13.7}
\end{equation}
This is an eigenvalue equation for the identity matrix, which shows that \(\lambda\) must be an \(n\)th root of one.
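The geometric argument that \((S_n^{\uparrow})^n=I\) can be confirmed directly by raising the matrix to the \(n\)th power, as in this short NumPy check:

```python
import numpy as np

n = 6
S = np.eye(n, k=1)   # matrix of Eq. (5.13.1)
S[n - 1, 0] = 1

# Shifting n times returns every bead to its starting position,
# so S raised to the nth power should be the identity matrix.
Sn = np.linalg.matrix_power(S, n)
print(np.allclose(Sn, np.eye(n)))   # True
```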
In Section 2.11, we discussed the method of finding roots of one. Write
\begin{align}
\lambda^n \amp =1\tag{5.13.8}\\
\Rightarrow\lambda\amp =(1)^{\frac{1}{n}}
=(e^{i\, 2\pi m})^{\frac{1}{n}} =e^{i\frac{2\pi m}{n}}\tag{5.13.9}
\end{align}
We get \(n\) distinct roots, and therefore \(n\) distinct eigenvalues, one for each integer \(m\in\{0, 1, 2, \dots, n-1\}\text{.}\)
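We can check numerically that the eigenvalues of \(S_n^{\uparrow}\) are exactly these \(n\)th roots of one. This sketch compares the numerically computed eigenvalues against \(e^{i 2\pi m/n}\text{,}\) sorting both sets by complex angle since the ordering returned by the eigenvalue solver is arbitrary:

```python
import numpy as np

n = 5
S = np.eye(n, k=1)   # matrix of Eq. (5.13.1)
S[n - 1, 0] = 1

eigs = np.linalg.eigvals(S)
roots = np.exp(2j * np.pi * np.arange(n) / n)   # e^{i 2 pi m / n}, m = 0..n-1

# Same set of complex numbers, up to ordering: sort both by angle.
key = lambda z: np.angle(z)
print(np.allclose(sorted(eigs, key=key), sorted(roots, key=key)))   # True
```

Note that every eigenvalue has modulus one, as it must for an operation that merely rearranges the entries of a vector.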
Eigenvectors.
Now, to find the eigenvectors, we use the eigenvalue equation, one eigenvalue at a time, as usual, see Section 4.3. We’ll illustrate with the first non-trivial eigenvalue \(\lambda_1=e^{i\frac{2\pi}{n}}\text{.}\)
\begin{align}
S_n^{\uparrow} \ket{\lambda_1}\amp =\lambda_1 \ket{\lambda_1}\tag{5.13.10}\\
\begin{pmatrix}
0\amp 1\amp 0\amp 0\amp\cdots\amp 0\\
0\amp 0\amp 1\amp 0\amp\cdots\amp 0\\
0\amp 0\amp 0\amp 1\amp\cdots\amp 0\\
\vdots\amp \vdots\amp \vdots\amp \vdots\amp\ddots\amp \vdots\\
1\amp 0\amp 0\amp 0\amp\cdots\amp 0
\end{pmatrix}
\begin{pmatrix}
x_1\\
x_2\\
x_3\\
\vdots\\
x_n
\end{pmatrix}
\amp =
e^{i\frac{2\pi}{n}}
\begin{pmatrix}
x_1\\
x_2\\
x_3\\
\vdots\\
x_n
\end{pmatrix} \tag{5.13.11}\\
\begin{pmatrix}
x_2\\
x_3\\
x_4\\
\vdots\\
x_1
\end{pmatrix}
\amp =
\begin{pmatrix}
e^{i\frac{2\pi}{n}}\, x_1\\
e^{i\frac{2\pi}{n}}\, x_2\\
e^{i\frac{2\pi}{n}}\, x_3\\
\vdots\\
e^{i\frac{2\pi}{n}}\, x_n
\end{pmatrix} \tag{5.13.12}
\end{align}
You can read the last equation (5.13.12) as a set of \(n\) equations, one for each row of the column vectors. If you start at the top of the column and set \(x_1=1\text{,}\) you can read off the value \(x_2=e^{i\frac{2\pi}{n}}\text{.}\) Then work your way downward through the column, \(x_{3}=e^{i\frac{2\pi}{n}}\, x_2=e^{i\frac{4\pi}{n}}\text{,}\) etc. Therefore, the eigenvector corresponding to the eigenvalue \(\lambda_1=e^{i\frac{2\pi}{n}}\) is
\begin{equation}
\ket{\lambda_1}=\ket{e^{i\frac{2\pi}{n}}}
\doteq \begin{pmatrix}
1\\
e^{i\frac{2\pi}{n}}\\
e^{i\frac{4\pi}{n}}\\
\vdots\\
e^{i\frac{2\pi(n-1)}{n}}
\end{pmatrix} \tag{5.13.13}
\end{equation}
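As a final check, the eigenvector in equation (5.13.13) can be built from successive powers of \(\lambda_1\) and verified against the eigenvalue equation numerically:

```python
import numpy as np

n = 7
S = np.eye(n, k=1)   # matrix of Eq. (5.13.1)
S[n - 1, 0] = 1

lam1 = np.exp(2j * np.pi / n)
v = lam1 ** np.arange(n)   # components 1, lam1, lam1^2, ..., lam1^(n-1)

# Verify the eigenvalue equation S v = lam1 v of Eq. (5.13.10).
# The wrap-around row works because lam1^n = 1.
print(np.allclose(S @ v, lam1 * v))   # True
```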
Geometrically, the components of this eigenvector are themselves the \(n\)th roots of one: \(n\) equally spaced points on the unit circle in the complex plane. Moving down the column advances the phase by \(\frac{2\pi}{n}\) at each step, so the eigenvector describes a wave whose phase winds once, uniformly, around the ring of beads.

