We have to devote some space to Exercise 1 of the previous post.
The problem was: Prove that
<ba,c> = <b,ca*>,
where <u,v> is the scalar part of the product u*v, with u* = τ(u).
I have mentioned that it can be done the hard way, using the definitions (and Anna did it that way), but there is also an easy way. It would be a pity not to examine the easy way, as it will be useful for us later on. The easy way uses the matrix representation of Cl(V) that we examined in Part 9. Specifically, we associate to u = (p0,p) the 2x2 complex matrix m(u):
m(u) = pμσμ = p0σ0 + ... + p3σ3,
where σ0 is the identity matrix, and σi (i=1,2,3) are the three Hermitian Pauli matrices. With this association we automatically have that
m(uv) = m(u)m(v),
and
m(u*) = m(u)*,
where u*= τ(u), and m(u)* is the Hermitian conjugate of m(u). We also have the fact that p0, the scalar part of u, is given by
p0 = (1/2) Tr(m(u)).
Thus
<u,v> = (1/2) Tr(m(u*v)) = (1/2) Tr(m(u)*m(v)).
We can now return to our problem. We have
<ba,c> = (1/2) Tr( m(ba)*m(c) ) = (1/2) Tr( m(a)*m(b)*m(c) ),
and
<b,ca*> = (1/2) Tr( m(b)*m(c)m(a*) ) = (1/2) Tr( m(b)*m(c)m(a)* ).
But
Tr( m(a)*m(b)*m(c) ) = Tr( m(b)*m(c)m(a)* ),
because in general Tr(XY) = Tr(YX) - this is the fundamental property of the trace.
QED.
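For readers who like to double-check such identities by machine, here is a small numerical sanity check of the proof above in the Pauli representation. It is my addition, not part of the original argument; numpy is assumed.

```python
import numpy as np

# Pauli matrices sigma_0 (identity), sigma_1, sigma_2, sigma_3
sigma = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def m(p):
    """m(u) = p_mu sigma_mu for u with complex components p = (p0, p1, p2, p3)."""
    return sum(p[mu] * sigma[mu] for mu in range(4))

def scal(X, Y):
    """<u,v> = (1/2) Tr(m(u)* m(v)), with * the Hermitian conjugate."""
    return 0.5 * np.trace(X.conj().T @ Y)

rng = np.random.default_rng(7)
ma, mb, mc = (m(rng.normal(size=4) + 1j * rng.normal(size=4)) for _ in range(3))

lhs = scal(mb @ ma, mc)           # <ba, c>,  using m(ba) = m(b) m(a)
rhs = scal(mb, mc @ ma.conj().T)  # <b, ca*>, using m(a*) = m(a)*
assert np.isclose(lhs, rhs)
print("identity <ba,c> = <b,ca*> verified numerically")
```

The check uses only the three facts listed in the post: m(uv) = m(u)m(v), m(u*) = m(u)*, and p0 = (1/2) Tr(m(u)).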
After this introduction we can start with the main topic of this post - the more natural representation of Cl(V), without using Pauli matrices.
In a comment to my previous post I quoted the famous mathematician Emil Artin:
Emil Artin in "Geometric Algebra", p. 13, Dover 2016, wrote:
"Mathematical education is still suffering from the enthusiasms which the discovery of the isomorphism (between the ring of endomorphisms and the ring of matrices A.J.)
has aroused. The result has been that geometry has been eliminated and
replaced by computations. Instead of the intuitive maps of a space
preserving addition and multiplication by scalars (these maps have an
immediate geometric meaning) matrices have been introduced. From the
innumerable absurdities --from a pedagogical point of view--let me point
out one example and contrast it with the direct description. (...) It
is my experience that proofs involving matrices can be shortened by 50%
if one throws the matrices out. Sometimes it can not be done; a
determinant may have to be computed."
Here I will go against this warning and make an exception.
Exceptions are sometimes even more important than rules, though it is bad
when using an exception becomes a rule!
By the way, here is an example from Artin's biography in Wikipedia:
"On the orders of a Hamburg doctor whom he had consulted about a chronic cough, Artin had given up smoking years before. He had vowed not to smoke so long as Adolf Hitler remained in power. On May 8, 1945, at the news of Germany's surrender and the fall of the Third Reich, Natascha made the mistake of reminding him of this vow, and in lieu of a champagne toast, he indulged in what was intended to be the smoking of a single, celebratory cigarette. Unfortunately, the single cigarette led to a second, and another after that. Artin returned to heavy smoking for the rest of his life."
And so we have our geometric Clifford algebra of space Cl(V) (Cl(3) in the standard notation). We have endowed V with the Euclidean metric and orientation, which enabled us to consider Cl(V) as a complex space of four dimensions. If ej (j=1,2,3) is an oriented orthonormal basis in V, then we set
i = e1e2e3,
which is independent of the choice of such a basis. We select such a basis, and this choice enables us to realize Cl(V) as an algebra of complex matrices. In Part 9 we used the Pauli matrices, here we will do it in a more natural way.
Note: We will obtain the Pauli matrices also in a
"natural way", but it will come later on, after we start discussing
"observers", "reference frames", and "measurements".
We are in the category of vector spaces and algebras. We have also
decided to switch, for our case, from reals to complex numbers as the
basic field. It came out naturally. Now Cl(V) is an algebra, we have
associative multiplication defined for elements of Cl(V). But, first of
all Cl(V) is a complex vector space. For vector spaces we have the
concept of endomorphisms--in our case complex linear maps of Cl(V) into
itself. Endomorphisms can be naturally composed. They form an
associative algebra with unit, denoted End(Cl(V)), where the unit is the
identity map. Notice that we are talking about endomorphisms of Cl(V),
not about endomorphisms of V. V is a real vector space, and
endomorphisms of V form a real algebra.
Now, every element u of Cl(V) defines an endomorphism of Cl(V), namely left multiplication by u, which we denote by L(u):
L(u)v = uv for all u,v in Cl(V).
Since Cl(V) is an associative algebra, we instantly get
L(u)L(v) = L(uv).
Also L(1) = Id.
Thus we have a homomorphism L between two algebras, Cl(V) and End(Cl(V)). L is "faithful", that is, L(u)=0 if and only if u=0. This follows instantly by selecting v=1 in the formula L(u)v = uv. In other words Cl(V), as an algebra, can be identified with its image in End(Cl(V)).
We do not have matrices yet, but we already have endomorphisms that Artin was talking about in the quote above. Now we will go for matrices. For this we select an oriented orthonormal basis ei in V, i=1,2,3. Then automatically we have a complex basis Eμ (μ=0,1,2,3) in Cl(V):
E0 = 1, Ei = ei (i=1,2,3).
Once we have a basis in Cl(V), its endomorphisms are naturally represented
by matrices. We are going to find the complex 4x4 matrices Lμ representing the endomorphisms L(Eμ). Here I will assume that we already know how to find the matrix representing a given endomorphism in a given basis.
To be continued in the next post (unless one of the Readers has enough patience to do this calculation).
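For a Reader who would rather check the hand calculation by machine, here is a minimal sketch. It uses the Pauli realization from Part 9 purely as a bookkeeping device for multiplying the basis elements Eμ; numpy and the convention "column ν of L(Eμ) holds the coordinates of EμEν" are my assumptions, not prescribed by the post.

```python
import numpy as np

# E_0 = 1, E_i = e_i, realized as Pauli matrices (bookkeeping only)
E = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def coords(mat):
    """Coordinates p_mu of an element w.r.t. E: p_mu = (1/2) Tr(E_mu mat),
    valid because Tr(E_mu E_nu) = 2 delta_mu_nu."""
    return np.array([0.5 * np.trace(E[mu] @ mat) for mu in range(4)])

def L_matrix(mu):
    """4x4 matrix of L(E_mu): column nu holds the coordinates of E_mu E_nu."""
    return np.column_stack([coords(E[mu] @ E[nu]) for nu in range(4)])

for mu in range(4):
    print(f"L{mu} =\n{np.round(L_matrix(mu), 6)}")
```

Changing `E[mu] @ E[nu]` to `E[nu] @ E[mu]` gives the matrices of right multiplication in the same way.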
Next to last paragraph, at the beginning:
"Once we a basis in Cl(V),"
probably a verb missing, maybe "have". FWIW.
Fixed. Thank you!
Yeh, I felt that we need not multiply those 2x2 matrices explicitly 🙈 And I tried to apply the property Tr(XY) = Tr(YX), but failed to use the most important thing: p0 = (1/2) Tr(m(u)). Such an elegant proof you've got. Thanks a lot for this exercise ...
Seeing the way through - it comes with experience. When you have enough experience with using a certain tool, you look at the problem and you "know" without thinking: "it can be done", even if you have no idea "how". Without enough experience you feel like you are in a maze, and you try this way or that way, with no guidance. I know it from my own studies. I read a chapter in a math book, and I have a rough idea what it is about. But only by going through a bunch of simple exercises do I start to really "feel" the subject.
And similarly with proofs. I can read a proof of some theorem, and I think "I understand". But that is an illusion. When I close the book, and try to prove it myself, I realize that I have missed some parts, that I do not really understand yet. Understanding requires more than just reading and nodding my head.
One more insight for me: why associativity is so important in quantum mechanics. Because of this immediate consequence L(u)L(v) = L(uv).
Good that you pointed it out.
'why associativity is so important in quantum mechanics'.
To be more precise, the L(u)L(v) = L(uv) property is the key one in representation theory. The product of two group elements should be represented by the product of their representations. I never knew it had something to do with the associativity of the representing algebra. "Oh, how many wondrous discoveries await us...!" 😊
"We are going to find the complex 4x4 matrices Lμ representing the endomorphisms L(Eμ). Here I will assume that we already know how to find the matrix representing a given endomorphism in a given basis."
Is that similar to what you did with quaternions in the post:
https://ark-jadczyk.blogspot.com/2024/10/the-quirks-of-quaternions.html ?
If it is similar, and it seems it is, then for
u = u0 e0 + u1 e1 + u2 e2 + u3 e3
we get when:
-- multiplying u with individual basis elements from the left:
L(E0) : e0 u = u0 e0 + u1 e1 + u2 e2 + u3 e3;
L(E1) : e1 u = u0 e1 + u1 e0 + u2 ie3 + u3 (-ie2);
L(E2) : e2 u = u0 e2 + u1 (-ie3) + u2 e0 + u3 ie1;
L(E3) : e3 u = u0 e3 + u1 ie2 + u2 (-ie1) + u3 e0;
-- multiplying u with individual basis elements from the right:
R(E0) : u e0 = u0 e0 + u1 e1 + u2 e2 + u3 e3;
R(E1) : u e1 = u0 e1 + u1 e0 + u2 (-ie3) + u3 ie2;
R(E2) : u e2 = u0 e2 + u1 ie3 + u2 e0 + u3 (-ie1);
R(E3) : u e3 = u0 e3 + u1 (-ie2) + u2 ie1 + u3 e0.
To get matrices L(Eμ) and R(Eμ) we can either read the uμ next to eμ to get the matrices' rows (like in the post "The Quirks of Quaternions") or read the eμ next to uμ to get the matrices' columns (like in https://math.stackexchange.com/questions/4520554/determine-matrix-of-endomorphism-given-a-basis).
In Wolfram Mathematica notation, we get for L(Eμ):
L0 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};
L1 = {{0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, -i}, {0, 0, i, 0}};
L2 = {{0, 0, 1, 0}, {0, 0, 0, i}, {1, 0, 0, 0}, {0, -i, 0, 0}};
L3 = {{0, 0, 0, 1}, {0, 0, -i, 0}, {0, i, 0, 0}, {1, 0, 0, 0}};
and for R(Eμ):
R0 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};
R1 = {{0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, i}, {0, 0, -i, 0}};
R2 = {{0, 0, 1, 0}, {0, 0, 0, -i}, {1, 0, 0, 0}, {0, i, 0, 0}};
R3 = {{0, 0, 0, 1}, {0, 0, i, 0}, {0, -i, 0, 0}, {1, 0, 0, 0}};
from which it can easily be seen that
L(Eμ) = Transpose[R(Eμ)]
and
R(Eμ) = Conjugate[L(Eμ)] = L(Eμ)*,
which means that, for these matrices, complex conjugation acts the same as matrix transposition.
Also, as expected,
L(Eμ) L(Eμ) = I = R(Eμ) R(Eμ) = L(E0) = R(E0)
and
L(E1) L(E2) = - L(E2) L(E1) = i L(E3) (following the rules for (ei ej) )
R(E1) R(E2) = - R(E2) R(E1) = -i R(E3) (meaning (L(Eμ) L(Eν))* = L(Eμ)* L(Eν)* )
L(Eμ) R(Eν) = R(Eν) L(Eμ) (meaning L(Eμ) L(Eν)* = L(Eν)* L(Eμ) )
while interestingly
L(Ei) R(Ei) = L(Ei) L(Ei)* (for i=1,2,3)
gives diagonal traceless matrices:
L(E1) R(E1) = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, -1}};
L(E2) R(E2) = {{1, 0, 0, 0}, {0, -1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, -1}};
L(E3) R(E3) = {{1, 0, 0, 0}, {0, -1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, 1}}.
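All of the identities listed above can be checked mechanically. Here is a sketch in Python/numpy (my translation of the Mathematica lists; the matrices themselves are exactly those given above):

```python
import numpy as np

i = 1j
L0 = np.eye(4, dtype=complex)
L1 = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,-i],[0,0,i,0]])
L2 = np.array([[0,0,1,0],[0,0,0,i],[1,0,0,0],[0,-i,0,0]])
L3 = np.array([[0,0,0,1],[0,0,-i,0],[0,i,0,0],[1,0,0,0]])
L = [L0, L1, L2, L3]
R = [Lm.T for Lm in L]                               # R(E_mu) = Transpose[L(E_mu)]

for mu in range(4):
    assert np.allclose(R[mu], L[mu].conj())          # R(E_mu) = L(E_mu)*
    assert np.allclose(L[mu] @ L[mu], np.eye(4))     # L(E_mu)^2 = I
assert np.allclose(L[1] @ L[2], i * L[3])            # L1 L2 = i L3
assert np.allclose(R[1] @ R[2], -i * R[3])           # R1 R2 = -i R3
for mu in range(4):
    for nu in range(4):
        assert np.allclose(L[mu] @ R[nu], R[nu] @ L[mu])  # left/right commute
for k in (1, 2, 3):
    D = L[k] @ R[k]
    assert np.allclose(D, np.diag(np.diag(D)))       # diagonal
    assert abs(np.trace(D)) < 1e-12                  # traceless
print("all identities verified")
```

The commuting of all L(Eμ) with all R(Eν) is just associativity in matrix form: u(wv) = (uw)v.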
I have not yet verified all the details, but a good job, Saša!
Partly checked and looks good indeed. More in the new post tomorrow.
Thanks, glad to hear that. Looking forward to the new post.
Saša, have you read the whole Spinor Series from the beginning, so that you knew precisely which post to refer to for the calculations? :) Much respect. I anticipated Dirac matrices, but these are not them: neither L(Eμ), nor R(Eμ), nor their product. Can we get the Dirac matrices from here?
Dirac matrices can be expressed in terms of products of L and R matrices, but that is not my aim here.
Thank you.
Been following Ark's blog more or less regularly since about mid-summer and, like he said, his simple exercises, his excellent guidance through these waters, and some practice can after a while give one a little bit of experience to remember where something similar was seen before.
An interesting thing turned out when checking Kronecker products "¤" of Pauli sigma matrices (as Dirac matrices are exactly that): with sigma0 being the 2×2 identity matrix I, a nice correspondence popped up:
L(E0) R(E0) = sigma0 ¤ sigma0 = I;
L(E1) R(E1) = sigma3 ¤ sigma0 = gamma0;
L(E2) R(E2) = sigma3 ¤ sigma0;
L(E3) R(E3) = sigma3 ¤ sigma3.
https://en.wikipedia.org/wiki/Kronecker_product
FWIW.
P.S. Using Wolfram Mathematica function Reverse,
https://reference.wolfram.com/language/ref/Reverse.html
L(E1) R(E1) = gamma0;
-1 * Reverse[L(E1) R(E1)] = gamma1;
-i * Reverse[L(E3) R(E3)] = gamma2.
For Dirac gamma3 matrix some additional playing would be needed, but as it's not needed for our journey here, leaving it as it is.
Correction:
L(E2) R(E2) = sigma0 ¤ sigma3.
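With this correction in place, the correspondence can be verified with np.kron (my numpy rendering of the "¤" Kronecker product; the diagonal matrices are those listed earlier in the thread):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# L(E_i) R(E_i), the diagonal traceless matrices computed above
LR1 = np.diag([1, 1, -1, -1]).astype(complex)
LR2 = np.diag([1, -1, 1, -1]).astype(complex)
LR3 = np.diag([1, -1, -1, 1]).astype(complex)

assert np.allclose(LR1, np.kron(s3, s0))  # = gamma0 in the Dirac representation
assert np.allclose(LR2, np.kron(s0, s3))  # the corrected identity
assert np.allclose(LR3, np.kron(s3, s3))
print("Kronecker correspondence verified")
```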
When I wrote that gamma matrices can be expressed in terms of products, I really meant: can be expressed (uniquely) in terms of linear combinations of products of L and R matrices. We will see why it is so tomorrow.
So, gamma matrices seem to appear at the request of readers, though they were not initially in the plan.
"We will see why it is so tomorrow". Ark, and if possible, please outline the perspective, what is the role that you prepare for the Cl(V) algebra in the QM play - it will be the algebra of observables, or the vector space of states, or them both?
" it will be the algebra of observables, or the vector space of states, or them both?"
Both. Plus also a third role - the observer. You know, "Trinity".
Though the whole "philosophy" behind the math is not yet clear to me. It is emerging, but what exactly is emerging I still do not fully grasp.
Since we're talking of the 'observer', Henry Stapp appears to be that same Stapp - the author of the quantum theory of consciousness. Rather unexpectedly, because in that 1985 paper LIGHT AS FOUNDATION OF BEING he tried to eliminate the concept of 'observer' from quantum theory, while in his theory of consciousness in the 90s the 'observer' is at the very center of attention. Such a right-about turn!
Indeed. In the past I had many discussions with Stapp on this subject. I mention the problem in "The way out of the quantum trap".
Downloaded "The way out" and have started reading. Thank you!
DeleteМатрицы Дирака следуют из другой алгебры. Этот вопрос изучается в разделе 3.3 (геометрия алгебры векторных полей) по ссылке https://www.researchgate.net/publication/322369062_Matematiceskie_zametki_o_prirode_vesej
Igor, thank you for the reference.
DeleteIn L(u) matrix representation we have complex numbers:
ReplyDelete{{u0, u1, u2, u3},
{u1, u0, -iu3, iu2},
{u2, iu3, u0, -iu1},
{u3, -iu2, iu1, u0}}.
Ark, is that what you expected from us?
Yes. This agrees with Saša's u0 L0 + u1 L1 + u2 L2 + u3 L3. Good job.
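As a closing check (my addition, not part of the thread), one can verify numerically both that the displayed matrix is u0 L0 + u1 L1 + u2 L2 + u3 L3 and that L is a homomorphism, L(u)L(v) = L(uv):

```python
import numpy as np

i = 1j
L = [np.eye(4, dtype=complex),
     np.array([[0,1,0,0],[1,0,0,0],[0,0,0,-i],[0,0,i,0]]),
     np.array([[0,0,1,0],[0,0,0,i],[1,0,0,0],[0,-i,0,0]]),
     np.array([[0,0,0,1],[0,0,-i,0],[0,i,0,0],[1,0,0,0]])]

def L_of(u):
    """L(u) = sum over mu of u_mu L(E_mu)."""
    return sum(u[mu] * L[mu] for mu in range(4))

def explicit(u):
    """The matrix as displayed in the comment above."""
    u0, u1, u2, u3 = u
    return np.array([[u0,    u1,    u2,    u3],
                     [u1,    u0,  -i*u3,  i*u2],
                     [u2,  i*u3,    u0,  -i*u1],
                     [u3, -i*u2,  i*u1,    u0]])

rng = np.random.default_rng(1)
u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.allclose(L_of(u), explicit(u))

# Homomorphism: the coordinates of uv sit in column 0 of L(u) L(v),
# since column 0 is the image of E0 = 1.
uv = (L_of(u) @ L_of(v))[:, 0]
assert np.allclose(L_of(u) @ L_of(v), L_of(uv))
print("L(u) L(v) = L(uv) verified")
```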