Friday, December 20, 2024

Spin Chronicles Part 27: Back to the roots

We have to devote some space to Exercise 1 of the previous post.

Back to the roots

The problem was: Prove that

<ba,c> = <b,ca*>,

where <u,v> is the scalar part of the product u*v, with u* = τ(u).


I have mentioned that it can be done in a hard way, using the definitions (and Anna did it), but there is also an easy way. It would be a pity not to examine the easy way, as it will be useful for us later on. The easy way involves using the matrix representation of Cl(V), which we examined in Part 9. Specifically, we associate to u = (p0,p) the 2x2 complex matrix m(u):

m(u) = pμσμ = p0σ0 + p1σ1 + p2σ2 + p3σ3

where σ0 is the identity matrix, and σi (i=1,2,3) are the three Hermitian Pauli matrices. With this association we automatically have that

m(uv) = m(u)m(v),

and

m(u*) = m(u)*,

where u*= τ(u), and m(u)* is the Hermitian conjugate of m(u). We also have the fact that p0, the scalar part of u, is given by

p0 = (1/2) Tr(m(u)).

Thus

<u,v> = (1/2) Tr(m(u*v)) = (1/2) Tr(m(u)*m(v)).

We can now return to our problem - we have


<ba,c> = (1/2) Tr( m(ba)*m(c) ) = (1/2) Tr( m(a)*m(b)*m(c) ),

and

<b,ca*> = (1/2) Tr( m(b)*m(c)m(a)* ).

But

Tr( m(a)*m(b)*m(c) ) = Tr( m(b)*m(c)m(a)* ),

because in general Tr(XY) = Tr(YX) - this is the cyclic property of the trace.

QED.
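For readers who like to double-check with a computer, here is a minimal numerical sketch (using numpy; the elements a, b, c are random complex test data, and all names are my own bookkeeping, not notation from the series) verifying both the trace formula for the scalar part and the identity <ba,c> = <b,ca*> in the Pauli representation of Part 9:

```python
# Numerical sanity check (a sketch; a, b, c are random complex test elements).
# m(u) maps u = (p0, p1, p2, p3) to p0*sigma0 + p1*sigma1 + p2*sigma2 + p3*sigma3;
# tau(u) corresponds to the Hermitian conjugate of m(u).
import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def m(p):
    """Matrix of the element with complex components p = (p0, p1, p2, p3)."""
    return sum(pi * si for pi, si in zip(p, sigma))

def scal(M, N):
    """<u,v> = (1/2) Tr( m(u)^dagger m(v) ), computed from M = m(u), N = m(v)."""
    return 0.5 * np.trace(M.conj().T @ N)

rng = np.random.default_rng(1)

# scalar part: p0 = (1/2) Tr m(u), since the sigma_i (i=1,2,3) are traceless
p = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.isclose(0.5 * np.trace(m(p)), p[0])

A, B, C = (m(rng.normal(size=4) + 1j * rng.normal(size=4)) for _ in range(3))
lhs = scal(B @ A, C)           # <ba, c>, since m(ba) = m(b) m(a)
rhs = scal(B, C @ A.conj().T)  # <b, ca*>, since m(a*) = m(a)^dagger
assert np.isclose(lhs, rhs)
```

The assertion holds for any choice of test elements, since it relies only on Tr(XY) = Tr(YX).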

After this introduction we can start with the main topic of this post - the more natural representation of Cl(V), without using Pauli matrices.

In a comment to my previous post I quoted the famous mathematician Emil Artin:

Emil Artin in "Geometric Algebra", p. 13, Dover 2016, wrote:


"Mathematical education is still suffering from the enthusiasms which the discovery of the isomorphism (between the ring of endomorphisms and the ring of matrices A.J.) has aroused. The result has been that geometry has been eliminated and replaced by computations. Instead of the intuitive maps of a space preserving addition and multiplication by scalars (these maps have an immediate geometric meaning) matrices have been introduced. From the innumerable absurdities --from a pedagogical point of view--let me point out one example and contrast it with the direct description. (...)  It is my experience that proofs involving matrices can be shortened by 50% if one throws the matrices out. Sometimes it can not be done; a determinant may have to be computed."

Here I will go against this warning and thus make an exception. Exceptions are sometimes even more important than rules, though it is bad when using an exception becomes the rule!

By the way, here is an example from Artin's biography in Wikipedia:

"On the orders of a Hamburg doctor whom he had consulted about a chronic cough, Artin had given up smoking years before. He had vowed not to smoke so long as Adolf Hitler remained in power. On May 8, 1945, at the news of Germany's surrender and the fall of the Third Reich, Natascha made the mistake of reminding him of this vow, and in lieu of a champagne toast, he indulged in what was intended to be the smoking of a single, celebratory cigarette. Unfortunately, the single cigarette led to a second, and another after that. Artin returned to heavy smoking for the rest of his life."

And so we have our geometric Clifford algebra of space Cl(V) (Cl(3) in the standard notation). We have endowed V with the Euclidean metric and orientation, which enabled us to consider Cl(V) as a complex space of four dimensions. If ej (j=1,2,3) is an oriented orthonormal basis in V, then we set

i = e1e2e3

which is independent of the choice of such a basis. We select such a basis, and this choice enables us to realize Cl(V) as an algebra of complex matrices. In Part 9 we used the Pauli matrices; here we will do it in a more natural way.
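A quick numerical sanity check (a numpy sketch; the rotation angle and all names are arbitrary choices of mine) shows that in the Pauli representation of Part 9 the image of e1e2e3 is indeed i times the identity, and that it is unchanged under an oriented rotation of the basis:

```python
# Sanity check: sigma1 sigma2 sigma3 = i * Identity, and the same holds
# after rotating the orthonormal basis by any R in SO(3) (here: a rotation
# about the third axis), illustrating basis independence of i = e1 e2 e3.
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

triple = s[0] @ s[1] @ s[2]
assert np.allclose(triple, 1j * np.eye(2))

# rotate the basis about the 3rd axis by an arbitrary angle (det R = +1)
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])
sp = [sum(R[i, j] * s[j] for j in range(3)) for i in range(3)]
assert np.allclose(sp[0] @ sp[1] @ sp[2], 1j * np.eye(2))
```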

Note: We will obtain the Pauli matrices also in a "natural way", but it will come later on, after we start discussing "observers", "reference frames", and "measurements".

We are in the category of vector spaces and algebras. We have also decided to switch, for our case, from the reals to the complex numbers as the basic field. It came out naturally. Now, Cl(V) is an algebra: we have an associative multiplication defined for elements of Cl(V). But, first of all, Cl(V) is a complex vector space. For vector spaces we have the concept of endomorphisms - in our case, complex-linear maps of Cl(V) into itself. Endomorphisms can be naturally composed. They form an associative algebra with unit, denoted End(Cl(V)), where the unit is the identity map. Notice that we are talking about endomorphisms of Cl(V), not about endomorphisms of V. V is a real vector space, and endomorphisms of V form a real algebra.

Now, every element u of Cl(V) defines an endomorphism of Cl(V), namely left multiplication by u, which we denote by L(u):

L(u)v = uv  for all u,v in Cl(V).

Since Cl(V) is an associative algebra, we instantly get

L(u)L(v) = L(uv).

Also L(1) = Id.

Thus we have a homomorphism L between two algebras, Cl(V) and End(Cl(V)). L is "faithful", that is, L(u)=0 if and only if u=0. This follows instantly by selecting v=1 in the formula L(u)v = uv. In other words, Cl(V), as an algebra, can be identified with its image in End(Cl(V)).

We do not have matrices yet, but we already have endomorphisms that Artin was talking about in the quote above. Now we will go for matrices. For this we select an oriented orthonormal basis ei in V, i=1,2,3. Then automatically we have a complex basis Eμ (μ=0,1,2,3) in Cl(V):

E0 = 1, Ei = ei (i=1,2,3).

Once we have a basis in Cl(V), its endomorphisms are naturally represented by matrices. We are going to find the complex 4x4 matrices Lμ representing the endomorphisms L(Eμ). Here I will assume that we already know how to find the matrix representing a given endomorphism in a given basis.
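For the impatient, here is one possible way to organize that calculation as a sketch in numpy (the structure-constant array c and all names are my own bookkeeping, not notation from the series): encode the products EμEν via eiej = δij + iεijk ek, and read off the matrix of L(Eμ) column by column.

```python
# Sketch: build the 4x4 matrices of L(E_mu) in the basis E_mu from the
# multiplication table of Cl(V). Column nu of L(E_mu) holds the coordinates
# of the product E_mu E_nu in the basis (E0, E1, E2, E3).
import numpy as np

# totally antisymmetric epsilon_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# structure constants: E_mu E_nu = sum_rho c[mu, nu, rho] E_rho
c = np.zeros((4, 4, 4), dtype=complex)
c[0, :, :] = np.eye(4)                    # E0 E_nu = E_nu
c[:, 0, :] = np.eye(4)                    # E_mu E0 = E_mu
for i in range(3):
    for j in range(3):
        c[i + 1, j + 1, 0] = 1.0 if i == j else 0.0      # e_i e_j -> delta_ij 1
        for k in range(3):
            c[i + 1, j + 1, k + 1] = 1j * eps[i, j, k]   # ... + i eps_ijk e_k

# matrix of L(E_mu): entry [rho, nu] is c[mu, nu, rho]
L = [c[mu].T for mu in range(4)]

# homomorphism checks: e1 e2 = i e3 and e_i e_i = 1
assert np.allclose(L[1] @ L[2], 1j * L[3])
assert np.allclose(L[1] @ L[1], np.eye(4))
```

Reading the coordinates of EμEν into columns (rather than rows) is what makes the matrix of a composition equal the product of the matrices, L(u)L(v) = L(uv).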


To be continued in the next post (unless one of the Readers has enough patience to do this calculation).



12 comments:

  1. Next to last paragraph, at the beginning:
    "Once we a basis in Cl(V),"
    probably a verb missing, maybe "have". FWIW.

    ReplyDelete
  2. Yeah, I felt that we need not multiply those 2x2 matrices explicitly 🙈 And I tried to apply the property Tr(XY) = Tr(YX), but failed to use the most important thing: p0 = (1/2) Tr(m(u)). Such an elegant proof you've got. Thanks a lot for this exercise ...

    ReplyDelete
    Replies
    1. Seeing the way through - it comes with experience. When you have enough experience with using a certain tool, you look at the problem and you "know" without thinking: "it can be done", even if you have no idea "how". Without enough experience you feel like being in a maze, and you try this way or that way, with no guidance. I know it from my own studies. I read a chapter in a math book, and I have a rough idea what it is about. But only by going through a bunch of simple exercises do I start to really "feel" the subject.

      Delete
    2. And similarly with proofs. I can read a proof of some theorem, and I think "I understand". But that is an illusion. When I close the book and try to prove it myself, I realize that I have missed some parts, that I do not really understand yet. Understanding requires more than just reading and nodding my head.

      Delete
  3. One more insight for me: why associativity is so important in quantum mechanics. Because of this immediate consequence L(u)L(v) = L(uv).

    ReplyDelete
  4. "We are going to find the complex 4x4 matrices Lμ representing the endomorphisms L(Eμ). Here I will assume that we already know how to find the matrix representing a given endomorphism in a given basis."

    Is that similar to what you did with quaternions in the post:
    https://ark-jadczyk.blogspot.com/2024/10/the-quirks-of-quaternions.html ?

    ReplyDelete
    Replies
    1. If it is similar, and it seems it is, then for
      u = u0 e0 + u1 e1 + u2 e2 + u3 e3
      we get when:

      -- multiplying u with individual basis elements from the left:
      L(E0) : e0 u = u0 e0 + u1 e1 + u2 e2 + u3 e3;
      L(E1) : e1 u = u0 e1 + u1 e0 + u2 ie3 + u3 (-ie2);
      L(E2) : e2 u = u0 e2 + u1 (-ie3) + u2 e0 + u3 ie1;
      L(E3) : e3 u = u0 e3 + u1 ie2 + u2 (-ie1) + u3 e0;

      -- multiplying u with individual basis elements from the right:
      R(E0) : u e0 = u0 e0 + u1 e1 + u2 e2 + u3 e3;
      R(E1) : u e1 = u0 e1 + u1 e0 + u2 (-ie3) + u3 ie2;
      R(E2) : u e2 = u0 e2 + u1 ie3 + u2 e0 + u3 (-ie1);
      R(E3) : u e3 = u0 e3 + u1 (-ie2) + u2 ie1 + u3 e0.

      To get matrices L(Eμ) and R(Eμ) we can either read the uμ next to eμ to get the matrices' rows (like in the post "The Quirks of Quaternions") or read the eμ next to uμ to get the matrices' columns (like in https://math.stackexchange.com/questions/4520554/determine-matrix-of-endomorphism-given-a-basis).

      In Wolfram Mathematica notation, we get for L(Eμ):
      L0 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};
      L1 = {{0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, -i}, {0, 0, i, 0}};
      L2 = {{0, 0, 1, 0}, {0, 0, 0, i}, {1, 0, 0, 0}, {0, -i, 0, 0}};
      L3 = {{0, 0, 0, 1}, {0, 0, -i, 0}, {0, i, 0, 0}, {1, 0, 0, 0}};
      and for R(Eμ):
      R0 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};
      R1 = {{0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, i}, {0, 0, -i, 0}};
      R2 = {{0, 0, 1, 0}, {0, 0, 0, -i}, {1, 0, 0, 0}, {0, i, 0, 0}};
      R3 = {{0, 0, 0, 1}, {0, 0, i, 0}, {0, -i, 0, 0}, {1, 0, 0, 0}};
      from which it can easily be seen that
      L(Eμ) = Transpose[R(Eμ)]
      and
      R(Eμ) = Conjugate[L(Eμ)] = L(Eμ)*,
      which means that complex conjugation operates identically to matrix transposition on these matrices.

      Also, as expected,
      L(Eμ) L(Eμ) = I = R(Eμ) R(Eμ) = L(E0) = R(E0)
      and
      L(E1) L(E2) = - L(E2) L(E1) = i L(E3) (following the rules for (ei ej) )
      R(E1) R(E2) = - R(E2) R(E1) = -i R(E3) (meaning (L(Eμ) L(Eμ))* = L(Eμ)* L(Eμ)* )
      L(Eμ) R(Eμ) = R(Eμ) L(Eμ) (meaning L(Eμ) L(Eμ)* = L(Eμ)* L(Eμ) )
      while interestingly
      L(Ei) R(Ei) = L(Ei) L(Ei)* (for i=1,2,3)
      gives diagonal traceless matrices:
      L(E1) R(E1) = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, -1}};
      L(E2) R(E2) = {{1, 0, 0, 0}, {0, -1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, -1}};
      L(E3) R(E3) = {{1, 0, 0, 0}, {0, -1, 0, 0}, {0, 0, -1, 0}, {0, 0, 0, 1}}.

      Delete
    2. I have not yet verified all the details, but a good job, Saša!

      Delete
  5. Saša, have you read the whole Spinor Series from the beginning, so that you knew precisely which post to refer to for the calculations? :) Much respect. I anticipated Dirac matrices, but these are not them: neither L(Eμ), nor R(Eμ), nor their product. Can we get the Dirac matrices from here?

    ReplyDelete
  6. The Dirac matrices follow from a different algebra. This question is studied in Section 3.3 (the geometry of the algebra of vector fields) at https://www.researchgate.net/publication/322369062_Matematiceskie_zametki_o_prirode_vesej

    ReplyDelete
