Sunday, January 5, 2025

Spin Chronicles Part 33: Consciousness and conscious choices

 Once upon a time, guided by a whisper of intuition—or perhaps a playful nudge from fate—we set out on a journey. At first, our quest seemed clear: to uncover the mysteries of the enigmatic spinors. We had a map (or so we thought) and a destination in mind. But as we wandered deeper into the unknown, the wide road faded into a meandering trail, and the trail became a wisp of a path. Before we knew it, we were in a forest—dense, shadowy, and alive with secrets.

The forest wasn’t on the map, but here we were. And while our grand quest felt like a distant memory, the forest itself had other lessons to teach. At first, we worried: how would we ever find our way? But then we noticed the sweetness in the air, the earthy scent of moss, and the rustling leaves whispering ancient songs. We saw plump berries, glistening with dew—some delicious, others mysterious. And then, as if out of a dream, a gentle roe deer emerged, its soft eyes urging us to follow. It led us to a crystal-clear lake, where the water was cool and refreshing, as though the forest itself offered us a blessing.

... the forest itself offered us a blessing.


Forests, after all, are not just places to get lost; they are places to be found. They nourish the soul, if only we stop to look. So we paused, took a deep breath, and began to notice both the towering trees and the soft carpet of the forest floor. In this moment of stillness, we remembered our original quest. Yes, we were here to understand spinors, but perhaps the forest—the journey—was as important as the destination.

This particular forest is called Geometric Algebra A. It is a simple land, yet rich with wonder. To truly know it, we must not just walk its trails but see its beauty, smell its air, touch its textures, and listen to its tales. Some stories are soft whispers; others roar like waterfalls. This is one of those stories, told by the forest itself.

So, dear traveler, let us begin.

We are in geometric algebra A. It is simple, but it has a rich structure. We need to feel this structure by sight, by smell and by touch. We need to be able to hear the stories it tells us, sometimes silently, sometimes in a really loud voice. So this is one of these stories.

A is simple. In algebra this has a precise meaning: an algebra is simple if it has no non-trivial two-sided ideals. A two-sided ideal is a subalgebra that is at the same time a left and a right ideal. We have not considered two-sided ideals yet (and we will not in the future), but it is not difficult to show that A is indeed simple. We did, however, consider left ideals, and those of a particular form. To construct such an ideal we select a direction (unit vector) n in V, and from it we construct p, with p = p* = pp:

p=(1+n)/2

Then we define, let us call it  In:

In = {u: up=u}.

This is a left ideal.

Now, the defining equation up=u is equivalently written as un=u (convince yourself that this is indeed the case!). Since n² = 1 (n is a unit vector), and n* = n, it follows that n, considered as an operator acting on A from the right, has two possible eigenvalues, +1 and -1. So the equation un=u means that In consists of eigenvectors of n belonging to the eigenvalue +1. This is our left ideal under consideration.

The first thing we notice is that p itself is an element of In. But that is not all of In. In is a complex two-dimensional space. Thus there are two linearly independent (even mutually orthogonal) vectors in In.

Thus we proceed as follows: we choose an oriented orthonormal basis e1,e2,e3 in V in such a way that e3 coincides with n. Then e1 and e2 are perpendicular to n. Then we define a basis E1,E2 in In by choosing:

E1 = p
E2 = (e1 - ie2)/2,

Then magic happens: in this basis the left action of e1,e2,e3 on  In is given exactly by the three Pauli matrices!
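For Readers who like to see such claims verified by a machine, here is a minimal numerical sketch (assuming Python with numpy; the helper names mul, coords, etc. are mine, not anything from the posts). It uses the multiplication formula for A recalled in Part 30 below, builds p, E1, E2 for n = e3, and prints the matrices of the left action of e1, e2, e3 on In. They come out as σ1, σ2, σ3.

```python
import numpy as np

# Elements of A are pairs (scalar, 3-vector), both complex, with the product
# (p0, p)(q0, q) = (p0*q0 + p·q, p0*q + q0*p + i p×q).
def mul(x, y):
    x0, xv = x
    y0, yv = y
    return (x0 * y0 + xv @ yv, x0 * yv + y0 * xv + 1j * np.cross(xv, yv))

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def scale(c, x):
    return (c * x[0], c * x[1])

one = (1 + 0j, np.zeros(3, dtype=complex))
e1, e2, e3 = [(0j, np.eye(3, dtype=complex)[k]) for k in range(3)]

n = e3                                    # the chosen direction
p = scale(0.5, add(one, n))               # p = (1 + n)/2
E1 = p
E2 = scale(0.5, add(e1, scale(-1j, e2)))  # E2 = (e1 - i e2)/2

def coords(x):
    """Coordinates of x in the basis (E1, E2); x is assumed to lie in the ideal."""
    flat = lambda z: np.concatenate(([z[0]], z[1]))
    B = np.column_stack([flat(E1), flat(E2)])
    c, *_ = np.linalg.lstsq(B, flat(x), rcond=None)
    return c

for name, ei in [("e1", e1), ("e2", e2), ("e3", e3)]:
    # column alpha of the matrix is the expansion of ei*E_alpha in (E1, E2)
    M = np.column_stack([coords(mul(ei, E1)), coords(mul(ei, E2))])
    print(name, np.round(M, 10))  # prints sigma_1, sigma_2, sigma_3
```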

We came to the lake in the forest, and it is time to fire up the thinking machine in our brains. We have arrived naturally at the Pauli matrices, which is very rewarding. Except for the fact that there is nothing "natural" in this process! First we had to select a direction n in an otherwise completely isotropic space V. This cannot be deterministic. No deterministic process can lead to breaking a perfect symmetry. It can be done only by a conscious choice. So consciousness enters here (or it can be a random choice, but then consciousness is needed to define what precisely a "random choice" is). In practice the choice of a reference direction, and of an orthonormal basis, is made by a conscious "observer" (or by a machine programmed by a conscious "observer"). You can, of course, replace "observer" by "experimental physicist" or "an engineer", but that will not change the idea.

Then we decided to define E2 the way we did above. Another application of consciousness. Thus, temporarily, I am associating the right action of the algebra on itself, the action of p in up=u, with consciousness. It is not needed for further considerations, but it is something that should be thought about: we have left and right actions of operators on our Hilbert space. In ordinary quantum theory only left actions are considered. What can be the meaning of right actions, if any? But let us abandon philosophy and return to math.

We have the basis Eα (α=1,2) in In, and the basis ei (i=1,2,3) in V, and they are related by the Pauli matrices σi through the following relation:

ei Eα = Eβ (σi)βα.       (*)

Notice that I write the right hand side by putting the basis vectors first, and coefficients after. That has the advantage that matrices transforming the components are transposed to those transforming the basis vectors. This way I do not have to transpose anything.

But what happens if we replace our basis Eα by some other orthonormal basis in In? Then the whole beauty and simplicity of the Pauli matrices will be spoiled. And we like Pauli matrices so much! And here is the place to demonstrate the power of desire. We want Pauli matrices, whatever the cost! So, we start thinking. Where there is a desire, there must be a way! We start looking at our equation (*) from a different point of view. The elements Eα and ei are related (or "correlated") by the Pauli matrices. If we change Eα, perhaps ei also needs to be changed so that the correlation stays the same? We try this great idea of saving our love - the sigmas. The result is condensed in the following statement:

Proposition. There is one and only one way to have Eq. (*), with Pauli matrices in it, valid for all orthonormal bases in In. It goes as follows: if Eα is replaced by E'α related to Eα by a 2 by 2 unitary matrix A of determinant 1 (element of SU(2)):

E'α = Eβ Aβα,

then ei must be replaced by e'i, related to ei by a real orthogonal 3 by 3 matrix R(A) (an element of SO(3)):

e'i = ej R(A)ji,

where the relation between A and R is

A σi A* = σj R(A)ji.

Proof. Left as a straightforward exercise, though one that requires a bit of index gymnastics.
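Here is a small numerical check of the Proposition (a sketch, assuming numpy; A below denotes the SU(2) matrix of the Proposition, not the algebra). The explicit trace formula R(A)ji = (1/2) tr(σj A σi A*) is not in the post; it follows from the displayed relation by multiplying with σj and taking the trace.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def random_su2(rng):
    # a unit quaternion a0 + i(a1*s1 + a2*s2 + a3*s3) gives an SU(2) matrix
    a = rng.normal(size=4)
    a = a / np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * sum(a[k + 1] * sigma[k] for k in range(3))

rng = np.random.default_rng(0)
A = random_su2(rng)

# R(A)_{ji} = (1/2) tr(sigma_j A sigma_i A*)
R = np.array([[0.5 * np.trace(sigma[j] @ A @ sigma[i] @ A.conj().T).real
               for i in range(3)] for j in range(3)])

print(np.allclose(R @ R.T, np.eye(3)),            # R is orthogonal
      np.isclose(np.linalg.det(R), 1.0))          # with determinant 1
for i in range(3):                                # A sigma_i A* = sigma_j R_ji
    lhs = A @ sigma[i] @ A.conj().T
    rhs = sum(sigma[j] * R[j, i] for j in range(3))
    print(np.allclose(lhs, rhs))
```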

And this way we have accomplished something that was left unexplained in the October 16 post, Part 3: Spin frames.

What we see now is that this correlation between spin  frames Eα and orthonormal frames ei is not so "natural" at all. It requires certain arbitrary human-made choices. It has little to do with the "true state of affairs". Spin frames and orthonormal frames are two different realities. Yes, they can be "correlated", but this correlation is artificial. So, the question remains: what are spinors? Elements of a left ideal? But which one? And why this one, and not some other?

Exercise 1. Do the calculations needed to prove the Proposition.

Exercise 2. For any x in A denote by Ax the set

Ax = {ax: a in A}

Show that Ax is a left ideal. Show that it is the smallest left ideal containing x. With In and p = (1+n)/2 as above, show that In = Ap. Why is Ap not the same as An? What is An?

Exercise 3a. Show that if u in A is invertible, it cannot be contained in any of the In's.

Exercise 4. Show that the * operation transforms every left ideal into a right ideal, and conversely.

Exercise 5. If Il1 and Il2 are two left ideals, is their intersection also a left ideal? If Il is a left ideal and Ir a right ideal, is their intersection a two-sided ideal?

Friday, January 3, 2025

The Spin Chronicles Part 32: Ideal exercises

 The Spin Chronicles Part 30:

Solutions to exercises of Part 30

This post is all written by Saša. Saša agreed to present the results of our common work (Saša, Anna, Bjab). So here they are.

Saša with ideals

Wednesday, January 1, 2025

The Spin Chronicles Part 31: Irreducibility

 Here is the beginning of the new post. I will be expanding it during the course of the day. At the same time I will be open to the feedback from my Readers to see if I need to change or adjust anything.

Happy 2025! 

This post will be about some basic stuff. I have ordered "Basic Algebra, Vol. II" by Nathan Jacobson, and yesterday it came by mail. It was on the kitchen table. Laura looked inside, opened it at the chapter "Primary Decomposition", looked at the symbols there, and remarked: "It is not so `basic'". Well, the same will be true of this post. Basic, but not so basic.

We will be discussing "decompositions".  The general scenery is as follows:

- all spaces here are finite-dimensional and over the field of complex numbers

- we have an associative algebra A, with unit 1, and with an involution "*"

- we have a vector space E with a (positive definite) scalar product (u,v) (a finite-dimensional Hilbert space, if you wish). We assume (u,v) is linear with respect to v, anti-linear with respect to u.

- we have a *-representation, let us call it ρ, of A on E. Thus for each u in A we have a linear operator ρ(u) acting on E (thus a member of End(E)), such that ρ(1) is the identity operator and ρ(u*)=ρ(u)*, where * on the right hand side is the Hermitian conjugate of the operator ρ(u) with respect to the scalar product: (X* x,y) = (x,Xy) for all x,y in E, all X in End(E).

So far we have studied the case with A = Cl(V), E = A, and ρ = L or ρ = R - the left and right regular representations of A acting on A itself. But the reasoning below is the same for the general case, and it is more transparent at the same time: there are fewer possibilities for confusion. In the text below I will state certain things as evident: "it is so and so ...". In a good linear algebra course each of these things is proven, or is left as an exercise - to be proved starting from the definitions. I will also provide reasonings that sketch the proofs of the less evident properties.

We are interested in invariant subspaces of E. By a subspace I will always mean a linear (or vector-) subspace. "Invariant" means invariant for the representation ρ. Thus a subspace F⊂ E is invariant if ρ(A)F ⊂F or, explicitly

ρ(u)x∈F for all u∈A, x∈F.

The representation is said to be irreducible if there are no invariant subspaces other than the zero subspace (consisting of the zero vector alone) and the whole space E. These two subspaces are always trivially invariant.

If F is a subspace, its orthogonal complement F⊥ is also a subspace, and together they span the whole space:

F ⊕ F⊥ = E.

Thus every u in E can be uniquely decomposed (orthogonal decomposition)

u = v+w,

where v is in F and w is in F⊥.

We define PF, the orthogonal projection on F, by

PFu = v in the above orthogonal decomposition. Then PF is a Hermitian idempotent, PF = PF* = PF², also called an "orthogonal projection", a "projection operator", or, simply, "a projector".

To the decomposition F ⊕ F⊥ = E there corresponds the formula

PF + PF⊥ = I, or PF⊥ = I - PF,

where I stands for the identity operator.

It is an easy exercise to show that F is invariant if and only if PF commutes with all ρ(u). If this happens, then, evidently, also PF⊥ = I - PF commutes with all ρ(u), thus F is invariant if and only if F⊥ is invariant. We then have a decomposition of E into the direct sum of two invariant subspaces:

E = F ⊕ F⊥.

Note. In the case we have discussed in previous posts, we have E = A, and, for  ρ=L, ρ(u)x=ux. Therefore looking for a non-trivial invariant subspace is the same as looking for a non-trivial left ideal. For ρ=R, the right regular representation, looking for an invariant subspace is the same as looking for a right ideal. 
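To see the projector criterion and the Note working together on a concrete case, here is a small numerical sketch (assuming numpy, and assuming, as in the earlier posts, that 1, e1, e2, e3 form an orthonormal basis of A for its scalar product; the helper names are mine). It builds the 4×4 matrices of the left regular representation L and checks that the orthogonal projector onto the left ideal In (for n = e3) commutes with all of them.

```python
import numpy as np

def mul(x, y):  # product in A, elements as (scalar, 3-vector) pairs
    x0, xv = x
    y0, yv = y
    return (x0 * y0 + xv @ yv, x0 * yv + y0 * xv + 1j * np.cross(xv, yv))

flat = lambda z: np.concatenate(([z[0]], z[1]))  # coordinates in 1, e1, e2, e3
basis = [(1 + 0j, np.zeros(3, dtype=complex))] + \
        [(0j, np.eye(3, dtype=complex)[k]) for k in range(3)]

# 4x4 matrices of the left regular representation: column s of L[r] is e_r*e_s
L = [np.column_stack([flat(mul(er, es)) for es in basis]) for er in basis]

# the ideal I_n for n = e3, spanned by E1 = (1+e3)/2 and E2 = (e1 - i e2)/2
E1 = flat((0.5 + 0j, np.array([0, 0, 0.5], dtype=complex)))
E2 = flat((0j, np.array([0.5, -0.5j, 0], dtype=complex)))

# orthogonal projector onto span{E1, E2}
B = np.column_stack([E1, E2])
P = B @ np.linalg.inv(B.conj().T @ B) @ B.conj().T

print([np.allclose(P @ Lr, Lr @ P) for Lr in L])  # [True, True, True, True]
```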

Reducibility in terms of a basis.

We can always choose an orthonormal basis ei in E. If E is n-dimensional, then i=1,2,...,n. If F is a subspace of E, with dim(F)=m, 0<m<n, we can always choose the basis so that e1,...,em are in F, while em+1,...,en are in F⊥. Then e1,...,em form a basis in F, while em+1,...,en form a basis in F⊥. We call such a basis "adapted to the decomposition". So, let ei be such a basis. Assume that F is invariant. The vectors e1,...,em are in F, so, since F is invariant, for any u in A, the vectors ρ(u)ei (i=1,...,m) are also in F. But e1,...,em form a basis in F. Therefore ρ(u)ei are linear combinations of e1,...,em. We write it as

ρ(u)ei = ∑j=1m  ej μ(u)ji (i=1,...,m).

The m² complex coefficients μ(u)ji form a matrix representation of A by m⨯m square matrices. We have

μ(uw) = μ(u)μ(w).

Now, since F⊥ is also invariant, we have

ρ(u)ei = ∑j=m+1n ej ν(u)ji   (i = m+1,...,n).

Similarly we have

ν(uv) = ν(u)ν(v).

Thus we have another matrix representation of A with, this time, (n-m)⨯(n-m) square matrices. 

Now, all n basis vectors e1,...,en form a basis for E. So, we get a matrix representation of ρ(u); call its matrices simply ρ(u)ji:

ρ(u)ei = ∑j=1n ej ρ(u)ji   (i = 1,...,n).

Now, in our adapted basis, the n by n matrices ρ(u)ji are block diagonal: they are of the form

ρ(u) = {{μ(u), 0}, {0, ν(u)}}

with blocks m⨯m, m⨯(n-m),(n-m)⨯m,(n-m)⨯(n-m). That is the matrix view of reducibility. A representation is reducible if there is an orthonormal basis in E in which the matrices of the representation are block diagonal.

Exercise 1. In the comments to the last post we have found a basis E1,E2 which spans a nontrivial left ideal of A. Is this basis orthonormal? Probably not, because we were not taking care of the normalization. But are the vectors E1,E2 orthogonal to each other in the scalar product of A? But then there should be another pair of vectors, say E1',E2', orthogonal to E1,E2 and to each other, that span the orthogonal complement of our left ideal. It should also be an invariant subspace, thus a complementary left ideal. Can we find a simple form of E1',E2'? Can we find ν? It is enough to find ν(ei), where ei is the basis of A.

It is now time to formulate a version of Schur's famous Lemma, adapted to our needs.

Schur's Lemma.

Given the *-representation ρ of A on E, ρ is irreducible if and only if the only linear operators acting on E and commuting with all ρ(u) are multiples of the identity operator.

In other words ρ is irreducible if and only if the commutant ρ(A)' is CI. That is if and only if the commutant is trivial. It is reducible if and only if the commutant is non-trivial.

In the proof below we use the standard results about eigenvalues and eigenspaces of Hermitian operators (matrices, if you wish).

Proof. One way is simple. Suppose ρ is reducible, then there is a nontrivial invariant subspace F. Then the projection PF is non-trivial (different from 0 or I). But since F is invariant, PF is in the commutant. Now the harder part: show that if the commutant is non-trivial, then the representation is reducible. So suppose ρ(A)' is non-trivial. Then there is an operator X commuting with all ρ(u), and X is not of the form cI, c being a complex number. Now, since ρ is a *-representation, it follows that also X* commutes with all ρ(u).

Exercise 2. Prove the last statement.

But then X = (X+X*)/2 + (X-X*)/2. The first term commutes with all ρ(u), and so does the second term, as well as the second term multiplied by the imaginary unit i. Both X+X* and i(X-X*) are Hermitian. If X = (X+X*)/2 + (X-X*)/2 is not of the form cI, then at least one of X+X* and i(X-X*), call it Y, is not of the form cI. Now we have Y = Y*, Y commutes with all ρ(u), and Y is not of the form cI. Since Y is Hermitian, it has eigenvalues and eigenspaces. At least one of its eigenspaces must be nontrivial (different from 0 and E). Call it F. Since Y commutes with all ρ(u), its eigenspaces are invariant. Thus F is a nontrivial invariant eigenspace. Thus ρ is reducible.

In practice

In practice we often choose an orthonormal basis in E, and we work with matrices. Then ρ is irreducible if and only if the only matrices commuting with all the ρ(u) matrices are multiples of the identity matrix. But A is assumed to be finite-dimensional. Thus there is a basis, say εr in A, r = 1,2,...,k, where k is the dimension of the algebra A. Every element u of A can be written as u = ∑r=1k cr εr, where the cr are complex numbers. For a matrix M to commute with all the ρ(u) matrices, it is enough that M commutes with all ρ(εr), r = 1,...,k. Thus ρ is irreducible if and only if from [M,ρ(εr)] = 0, r = 1,...,k, it follows that M is a multiple of the identity matrix.
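A minimal sketch of this practical test (assuming numpy; the examples are only illustrations, not the representations from the posts): the commutant of a set of matrices is the kernel of the linear map M ↦ ([M,X1],...,[M,Xk]), so its dimension can be read off from a rank computation. For the 2-dimensional representation spanned by the identity and the three Pauli matrices the commutant is 1-dimensional, so it is irreducible; doubling it into 4×4 block form makes the commutant 4-dimensional, so that one is reducible.

```python
import numpy as np

def commutant_dim(mats):
    """Dimension of {M : [M, X] = 0 for every X in mats}.
    [X, M] = 0 is the linear condition (X⊗I - I⊗X^T) vec(M) = 0."""
    n = mats[0].shape[0]
    rows = [np.kron(X, np.eye(n)) - np.kron(np.eye(n), X.T) for X in mats]
    return n * n - np.linalg.matrix_rank(np.vstack(rows))

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

print(commutant_dim(paulis))                                   # 1 -> irreducible
print(commutant_dim([np.kron(np.eye(2), X) for X in paulis]))  # 4 -> reducible
```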

Why do we care?

Why do we care about reducibility or irreducibility? Suppose ρ is reducible. So we have invariant subspaces F and F⊥. Then ρ acts on F. We may ask if ρ acting on F is still reducible. The same with ρ acting on F⊥. We proceed this way until we end up with irreducible representations. We get this way

E=F1⊕...⊕Fp,

where each of F1,...,Fp carries an irreducible representation. These are the building blocks of ρ, the "bricks", or "atoms". They can't be decomposed any further. Both physicists and mathematicians want to know these "atoms". And if a representation is reducible, there must be some "reason" for it. Perhaps it is a reducible representation for A, but irreducible for some bigger algebra B? Then what would be the "meaning" of this B? On the other hand, atoms are built of nucleons and electrons. If ρ is an irreducible representation of A, perhaps it is reducible when restricted to a smaller algebra C? What would be the meaning of such a C?

There is another thing: suppose we have two irreducible representations of A, one on F1, and one on F2. Are they "essentially the same", or "essentially different"? Two representations are essentially the same (we say: they are "equivalent") if it is possible to choose some orthonormal basis in F1, and one orthonormal basis in F2, in such a way that the matrices of both representations are exactly the same. Of course it is enough that the matrices representing  εr are the same.

How to discover the atoms?

We have so far followed a geometrical way and looked for particular left ideals of our geometric algebra A. But once we know that it is a *-algebra, there is another way, more related to the concepts of quantum theory such as "states". Then there is a celebrated GNS (Gelfand-Naimark-Segal) construction of irreducible representations. This we discuss later on, and we will relate it to our adventure with ideals.

Sunday, December 29, 2024

The Spin Chronicles Part 30: Quantum Logic

 

Let me begin this Sunday morning post with a thought-provoking quote from Britannica:

monad, (from Greek monas, “unit”), an elementary individual substance that reflects the order of the world and from which material properties are derived. The term was first used by the Pythagoreans as the name of the beginning number of a series, from which all following numbers derived. Giordano Bruno, in De monade, numero et figura liber (1591; “On the Monad, Number, and Figure”), described three fundamental types: God, souls, and atoms. The idea of monads was popularized by Gottfried Wilhelm Leibniz in Monadologia (1714). In Leibniz’s system of metaphysics, monads are basic substances that make up the universe but lack spatial extension and hence are immaterial. Each monad is a unique, indestructible, dynamic, soul-like entity whose properties are a function of its perceptions and appetites. Monads have no true causal relation with other monads, but all are perfectly synchronized with each other by God in a preestablished harmony. The objects of the material world are simply appearances of collections of monads.


God in a preestablished harmony

This rich and multilayered description of monads has fueled centuries of metaphysical speculation, philosophical debates, and even mathematical explorations. Today, we’ll delve into an intriguing reinterpretation of the monad concept—not in the metaphysical sense, but through the lens of mathematics, specifically the Clifford geometric algebra Cl(V), which we’ve symbolically denoted by a single letter:  A.

Our focus is not merely academic. We aim to explore whether A can serve as a foundational model for the “objects of the material world.” In doing so, we confront a compelling question: How does one move from a singular monad A to a collection of monads? After all, the material world seems to consist of interactions among many such entities.

Here, two possibilities come to mind. The first is divine intervention—if God, with infinite power, can create one monad, then it is presumably trivial to create an infinite supply of them, stored in some vast metaphysical “container.” The second possibility, however, is far more intriguing: the monad A might possess a self-replicating property. If so, understanding this self-replication mechanism would require us to study the monad in exquisite detail. This is precisely the journey we are embarking on: a careful and meticulous examination of A.

Nowadays, variations of A have found widespread application in fields like quantum computing and quantum cryptography. Increasingly, papers and books are being published that explore A—or, as it’s often called in this context, the “Qubit.” However, for most researchers in these areas, A is not viewed as a monad in the philosophical or foundational sense. Instead, it is seen as a practical and versatile tool, a means to develop cutting-edge technologies, often driven by the demands of the military-industrial complex.

The questions these researchers ask about A are, as a rule, quite different from the ones we are posing here. Their focus is on optimization, efficiency, and application, while our aim is to understand the deeper structure and meaning embedded within A . It’s a difference in perspective that can be compared to how one approaches a book: you can use a book as a blunt object to strike someone, or you can spend years pondering and analyzing the profound meaning contained in its opening sentence.

I place myself firmly in the second category of people—those who are drawn to the pursuit of understanding. That said, I wouldn’t hesitate to use the book for self-defense if the situation demanded it. But the essence of our endeavor here is not pragmatic or utilitarian; it’s a journey of curiosity and exploration, seeking to uncover the subtle and surprising truths that lie hidden within.

A contains geometry, algebra, and has a distinct quantum-mechanical smell. Let us follow our noses and look closer into the quantum-mechanical aspects of A.

A is an algebra, an algebra with involution "*". In fact, it is a C*-algebra and a von Neumann algebra - the baby version of them. There is a large literature on the algebraic approach to quantum mechanics. It started around 1936 with the Birkhoff and von Neumann paper "The logic of quantum mechanics" (Ann. Math. 37, No. 4, pp. 823-843, 1936). The textbook by J.M. Jauch, "Foundations of Quantum Mechanics", Addison-Wesley 1968, developed these ideas further, from a physicist's perspective. Algebras are used in quantum mechanics as "algebras of observables", which is somewhat confusing since ordinarily only selfadjoint elements of the algebra are considered as "real observables". The product ab of two self-adjoint observables will not, in general, be self-adjoint, so real observables do not form an algebra under the algebra product (that is why Birkhoff and von Neumann were mainly focused on the Jordan product (ab+ba)/2). But there are simple observables, whose values are only 0 and 1. These form a "quantum logic". Jauch calls them "propositions". They are represented by self-adjoint idempotents: p = p* = p². The possible eigenvalues of self-adjoint idempotents are 1 and 0. They are treated as logical "yes" and "no". Let us concentrate on such elements of A and see what we can get this way.

We write a general element of A as p = (p0,p) (or (p,p4) as in the previous post), where p0 is a complex scalar, and p is a complex vector. Then p* = (p0*,p*), where * inside the parentheses stands for complex conjugation. The condition p* = p means that p0* = p0 and p* = p. In other words, p0 and p must be real.

Now we recall a general multiplication formula for A:

for p = (p0,p), q = (q0,q)

pq = (p0q0 + p·q, p0q + q0p + i p×q).

In particular for pp we get

pp = (p0² + p·p, 2p0p)

since p×p = 0.

Thus pp = p implies

p0² + p·p = p0,

and


2p0p = p.

Then either p = 0 or not. If p = 0, then p0² = p0, and we have two solutions: p0 = 0 or p0 = 1. They correspond to the trivial Hermitian idempotents p = 0 and p = 1. On the other hand, if p is not the zero vector, then from the second equation we get p0 = 1/2. Substituting this value of p0 into the first equation we get

1/4 + p·p = 1/2, or


p·p = 1/4.

We deduce that p = (1/2)n, where n is a unit vector in V.

Therefore a general form of a nontrivial Hermitian idempotent is:

p = (1+n)/2.

This is our general propositional question in A: it is something about a direction in V. How to interpret it? We will be returning to this question again and again.
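As a small numerical illustration (a sketch, assuming numpy): for a randomly chosen unit vector n, the element p = (1+n)/2 is indeed a Hermitian idempotent for the product formula above, and under the usual identification ei ↦ σi it becomes the familiar rank-one projector (1 + n·σ)/2.

```python
import numpy as np

def mul(x, y):  # the multiplication formula above
    x0, xv = x
    y0, yv = y
    return (x0 * y0 + xv @ yv, x0 * yv + y0 * xv + 1j * np.cross(xv, yv))

rng = np.random.default_rng(1)
n = rng.normal(size=3)
n = (n / np.linalg.norm(n)).astype(complex)      # a random unit vector in V

p = (0.5 + 0j, 0.5 * n)                          # p = (1 + n)/2
pp = mul(p, p)
print(np.isclose(pp[0], p[0]), np.allclose(pp[1], p[1]))   # p is idempotent

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
P = 0.5 * np.eye(2) + 0.5 * sum(n[k] * sigma[k] for k in range(3))
print(np.allclose(P, P.conj().T),                # Hermitian
      np.allclose(P @ P, P),                     # idempotent
      np.isclose(np.trace(P).real, 1.0))         # rank one
```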

Exercise 1. Let n be a unit vector in V. Let Tn denote the space tangent to the unit sphere at n. Thus Tn can be identified with the subspace of V consisting of all vectors of V perpendicular to n. Let J be the linear operator on Tn defined by

Ju = un

Show that J is well defined and that it defines a complex structure on Tn (i.e. that J² = -1). Show that J is an isometry, that is, that <Ju,Jv> = <u,v> for all u,v in Tn.

Exercise 2. Read the Wikipedia article "Linear complex structure", section "Relation to complexifications". Take J from the previous exercise and extend it by linearity to TnC = Tn + iTn. Find a solution of up = u (see the discussion under the previous post) for p = (1+n)/2 within TnC. Express it as an eigenvector of the operator iJ - to which eigenvalue does it belong?

Exercise 3. Let n and p be as above. Show that up = u if and only if un = u.

Exercise 4. Find an error in the proof below. I can't, but it looks suspicious to me.

Statement. Let n and n' be two different unit vectors. Then u satisfies both un = u and un' = u if and only if u = 0.

Proof. Set n - n' = v. Suppose v is a nonzero vector. Then we have uv = 0. That means (u·v, u0v + i u×v) = 0. Thus u is perpendicular to v. Now, in u0v + i u×v = 0 the first term is proportional to v, the second is perpendicular. Thus both must be zero. It follows that u0 = 0, and u×v = 0. Thus if v is not zero, then u = 0.

Exercise 5. Consider the following example:

Choose n = (0,0,1) = e3. Choose e1 = (1,0,0) for the real part of u1. Get the expression for u1. Define

E1=(1+n)

E2=u1

Show that E1 and E2 span a left ideal of A. For this, calculate the action of e0,e1,e2,e3 on E1 and E2 and express the results as linear combinations of E1,E2.

Exercise 6. Is the representation L of A on our left ideal from Exercise 5 irreducible or reducible?

To be continued.

Thursday, December 26, 2024

Spin Chronicles Part 29: Don't be cruel

Every good story deserves a happy ending. After all, nobody wants to be left with frustration—especially during the holidays! So, on this cheerful Christmas Day, I bring you the happy conclusion to the journey we embarked on in Part 28.

Happy conclusion

If you recall, I ended that post with a bit of a cliffhanger:

It would be cruel of me to ask the Reader, on Sunday, two days before Christmas Eve,  to prove that, in fact, we have

R(A) = L(A)',

L(A) = R(A)'.

So, I leave the proof for the next post. But, perhaps it is not so cruel to ask the following

Exercise 5. Show that L(A)∩R(A) = C, where C denotes here the algebra of cI, where c is a complex number and I is the identity matrix.

Now, I must confess—despite my best intentions, I may have accidentally channeled a little too much academic spirit right before the holidays. As Elvis Presley, a favorite in our home, would croon, “Don’t be cruel.” But cruel I was, unintentionally!

Thankfully, Saša rose to the challenge with some impressive attempts to crack the commutator identities. In mathematics, as in life, there’s often more than one way to reach the truth, and this case is no exception. Today, we’ll use some “baby tools” to tackle this “baby theorem,” leaving the more advanced approaches to grown-up textbooks like A.W. Knapp's Advanced Algebra (see Lemma 2.45).

Lemma 2.45. Let B be a finite-dimensional simple algebra over a field F, and write V for the algebra B considered as a vector space. For b in B and v in V, define members l(b) and r(b) of EndF(V) by l(b)v = bv and r(b)v = vb. Then the centralizer in EndF(V) of l(B) is r(B).


So, let’s unwrap this mathematical gift and bring our story to a festive close!

I used the term "commutant" instead of "centralizer". From what I know those dealing with infinite-dimensional algebra (C* and von Neumann) use the term "commutant", those who deal mainly with finite-dimensional cases (pure algebra, no topology)  use the term "centralizer". The proof in the advanced algebra book is not that "instant" and uses previous lemmas. Here is a simple proof that I have produced for our baby case.

Proof  (of R(A) = L(A)')

We already know that R(A) ⊂ L(A)', therefore it is enough to show that L(A)' ⊂ R(A). So, let X be an operator in End(A), and assume that X commutes with L(u) for all u in A. We want to show that X is then necessarily in R(A). I will use Latin indices m,n,... instead of μ, ν as in the previous post. We know that X = xmn LmRn. Let us write L(u) = upLp. Then [X,L(u)] = 0 reads as

0 = upxmn [ Lp, Lm ] Rn.

We used the fact that L's and R's commute.

Now, what do we know about the commutators [Lp, Lm]? We know that L is a representation of A in End(A). We have defined Lp as L(ep), where ei (i=1,2,3) is an orthonormal basis in V, and e4 = 1. Since L is a representation, we have

[Lp,Lm]= L( [ep,em]).

Exercise 1. Make sure that you really know why it is so. Since the er form a basis in A, the commutator [ep,em] is a linear combination of the er. We write it as


[ep,em] = cpmr er.

The constants are called the structure constants of the Lie algebra. Now,

L([ep,em]) = cpmr L(er) = cpmr Lr.

Therefore


0 =  cpmr upxmn Lr Rn

for all u.

What do we know about the structure constants cpmr? If p or m equals 4, the structure constants are 0, because e4 = 1 commutes with every other basis vector. Thus the sums over p and m run, in fact, only through 1,2,3. On the other hand e1e2 = -e2e1 = ie3, etc. Thus [e1,e2] = 2ie3, etc., while [e1,e1] = [e2,e2] = [e3,e3] = 0. Therefore

[ej,ek] = 2i εjkl el.

So, we have


0 =  2i εpmr upxmn Lr Rn

where p,m,r run only through 1,2,3. We know that LrRn are linearly independent, therefore
εpmr upxmn = 0. And this is true for any u, therefore


0 = εpmrxmn ,

for all p,r = 1,2,3. To show that, for instance, x1n = 0, we choose p=2, r=3. We deduce this way that xmn = 0 for m = 1,2,3. The only possibly non-vanishing xmn are the x4n. They stand in front of L4Rn. But L4 is the identity, so X = x4nRn is an element of R(A). QED.

So, we are done. It was technical, but rather straightforward, and not scary at all - once you overcome the fear of flying!
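For those who would rather let the machine do the index gymnastics, here is a numerical counterpart of the theorem (a sketch, assuming numpy, with the basis 1, e1, e2, e3 of A; the helper names are mine). It builds the 4×4 matrices of L and R, checks that they commute with each other, and checks that the commutant of L(A) is 4-dimensional; together with R(A) ⊂ L(A)' and dim R(A) = 4 this gives R(A) = L(A)'.

```python
import numpy as np

def mul(x, y):  # product in A, elements as (scalar, 3-vector) pairs
    x0, xv = x
    y0, yv = y
    return (x0 * y0 + xv @ yv, x0 * yv + y0 * xv + 1j * np.cross(xv, yv))

flat = lambda z: np.concatenate(([z[0]], z[1]))  # coordinates in 1, e1, e2, e3
basis = [(1 + 0j, np.zeros(3, dtype=complex))] + \
        [(0j, np.eye(3, dtype=complex)[k]) for k in range(3)]

Lm = [np.column_stack([flat(mul(b, x)) for x in basis]) for b in basis]  # L(e_m)
Rm = [np.column_stack([flat(mul(x, b)) for x in basis]) for b in basis]  # R(e_m)

# every R(e_k) commutes with every L(e_m) ...
print(all(np.allclose(L @ R, R @ L) for L in Lm for R in Rm))            # True

# ... and the commutant of L(A) in End(A) is only 4-dimensional
rows = [np.kron(L, np.eye(4)) - np.kron(np.eye(4), L.T) for L in Lm]
print(16 - np.linalg.matrix_rank(np.vstack(rows)))                      # 4
```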

I used the term "representation". Anna used it too in the comment under the previous post, when talking about the scary Shur's lemma. So, here comes the exercise that should help in overcoming the fear of flying:

Overcoming the fear of flying

Exercise 2: Is the representation L reducible or irreducible?

Exercise 3. Let ✶ denote the map from A to A defined by ✶(u) = u*. Then ✶ is real-linear, but complex anti-linear. Thus it is not an element of End(A), because by End(A) we have denoted the algebra of complex linear operators on A. Show that

L(u) = ✶∘R(u*)∘✶

Hint: don't be scared of flying. First try to understand what it is that you are supposed to prove. It only looks scary. 

P.S. 27-12-24 10:09 In a comment to Part 28 Anna asked for an explanation of why the matrices Rm are transposed to the Lm (Exercise 3). One way to answer this question is by calculating them explicitly. But there is a way to see it without calculating explicitly. Suppose we accept the already discussed property that the L and R matrices are Hermitian. Then we start with the defining relation for (Lm)rn:

Lmen = er (Lm)rn
or

emen = er (Lm)rn.

We apply * to both sides. * is anti-linear, and (em)* = em. On the left we get

(emen )* = enem = Rmen = er(Rm)rn.

On the right we get

er cc((Lm)rn),

where cc stands for complex conjugate. Comparing both sides we get

Rm = cc(Lm).

But Lm is Hermitian (equal to its conjugate transpose), thus, for Lm, cc is the same as transposition (why is it so?).

P.S. 29-12-24 8:07 This morning I received the following email:

Dear Dr. Arkadiusz Jadczyk,

We are pleased to inform you about a recent citation to the following papers you have authored or co-authored.

Your paper:
was recently cited in:

As of today, the paper received 1556 views and 256 downloads. For more article metrics, see: https://doi.org/10.3390/math12081140#metrics.


Although in the meantime I have almost forgotten about the photon's localization problem, the phrase "light as foundation of being" is still in my mind. So, it is good news.

P.S. 29-12-24 10:57 Anna, in her comment, mentioned the idea, supported by neuroscience research, that deep metaphysical questions exercise the most ancient parts of our brains. One such question appeared in the comments to this blog: are we predetermined, or are we, perhaps, endowed with (necessarily limited) "free will"? How can we answer this question? I am applying my most ancient part, and I am reasoning, using it, as follows.

Whether we are predetermined or not, there are FACTS. One such fact is that we have senses, that these senses are limited, and that we have brains, rather small compared to the size and complexity of the Universe. Thus our knowledge is limited, and our understanding is even more limited. There are many facts that we know about but do not understand. Since our knowledge is limited, all conclusions are questionable. We can't really be sure of anything. What we know is the tip of an iceberg. So, how can we adhere to the conclusion that we are necessarily "predetermined"? Such an idea is irrational. Of course, someone may happen to be predetermined to hold to irrational ideas. But I choose to be rational, therefore open-minded. That is what the ancient part of my brain tells me. The newer part can find no fault in that kind of old-brain thinking.
