Sunday, December 29, 2024

The Spin Chronicles Part 30: Quantum Logic

 

Let me begin this Sunday morning post with a thought-provoking quote from Britannica:

monad, (from Greek monas, “unit”), an elementary individual substance that reflects the order of the world and from which material properties are derived. The term was first used by the Pythagoreans as the name of the beginning number of a series, from which all following numbers derived. Giordano Bruno, in De monade, numero et figura liber (1591; “On the Monad, Number, and Figure”), described three fundamental types: God, souls, and atoms. The idea of monads was popularized by Gottfried Wilhelm Leibniz in Monadologia (1714). In Leibniz’s system of metaphysics, monads are basic substances that make up the universe but lack spatial extension and hence are immaterial. Each monad is a unique, indestructible, dynamic, soul-like entity whose properties are a function of its perceptions and appetites. Monads have no true causal relation with other monads, but all are perfectly synchronized with each other by God in a preestablished harmony. The objects of the material world are simply appearances of collections of monads.


God in a preestablished harmony

This rich and multilayered description of monads has fueled centuries of metaphysical speculation, philosophical debates, and even mathematical explorations. Today, we’ll delve into an intriguing reinterpretation of the monad concept—not in the metaphysical sense, but through the lens of mathematics, specifically the Clifford geometric algebra Cl(V), which we’ve symbolically denoted by a single letter:  A.

Our focus is not merely academic. We aim to explore whether A can serve as a foundational model for the “objects of the material world.” In doing so, we confront a compelling question: How does one move from a singular monad A to a collection of monads? After all, the material world seems to consist of interactions among many such entities.

Here, two possibilities come to mind. The first is divine intervention—if God, with infinite power, can create one monad, then it is presumably trivial to create an infinite supply of them, stored in some vast metaphysical “container.” The second possibility, however, is far more intriguing: the monad A might possess a self-replicating property. If so, understanding this self-replication mechanism would require us to study the monad in exquisite detail. This is precisely the journey we are embarking on: a careful and meticulous examination of A.

Nowadays, variations of A have found widespread application in fields like quantum computing and quantum cryptography. Increasingly, papers and books are being published that explore A—or, as it’s often called in this context, the “Qubit.” However, for most researchers in these areas, A is not viewed as a monad in the philosophical or foundational sense. Instead, it is seen as a practical and versatile tool, a means to develop cutting-edge technologies, often driven by the demands of the military-industrial complex.

The questions these researchers ask about A are, as a rule, quite different from the ones we are posing here. Their focus is on optimization, efficiency, and application, while our aim is to understand the deeper structure and meaning embedded within A. It’s a difference in perspective that can be compared to how one approaches a book: you can use a book as a blunt object to strike someone, or you can spend years pondering and analyzing the profound meaning contained in its opening sentence.

I place myself firmly in the second category of people—those who are drawn to the pursuit of understanding. That said, I wouldn’t hesitate to use the book for self-defense if the situation demanded it. But the essence of our endeavor here is not pragmatic or utilitarian; it’s a journey of curiosity and exploration, seeking to uncover the subtle and surprising truths that lie hidden within.

A contains geometry, algebra, and has a distinct quantum-mechanical smell. Let us follow our noses and look closer into the quantum-mechanical aspects of A.

A is an algebra, an algebra with involution "*". In fact, it is a C*-algebra and a von Neumann algebra - the baby version of them. There are many publications about the algebraic approach to quantum mechanics. It started around 1936 with the Birkhoff and von Neumann paper "The logic of quantum mechanics" (Ann. Math. 37, No. 4, pp. 823-843, 1936). The textbook by J.M. Jauch, "Foundations of Quantum Mechanics", Addison-Wesley 1968, developed these ideas further, from a physicist's perspective. Algebras are used in quantum mechanics as "algebras of observables", which is somewhat confusing, since ordinarily only selfadjoint elements of the algebra are considered as "real observables". The product ab of two self-adjoint observables will not, in general, be self-adjoint, so real observables do not form an algebra under the algebra product (that is why Birkhoff and von Neumann were mainly focused on the Jordan product (ab+ba)/2). But there are simple observables, whose values are only 0 and 1. These form "quantum logic". Jauch calls them "propositions". They are represented by self-adjoint idempotents: p = p* = p². Possible eigenvalues of self-adjoint idempotents are 1 and 0. They are treated as logical "yes" and "no". Let us concentrate on such elements of A and see what we can get this way.

We write a general element of A as p = (p0, p) (or (p, p4), as in the previous post), where p0 is a complex scalar, and p is a complex vector. Then p* = (p0*, p*), where * inside the parentheses stands for complex conjugation. The condition p* = p means that p0* = p0 and p* = p. In other words, p0 and p must be real.

Now we recall the general multiplication formula for A:

for p = (p0, p), q = (q0, q),

pq = (p0q0 + p·q, p0q + q0p + i p×q).

In particular, for pp we get

pp = (p0² + p·p, 2p0p),

since p×p = 0.

Thus pp = p implies

p0² + p·p = p0,

and


2p0p = p.

Then either p = 0 or not. If p = 0, then p0² = p0, and we have two solutions: p0 = 0 or p0 = 1. They correspond to the trivial Hermitian idempotents p = 0 and p = 1. On the other hand, if p is not the zero vector, then from the second equation we get p0 = 1/2. Substituting this value of p0 into the first equation we get

1/4 + p·p = 1/2, or


p·p = 1/4.

We deduce that p = n/2, where n is a unit vector in V.

Therefore a general form of a nontrivial Hermitian idempotent is:

p = (1+n)/2.

This is our general propositional question in A: it is a question about a direction in V. How should we interpret it? We will be returning to this question again and again.
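For Readers who like to let the machine check such computations, here is a minimal sketch in Python/NumPy. The pair encoding (p0, p) and the helper names mult and star are my ad hoc choices, not anything standard:

```python
import numpy as np

def mult(p, q):
    # product in A: (p0, p)(q0, q) = (p0 q0 + p.q, p0 q + q0 p + i p x q)
    p0, pv = p
    q0, qv = q
    return (p0 * q0 + pv @ qv, p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

def star(p):
    # the involution *: complex conjugation of both components
    return (np.conj(p[0]), np.conj(p[1]))

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)              # a random unit vector in V

p = (0.5, 0.5 * n)                  # p = (1 + n)/2
pp = mult(p, p)

print(np.isclose(pp[0], p[0]), np.allclose(pp[1], p[1]))   # True True: pp = p
s = star(p)
print(np.isclose(s[0], p[0]), np.allclose(s[1], p[1]))     # True True: p* = p
```

Any choice of the unit vector n works, in agreement with the derivation above.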

Exercise 1. Let n be a unit vector in V. Let Tn denote the space tangent to the unit sphere at n. Thus Tn can be identified with the subspace of V consisting of all vectors of V perpendicular to n. Let J be the linear operator on Tn defined by

Ju = u × n.

Show that J is well defined and that it defines a complex structure on Tn (i.e. that J² = -1). Show that J is an isometry, that is, that <Ju,Jv> = <u,v> for all u,v in Tn.
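This is not a solution of the exercise, only a numerical sanity check of what it claims (assuming the reading Ju = u×n; random tangent vectors, nothing more):

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.normal(size=3); n /= np.linalg.norm(n)

def J(u):                        # the candidate complex structure on Tn
    return np.cross(u, n)

def tangent():                   # a random vector perpendicular to n
    u = rng.normal(size=3)
    return u - (u @ n) * n

u, v = tangent(), tangent()
print(np.allclose(J(J(u)), -u))           # J^2 = -1 on Tn
print(np.isclose(J(u) @ J(v), u @ v))     # <Ju, Jv> = <u, v>
print(np.isclose(J(u) @ n, 0.0))          # J maps Tn into Tn
```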

Exercise 2. Read the Wikipedia article "Linear complex structure", section "Relation to complexifications". Take J from the previous exercise and extend it by linearity to TnC = Tn + iTn. Find a solution of up = u (see the discussion under the previous post) for p = (1+n)/2 within TnC. Express it as an eigenvector of the operator iJ - to which eigenvalue?

Exercise 3. Let n and p be as above. Show that up = u if and only if un = u.

Exercise 4. Find an error in the proof below. I can't, but it looks suspicious to me.

Statement. Let n and n' be two different unit vectors. Then u satisfies both un = u and un' = u if and only if u = 0.

Proof. Set n - n' = v. Suppose v is a nonzero vector. Then we have uv = 0. That means (u·v, u0v + i u×v) = 0. Thus the vector part u is perpendicular to v. Now, in u0v + i u×v = 0 the first term is proportional to v, the second is perpendicular to it. Thus both must be zero. It follows that u0 = 0 and u×v = 0. But u×v = 0 means that u is parallel to v, while u·v = 0 says it is perpendicular to v, so u = 0. Thus if v is not zero, then u = 0.
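I also checked the Statement itself numerically (which, of course, says nothing about the correctness of the proof): write the two conditions un = u, un' = u as a linear system in the four components of u and look at its null space. The helper right_mult_matrix is mine:

```python
import numpy as np

def mult(p, q):
    p0, pv = p; q0, qv = q
    return (p0 * q0 + pv @ qv, p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

def right_mult_matrix(n):
    # matrix of u -> un in the components (u1, u2, u3, u0)
    cols = []
    for k in range(4):
        b = np.zeros(4, dtype=complex); b[k] = 1.0
        out = mult((b[3], b[:3]), (0.0, n))
        cols.append(np.concatenate((out[1], [out[0]])))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(5)
n1 = rng.normal(size=3); n1 /= np.linalg.norm(n1)
n2 = rng.normal(size=3); n2 /= np.linalg.norm(n2)

I = np.eye(4)
M = np.vstack([right_mult_matrix(n1) - I, right_mult_matrix(n2) - I])
print(4 - np.linalg.matrix_rank(M))   # 0: only u = 0 solves un1 = u and un2 = u
```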

Exercise 5. Consider the following example:

Choose n = (0,0,1) = e3. Choose e1 = (1,0,0) for the real part of u1. Get the expression for u1. Define

E1 = 1 + n,

E2 = u1.

Show that E1 and E2 span a left ideal of A. For this, calculate the action of e0, e1, e2, e3 on E1 and E2 and express the results as linear combinations of E1 and E2.
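To check your answer: the sketch below assumes u1 = e1 - ie2 (which is what Exercise 2 gives me - treat it as a hypothesis to be verified) and expresses each eμE1, eμE2 back in terms of E1 and E2:

```python
import numpy as np

def mult(p, q):
    p0, pv = p; q0, qv = q
    return (p0 * q0 + pv @ qv, p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

def flat(p):                     # components (p1, p2, p3, p0)
    return np.concatenate((p[1], [p[0]]))

n  = np.array([0, 0, 1], dtype=complex)
E1 = (1.0 + 0j, n)                                   # E1 = 1 + n
E2 = (0.0 + 0j, np.array([1, -1j, 0]))               # E2 = u1 = e1 - i e2 (my guess)

e = [(1.0 + 0j, np.zeros(3, dtype=complex))] + \
    [(0.0 + 0j, np.eye(3, dtype=complex)[k]) for k in range(3)]   # e0, e1, e2, e3

span = np.stack([flat(E1), flat(E2)], axis=1)        # 4 x 2
for i, a in enumerate(e):
    for name, E in (("E1", E1), ("E2", E2)):
        w = flat(mult(a, E))
        c, *_ = np.linalg.lstsq(span, w, rcond=None)
        exact = np.allclose(span @ c, w)             # w really lies in span{E1, E2}
        print(f"e{i} {name} = {np.round(c[0], 10)} E1 + {np.round(c[1], 10)} E2", exact)
```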

Exercise 6. Is the representation L of A on our left ideal from Exercise 5 irreducible or reducible?

To be continued.

Thursday, December 26, 2024

Spin Chronicles Part 29: Don't be cruel

Every good story deserves a happy ending. After all, nobody wants to be left with frustration—especially during the holidays! So, on this cheerful Christmas Day, I bring you the happy conclusion to the journey we embarked on in Part 28.

Happy conclusion

If you recall, I ended that post with a bit of a cliffhanger:

It would be cruel of me to ask the Reader, on Sunday, two days before Christmas Eve,  to prove that, in fact, we have

R(A) = L(A)',

L(A) = R(A)'.

So, I leave the proof for the next post. But, perhaps it is not so cruel to ask the following

Exercise 5. Show that L(A)∩R(A) = C, where C denotes here the algebra of cI, where c is a complex number and I is the identity matrix.

Now, I must confess—despite my best intentions, I may have accidentally channeled a little too much academic spirit right before the holidays. As Elvis Presley, a favorite in our home, would croon, “Don’t be cruel.” But cruel I was, unintentionally!

Thankfully, Saša rose to the challenge with some impressive attempts to crack the commutator identities. In mathematics, as in life, there’s often more than one way to reach the truth, and this case is no exception. Today, we’ll use some “baby tools” to tackle this “baby theorem,” leaving the more advanced approaches to grown-up textbooks like A.W. Knapp's Advanced Algebra (see Lemma 2.45).

Lemma 2.45. Let B be a finite-dimensional simple algebra over a field F, and write V for the algebra B considered as a vector space. For b in B and v in V, define members l(b) and r(b) of EndF(V) by l(b)v = bv and r(b)v = vb. Then the centralizer in EndF(V) of l(B) is r(B).


So, let’s unwrap this mathematical gift and bring our story to a festive close!

I used the term "commutant" instead of "centralizer". From what I know those dealing with infinite-dimensional algebra (C* and von Neumann) use the term "commutant", those who deal mainly with finite-dimensional cases (pure algebra, no topology)  use the term "centralizer". The proof in the advanced algebra book is not that "instant" and uses previous lemmas. Here is a simple proof that I have produced for our baby case.

Proof (of R(A) = L(A)')

We already know that R(A) ⊂ L(A)', therefore it is enough to show that L(A)' ⊂ R(A). So, let X be an operator in End(A), and assume that X commutes with L(u) for all u in A. We want to show that X is then necessarily in R(A). I will use Latin indices m, n, ... instead of μ, ν as in the previous post. We know that X = xmn LmRn. Let us write L(u) = upLp. Then [X,L(u)] = 0 reads as

0 = upxmn [ Lp, Lm ] Rn.

We used the fact that L's and R's commute.

Now, what do we know about the commutators [Lp, Lm]? We know that L is a representation of A in End(A). We have defined Lp as L(ep), where ei (i=1,2,3) is an orthonormal basis in V, and e4=1. Since L is a representation, we have

[Lp,Lm]= L( [ep,em]).

Exercise 1. Make sure that you really know why it is so.

Since the er form a basis in A, the commutator [ep,em] is a linear combination of the er. We write it as


[ep,em] = cpmr er.

The constants are called the structure constants of the Lie algebra. Now,

L([ep,em]) = cpmr L(er) = cpmr Lr.

Therefore


0 =  cpmr upxmn Lr Rn

for all u.

What do we know about the structure constants cpmr? If p or m equals 4, the structure constants are 0, because e4 = 1 commutes with every other basis vector. Thus the sums over p and m run, in fact, only through 1, 2, 3. On the other hand, e1e2 = -e2e1 = ie3 etc. Thus [e1,e2] = 2ie3 etc., while [e1,e1] = [e2,e2] = [e3,e3] = 0. Therefore

[ej,ek] = 2i εjkl el.
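(A quick aside for the skeptical Reader: our e1, e2, e3 multiply exactly like the Pauli matrices, so the relation above can be checked concretely. The identification ej ↔ σj is the only assumption in this sketch.)

```python
import numpy as np

# e1, e2, e3 multiply like the Pauli matrices (the identification is assumed here)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

eps = np.zeros((3, 3, 3))
for j, k, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, l], eps[k, j, l] = 1.0, -1.0

for j in range(3):
    for k in range(3):
        comm = s[j] @ s[k] - s[k] @ s[j]
        rhs = 2j * sum(eps[j, k, l] * s[l] for l in range(3))
        assert np.allclose(comm, rhs)
print("[e_j, e_k] = 2i eps_jkl e_l holds")
```

And now back to the proof.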

So, we have


0 =  2i εpmr upxmn Lr Rn

where p,m,r run only through 1,2,3. We know that LrRn are linearly independent, therefore
εpmr upxmn = 0. And this is true for any u, therefore


0 = εpmr xmn,

for all p, r = 1,2,3 (with summation over m). To show that, for instance, x1n = 0, we choose p=2, r=3. We deduce this way that xmn = 0 for m = 1,2,3. The only possibly non-vanishing xmn are the x4n. They stand in front of L4Rn. But L4 is the identity, so X = x4n Rn, which is in R(A). QED.
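For those who like to double-check such proofs by machine, here is a small numerical confirmation (Python/NumPy). It uses the explicit matrices Lμ from Part 28 (listed further down this page, as the blog runs in reverse chronological order) and computes the dimension of the commutant of L(A) in End(A) directly:

```python
import numpy as np

# The matrices L1..L4 are the ones listed in Part 28 (further down this page).
L1 = np.array([[0, 0, 0, 1], [0, 0, -1j, 0], [0, 1j, 0, 0], [1, 0, 0, 0]])
L2 = np.array([[0, 0, 1j, 0], [0, 0, 0, 1], [-1j, 0, 0, 0], [0, 1, 0, 0]])
L3 = np.array([[0, -1j, 0, 0], [1j, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
L4 = np.eye(4, dtype=complex)

# X commutes with all L(u) iff it commutes with each Lm. In row-major "vec"
# convention, vec(A X - X A) = (A kron I - I kron A^T) vec(X).
I = np.eye(4)
M = np.vstack([np.kron(Lm, I) - np.kron(I, Lm.T) for Lm in (L1, L2, L3, L4)])
print(16 - np.linalg.matrix_rank(M))    # 4 = dim R(A): the commutant is R(A)
```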

So, we are done. It was technical, but rather straightforward, and not scary at all - once you overcome the fear of flying!

I used the term "representation". Anna used it too in the comment under the previous post, when talking about the scary Shur's lemma. So, here comes the exercise that should help in overcoming the fear of flying:

Overcoming the fear of flying

Exercise 2: Is the representation L reducible or irreducible?

Exercise 3. Let ✶ denote the map from A to A defined by ✶(u) = u*. Then ✶ is real-linear, but complex anti-linear. Thus it is not an element of End(A), because by End(A) we have denoted the algebra of complex linear operators on A. Show that

L(u) = ✶∘R(u*)∘✶

Hint: don't be scared of flying. First try to understand what it is that you are supposed to prove. It only looks scary. 
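If it still looks scary, here is a numerical sanity check of the identity on random elements, in the same (scalar, vector) encoding as in the sketches of Part 30 above (helper names are mine):

```python
import numpy as np

def mult(p, q):
    p0, pv = p; q0, qv = q
    return (p0 * q0 + pv @ qv, p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

def star(p):
    return (np.conj(p[0]), np.conj(p[1]))

rng = np.random.default_rng(6)
rand = lambda: (complex(rng.normal(), rng.normal()),
                rng.normal(size=3) + 1j * rng.normal(size=3))
u, w = rand(), rand()

lhs = mult(u, w)                        # L(u) w
rhs = star(mult(star(w), star(u)))      # (✶ ∘ R(u*) ∘ ✶)(w)
print(np.isclose(lhs[0], rhs[0]), np.allclose(lhs[1], rhs[1]))  # True True
```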

P.S. 27-12-24 10:09 In a comment to Part 28 Anna asked for an explanation of why the matrices Rm are transposed to the matrices Lm (Exercise 3). One way to answer this question is by calculating them explicitly. But there is a way to see it without explicit calculation. Suppose we accept the already discussed property that the L and R matrices are Hermitian. Then we start with the defining relation for (Lm)rn:

Lmen = er (Lm)rn
or

emen = er (Lm)rn.

We apply * to both sides. * is anti-linear, and (em)* = em. On the left we get

(emen )* = enem = Rmen = er(Rm)rn.

On the right we get

er cc((Lm)rn),

where cc stands for complex conjugate. Comparing both sides we get

Rm = cc(Lm).

But Lm is Hermitian (equal to its conjugate transpose), thus, for Lm, cc is the same as transposition (why is it so?).

P.S. 29-12-24 8:07 This morning I received the following email:

Dear Dr. Arkadiusz Jadczyk,

We are pleased to inform you about a recent citation to the following papers you have authored or co-authored.

Your paper:
was recently cited in:

As of today, the paper received 1556 views and 256 downloads. For more article metrics, see: https://doi.org/10.3390/math12081140#metrics.


Although in the meantime I have almost forgotten about the photon localization problem, the phrase "light as foundation of being" is still in my mind. So, it is good news.

P.S. 29-12-24 10:57 Anna, in her comment, mentioned the idea, supported by neuroscience research, that deep metaphysical questions exercise the most ancient parts of our brains. One such question appeared in the comments to this blog: are we predetermined, or are we, perhaps, endowed with (necessarily limited) "free will"? How can we answer this question? I am applying my most ancient part, and I am reasoning, using it, as follows.

Whether we are predetermined or not, there are FACTS. One such fact is that we have senses, that these senses are limited, and that we have brains, rather small compared to the size and complexity of the Universe. Thus our knowledge is limited, and our understanding is even more limited. There are many facts that we know about but do not understand. Since our knowledge is limited, all conclusions are questionable. We can't really be sure of anything. What we know is the tip of an iceberg. So how can we adhere to the conclusion that we are necessarily "predetermined"? Such an idea is irrational. Of course, someone may happen to be predetermined to hold irrational ideas. But I choose to be rational, therefore open-minded. That is what the ancient part of my brain tells me. The newer part can find no fault in that kind of old-brain thinking.

Wednesday, December 25, 2024

Christmas Special - Hyperboloid of Engineer Garin

Christmas special 2024 - Ideals coming soon

 In 1929, Alexei Nikolayevich Tolstoy published a fantasy novel titled Hyperboloid of Engineer Garin.



 If you’ve never heard of it, don’t worry! You can find a synopsis and even some amusing mock-ups of Tolstoy on Wikipedia. But let’s cut to the chase—the story’s charm isn’t just in its sci-fi intrigue but in a delightful mix-up: Tolstoy mistook a hyperboloid for a paraboloid. Classic!

Engineer Garin, the story’s protagonist and a brilliant chemist, invents something eerily similar to a modern laser. His invention brings him all the usual drama: greedy tycoons, life-threatening escapades, and a healthy dose of existential dread. Tolstoy himself couldn’t decide how to wrap up this chaotic tale. His indecision is forgivable—after all, even his grandmother warned him that people like Garin either end up as tyrants or tragedies. So, to keep things flexible, Tolstoy gave his novel not one but two endings. Talk about a writer’s workaround! 


Now, let’s get nerdy for a moment. A hyperbola (or its three-dimensional sibling, the hyperboloid) does have foci, but—and this is important—they don’t behave like the focus of a parabola. A ray of light passing through one focus of a hyperbola reflects off its surface and reemerges as if it came from the second focus. Handy for confusion, right? This mix-up isn’t unique to Tolstoy. Mirror telescopes, for example, often feature a parabolic main mirror to concentrate light and a smaller hyperbolic mirror to redirect those rays toward your eye.


The result? Stunning starlit views… and maybe an unintended homage to Tolstoy’s hyperbolic adventures. Cheers to science, storytelling, and a little festive mix-up of geometry this holiday season!

P.S. 25-12-24 16:05 From the conversation at our Christmas dinner table a while ago:

- Pi R squared.
- No, pies aren't squared, they are round.

Sunday, December 22, 2024

Spin Chronicles Part 28: Left and Right Regular

As it is Sunday, and Christmas Eve is coming soon, it should be an easy talk today. In fact, it is my intention that everything should be easy in my posts. By "easy" I mean that even I myself can understand it. So, as Christmas is coming and light is a foundation of being, we decorated our home with light. The dedicated photographer in our extended family recorded it on a medium, for me to show you - the structure constants of the geometric algebra are visible in the photo, for a trained eye:

Geometric algebra home

For this post I will denote our geometric Clifford algebra of space, Cl(V), by the bold letter A. It is an algebra over complex numbers, and we have a basis e0,e1,e2,e3 in A. For calculation purposes, especially when dealing with matrices, it is more prudent to number the basis differently: e1,e2,e3,e4, with e4 = e0 - the identity, the unit of A. And that is what I am going to use below. Thus every element of A can be written uniquely as

u = u1e1+... + u4e4,

where uμ are complex numbers.

A is endowed with an involution "*"; it is an involutive algebra. We notice that (eμ)* = eμ, μ = 1,...,4. For u,v in A we have (uv)* = v*u*.

A is also endowed with a positive definite scalar product

<u,v> = (u*v)4.

We notice that the basis vectors form an orthonormal basis of A:

<eμ, eν> = δμν.

Once we have a positive-definite scalar product, we have a norm, defined by ||u||2 = <u,u>, and we notice that

||u*|| = ||u||.

We also know, from the previous post,  that A is a Hilbert algebra - we have

<ba,c> = <b,ca*>.
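Here is a quick numerical spot-check of the Hilbert algebra property, in the same (scalar, vector) encoding used in the sketches earlier on this page (the helpers mult, star, ip are my ad hoc names):

```python
import numpy as np

def mult(p, q):
    p0, pv = p; q0, qv = q
    return (p0 * q0 + pv @ qv, p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

def star(p):
    return (np.conj(p[0]), np.conj(p[1]))

def ip(u, v):                     # <u, v> = (u* v)4, the scalar component
    return mult(star(u), v)[0]

rng = np.random.default_rng(4)
rand = lambda: (complex(rng.normal(), rng.normal()),
                rng.normal(size=3) + 1j * rng.normal(size=3))
a, b, c = rand(), rand(), rand()
print(np.isclose(ip(mult(b, a), c), ip(b, mult(c, star(a)))))   # True
```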

As any algebra, so A  acts on itself. It can act from the left, from the right, or from both sides at once. Let us denote these actions by L, R, and LR:

L(u)w = uw,

R(u)w = wu,

LR(u)w = uwu*.

From associativity of the algebra it follows then that left and right actions commute

L(u)R(v) = R(v)L(u),

and evidently

LR(u) = L(u)R(u*) = R(u*)L(u).

The map L from A to End(A) is a faithful representation of A. It is called the left regular representation. Similarly for R. Moreover, it is a *-representation, that is we have

L(u*) = L(u)*.

What does the above equality mean? On the left we have L(u*) - the meaning is clear. On the right we have L(u)*. What does that mean? It is the Hermitian adjoint of the operator L(u) with respect to the Hilbert space scalar product. How is the Hermitian adjoint operator defined? Here is the defining relation:

<L(u)*v,w> = <v,L(u)w>,

or, if you prefer:

<L(u)v,w> = <v,L(u)*w>

Exercise 1. Use the Hilbert algebra property to show that L is indeed a *-representation. Do the same for R.

Note. This is the right place for a side remark. We do not really need it, but, nevertheless, here it is: A is endowed with a norm ||u||. But we also have the faithful representation L of A on the Hilbert space A. To each u in A there corresponds the linear operator L(u) acting on a Hilbert space. This operator has a norm, as is the case with every bounded linear operator. We can therefore equip A with another norm, denoted ||u||', defined as

||u||' = ||L(u)||.

If we do this, we have a nice property:

||u*u||' = ||u||'²,

because operators on a Hilbert space have this property. *-algebras with such a norm are called C*-algebras. So A can be thought of as a particularly simple example of a C*-algebra. There is a whole theory of abstract C*-algebras (in the finite-dimensional case they are the same as von Neumann algebras).

In the discussion under the last post Bjab calculated the matrix form of L(u) in a basis. Taking into account the change of indexing, with index 0 replaced by index 4, L(u) is given by the matrix:

{{u4, -iu3, iu2, u1},
{iu3, u4, -iu1, u2},
{-iu2, iu1, u4, u3},
{u1, u2, u3, u4}
}.

I have moved the first row to the end, the first column became the last, and replaced u0 by u4. Selecting u = eμ, with (eμ)ν = δμν, we get the matrices Lμ calculated by Saša:

L1 = {{0,0,0,1},{0,0,-i,0},{0,i,0,0},{1,0,0,0}},

L2 = {{0,0,i,0},{0,0,0,1},{-i,0,0,0},{0,1,0,0}},

L3 = {{0,-i,0,0},{i,0,0,0},{0,0,0,1},{0,0,1,0}},

L4 = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}.

Matrices Rμ are transposed to the matrices Lμ.
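Here is a quick machine verification of this claim, and of the Hermiticity in Exercise 2 below; the helper action_matrix is mine, built straight from the product formula:

```python
import numpy as np

def mult(p, q):
    p0, pv = p; q0, qv = q
    return (p0 * q0 + pv @ qv, p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

def action_matrix(u, side):
    # matrix of w -> uw (side "L") or w -> wu (side "R") in the e1..e4 basis
    cols = []
    for k in range(4):
        b = np.zeros(4, dtype=complex); b[k] = 1.0
        w = (b[3], b[:3])
        out = mult(u, w) if side == "L" else mult(w, u)
        cols.append(np.concatenate((out[1], [out[0]])))
    return np.stack(cols, axis=1)

for mu in range(4):
    c = np.zeros(4, dtype=complex); c[mu] = 1.0
    e_mu = (c[3], c[:3])
    Lm = action_matrix(e_mu, "L")
    Rm = action_matrix(e_mu, "R")
    print(np.allclose(Rm, Lm.T),            # R_mu is the transpose of L_mu
          np.allclose(Lm, Lm.conj().T))     # and both are Hermitian
```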

Exercise 2. The matrices Lμ and Rμ are Hermitian. Why is it so?

Exercise 3. Why are the matrices Rμ simply the transposes of the matrices Lμ?

The space End(A) - the space of all linear operators acting on A - has complex dimension 16 (= 4×4). We can build the 16 matrices LμRν. There are enough of these matrices to build a basis in End(A). But to be a basis the matrices should be linearly independent. Are they?

One way to address this question is to note that End(A) is also a Hilbert space with a natural scalar product - for X, Y in End(A) the scalar product is given by the trace:

<X,Y> = Tr(X*Y).

So, if our basis happens to be orthonormal, then we automatically have linear independence. Using Mathematica I verified that

<LμRν,LσRρ> = 4 δμσ δνρ

Therefore indeed our 16 matrices LμRν form a basis in End(A). Nice to know.
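For Readers without Mathematica, here is the same check in Python (a direct transcription, nothing assumed beyond the matrices listed above):

```python
import numpy as np

L = [np.array([[0, 0, 0, 1], [0, 0, -1j, 0], [0, 1j, 0, 0], [1, 0, 0, 0]]),
     np.array([[0, 0, 1j, 0], [0, 0, 0, 1], [-1j, 0, 0, 0], [0, 1, 0, 0]]),
     np.array([[0, -1j, 0, 0], [1j, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]),
     np.eye(4, dtype=complex)]
R = [Lm.T for Lm in L]

ok = True
for m in range(4):
    for n in range(4):
        for s in range(4):
            for r in range(4):
                ip = np.trace((L[m] @ R[n]).conj().T @ (L[s] @ R[r]))
                ok = ok and np.isclose(ip, 4.0 * (m == s) * (n == r))
print(ok)   # True: <L_mu R_nu, L_sig R_rho> = 4 delta delta
```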

Let us concentrate now on L(A) - the image of A under the representation L. In other words: the set of all matrices L(u), u in A. L(A) is an algebra, a subalgebra of End(A). While End(A) is 16-dimensional, L(A) is only 4-dimensional. It is closed under Hermitian conjugation: if X is in L(A), then X* is also in L(A). The same is true of R(A). We know that every element in R(A) commutes with every element in L(A).

Exercise 4. Why is it so?

In algebra whenever we have a subalgebra S of an algebra T, we denote by S' the commutant of S in T:

S' = {X in T: XY = YX for all Y in S}.

The fact that every element in R(A) commutes with every element in L(A) can be expressed by the formulas:

R(A) ⊂ L(A)',

L(A) ⊂ R(A)'.

It would be cruel of me to ask the Reader, on Sunday, two days before Christmas Eve,  to prove that, in fact, we have

R(A) = L(A)',

L(A) = R(A)'.

So, I leave the proof for the next post. But, perhaps it is not so cruel to ask the following

Exercise 5. Show that L(A)∩R(A) = C, where C denotes here the algebra of cI, where c is a complex number and I is the identity matrix.
