Friday, January 17, 2025

Spin Chronicles Part 38: Inside a representation

 Suppose ρ is a *-representation of a *-algebra A (with unit 1) on a Hilbert space H. We will assume that both A and H are finite-dimensional. Then one of two things can happen: either ρ is irreducible, or ρ is reducible. Let us examine the case of irreducible ρ first.

Inside a representation

The case of ρ irreducible: every non-zero vector is cyclic.

Suppose ρ is irreducible. That means H has no nontrivial invariant subspaces. Choose any non-zero vector x in H. Consider ρ(A)x:

ρ(A)x = {ρ(a)x: a∈A}.

Then ρ(A)x is a linear subspace of H (Why?). Moreover, ρ(A)x is an invariant subspace (Why?). By definition it must be trivial. Since it is different from {0} (Why?), it must be the whole of H. We say that the vector x is cyclic for ρ(A). A vector is cyclic for an algebra if, by acting on this vector with the algebra elements, you get the whole space (in general a dense subspace, but we are in finite dimensions, so every dense subspace is the whole space).
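For readers who like to see this on a computer, here is a minimal numerical sketch (in Python with NumPy - my choice of tool, not anything from the construction itself), taking A = Mat(2,C) acting on H = C² as a stand-in for an irreducible ρ. Acting with a basis of A on any fixed non-zero x indeed sweeps out all of H:

```python
import numpy as np

# Irreducible toy example: A = Mat(2,C) acting on H = C^2.
# A basis of the algebra: the four matrix units.
basis_of_A = [np.zeros((2, 2), dtype=complex) for _ in range(4)]
for k, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    basis_of_A[k][i, j] = 1.0

x = np.array([0.3 + 1j, -0.7], dtype=complex)  # any non-zero vector

# rho(A)x = span of {a @ x : a in basis of A}
vectors = np.column_stack([a @ x for a in basis_of_A])
print(np.linalg.matrix_rank(vectors))  # prints 2: rho(A)x is all of H
```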

The case of ρ reducible.

Now suppose ρ is reducible. That means there is a nontrivial invariant subspace F. But then F⊥, the orthogonal complement of F in H, is also invariant (Why?). Now we have two representations of A, one on F, and one on F⊥. We choose a non-zero vector in each of them and repeat the reasoning. Since we assume the dimension of H is finite, this procedure must end after a finite number of steps, with all the resulting representations irreducible. So, we have a decomposition

H = F₁ ⊕ F₂ ⊕ ... ⊕ Fₖ,

and vectors x₁, x₂, ..., xₖ, with xᵢ being a cyclic vector for ρ(A) acting on Fᵢ.
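Again a toy sketch (assuming NumPy; the block-diagonal representation ρ(a) = a ⊕ a of Mat(2,C) on C⁴ is my hypothetical example, the simplest reducible case): F = C² ⊕ 0 is invariant, and so is its orthogonal complement.

```python
import numpy as np

rng = np.random.default_rng(5)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# A reducible representation of Mat(2,C) on H = C^4: rho(a) = a ⊕ a
rho = np.block([[a, np.zeros((2, 2))], [np.zeros((2, 2)), a]])

F = np.eye(4)[:, :2]          # F = C^2 ⊕ 0, an invariant subspace
Fperp = np.eye(4)[:, 2:]      # its orthogonal complement

print(np.allclose(Fperp.conj().T @ rho @ F, 0))  # rho(a) F stays inside F
print(np.allclose(F.conj().T @ rho @ Fperp, 0))  # and F⊥ is invariant too
```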

Vectors and states.

Let us go back to the general case. We have A, H, and ρ, with H different from {0}. Let us choose a nonzero vector x in H, and let us suppose that x is normalized, with ||x|| = 1. Then x defines a state functional f on A by

f(a) = (x,ρ(a)x).

It is easy to check (check it!) that f(a*)=f(a)*, f(a*a) ≥ 0 for all a in A, and f(1) = 1.
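A quick numerical check of these three properties (a sketch only; ρ is taken to be the identity representation of Mat(2,C), and the random a is just an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=2) + 1j * rng.normal(size=2)
x = x / np.linalg.norm(x)          # ||x|| = 1

def f(a):                          # f(a) = (x, rho(a) x), with rho the identity rep of Mat(2,C)
    return np.vdot(x, a @ x)       # np.vdot conjugates its first argument

a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

print(np.isclose(f(a.conj().T), np.conj(f(a))))   # f(a*) = f(a)*
print(f(a.conj().T @ a).real >= 0)                # f(a*a) >= 0
print(np.isclose(f(np.eye(2)), 1.0))              # f(1) = 1
```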

Notice that the vectors x and ix define the same state f. In fact, the vectors x and cx, where c is a complex number with |c| = 1, define the same f. So there is some redundancy when replacing states (positive normalized functionals on A) by norm-1 vectors in H. Perhaps the source of this redundancy is already contained in A itself. In quantum theory A is assumed to be an algebra over the complex numbers. What is the physical meaning of the product ca, where c is a complex number and a is an "observable" in A? Nobody knows in the general case. For our Clifford algebra the meaning is geometric, but what does it have to do with quantum theory? Good question.

In the next post we will finally see the GNS construction: starting from a state f, and constructing H, ρ, and a cyclic vector x. For some states f this representation will be irreducible. We will see that this construction gives us one particular x, not just a class of physically "equivalent" cx. There is some mystery here.

Wednesday, January 15, 2025

Spin Chronicles Part 37: What we do with states

Hamilton discovered the algebra of quaternions on October 16, 1843. He wrote about this:

"They started into life, or light, full grown, on the 16th of October, 1843, as I was walking with Lady Hamilton to Dublin, and came up to Brougham Bridge. "

They were strange creatures at that time. Euler had come very close to their discovery already 100 years before. Gauss discovered them 20 years before. But these two great mathematicians did not realize that they, the quaternions, would be important. Hamilton discovered them all by himself and realized their importance. Of course not without obstacles. Lord Kelvin, the famous British physicist, commented on Hamilton's discovery with a warning: "Quaternions came from Hamilton after his really good work had been done; and though beautifully ingenious, have been an unmixed evil to those who have touched them in any way."

Algebra is an unmixed evil indeed. What to do with an algebra? Multiply its elements one by another? Divide one by another? In 1858 Cayley introduced matrices and realized the abstract algebra of quaternions as a particular subalgebra of Mat(2,C). Quaternions became ready to do some real work. Matrices act on vectors, creating new vectors. Cayley had found a representation of Hamilton's algebra.

The representation of abstract algebras as "algebras of operators" is today a big branch of mathematics, with applications to physics, in particular to quantum physics. The set of all (equivalence classes of) irreducible representations of a C*-algebra A has been given a name: the spectrum of A. So algebras have their spectra. Studying the spectra of atoms gave birth to quantum theory. But algebras, it seems, have their own ways of producing some kind of "light".

Here we are studying just one monadic type of algebra - the geometric algebra of space. We should not expect more than one light ray coming from it. Perhaps two. Our algebra is isomorphic to the algebra of biquaternions (complex quaternions), thus double the size of Cayley's algebra. We have already found its irreducible representations by looking for non-trivial left ideals. The left regular representation of our A can be decomposed into a direct sum of two equivalent representations (see Part 31)

A = F ⊕ F′,

where F is the left ideal Ap, p = (1+n)/2, and F′ = A(1−p).

As announced in my previous post, we will take up the same task, but from a different perspective - we will use the Gelfand-Neumark construction (today known under the name GNS construction). The GNS machine has been developed for the much more interesting infinite-dimensional case. Here we will use it for the simplest possible case of a 4-dimensional complex algebra. In finite dimensions we can disregard all talk about continuity, because all finite-dimensional linear maps are automatically continuous. We can disregard all talk about one subspace being dense in another, because every finite-dimensional linear subspace is automatically closed. What remains of the Gelfand-Neumark construction is pure algebra. However, the GNS construction is more "physics oriented", where by "physics" I mainly mean the lessons we have learned from quantum mechanics.

And so we will meet the important concept of "positivity". Here by positivity I shall always mean "non-negativity". Positivity is related to the fact that probabilities are usually considered positive, taking values in the interval [0,1]. For real numbers, being positive can be defined as being a square of some other real number. For complex numbers, being positive is the same as being of the form a*a, where a is another complex number, and a* denotes the complex conjugate of a. Quantum theory suggests that this last definition also works well with noncommutative *-algebras. So we define an element of our *-algebra A to be positive if it can be written as a*a, where now a* is the antilinear anti-automorphism in A. Now if b is positive, it is automatically Hermitian: b* = b (Why?). Hermitian elements in quantum theory are usually called "observables"; positive observables are those that have positive eigenvalues - their spectrum is on the positive real axis. This is not evident from the definition, but it can be proved without great difficulties.

Note: It is rather intuitive, but it takes some real effort to prove from the above definition that the sum of two positive elements is positive. But it is so.
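Numerically the statements above are easy to watch in action (an illustration in Mat(2,C), not a proof): a*a is Hermitian with spectrum on the non-negative real axis, and so is a sum of two such elements.

```python
import numpy as np

rng = np.random.default_rng(1)
rand = lambda: rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

a, b = rand(), rand()
p1 = a.conj().T @ a          # positive elements of Mat(2,C): a*a and b*b
p2 = b.conj().T @ b

for m in (p1, p2, p1 + p2):
    print(np.allclose(m, m.conj().T),             # Hermitian: m* = m
          np.linalg.eigvalsh(m).min() >= -1e-12)  # spectrum on the non-negative real axis
```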

Of course here we meet a big interpretational problem: what is the meaning of the algebra product of two noncommuting algebra elements a, b? Or even, what is the meaning of a+b when a and b do not commute? It is no surprise that Feynman declared that nobody understands quantum theory. Of course many physicists and mathematicians work hard to find a way around these problems (quantum logic, Jordan algebras, noncommutative probability, "effects", etc.), but none of these many proposals has been generally accepted as "the solution". The problem still exists, and waits for a satisfactory answer. "Shut up and calculate" is not a fully satisfactory answer.

The next important ingredient of the construction is the concept of state. A state is a normalized positive linear functional on the algebra A. A linear functional is a linear map from the algebra A to the complex numbers C. For every (finite-dimensional) vector space E we have its dual E' - the space of all linear functionals on E. If vᵢ are the components of a vector v, and fᵢ is any sequence of complex numbers, then f(v) = fᵢvᵢ (summation over i understood) defines an element f of E', and every element of E' is of this form (Why?). Here the fact that A is not just a vector space, but an algebra, plays no role. But we want f to be positive, which is defined as: f(v) is positive for all positive v. That is,

f(a*a) ≥ 0 for all a in A.

Here the algebra structure and its star operation play their role.

Finally, a state must be normalized. Here we use the fact that our algebra has a unit, which we denote simply by 1. Normalization means that we require f(1) = 1. On the left-hand side 1 is the unit of the algebra; on the right-hand side it is the number 1.

We usually interpret f(a) in a probabilistic way, as an "expectation value" of a in the state f. So our requirements on f are: the expectation value of a positive observable should be positive, and the expectation value of an observable taking only the value 1 is 1. Intuitive, but, perhaps, misleadingly simple.

So states assign numbers to algebra elements: complex numbers to general elements, real numbers to self-adjoint (a = a*) elements, positive numbers to positive elements. Numbers we understand better than abstract algebra elements. We call these numbers "expectation values" and instantly feel much better. What can we do with states? The same we do in the kitchen with the ingredients: we mix them. If f₁ and f₂ are states, and t is a real number in the interval [0,1], we can form a new state tf₁ + (1−t)f₂. We can proceed with mixing, adding more and more states to the mixture. By mixing states we lose information - this is known from classical probability, where we mix probability measures. Going in the reverse direction we can try to "un-mix" states. If our state can be decomposed into f₁ and f₂, we try to decompose f₁ and f₂ further, and continue until we finally arrive at states that are not mixtures of other states. These are called "pure states". They contain maximal information about the system - maximal within a given statistical model. This is common to both classical and quantum physics. The main difference between classical and quantum, in this respect, is the fact that in classical statistical mechanics the decomposition of a mixed state into pure states is unique (we say that in classical physics the statistical figure - the convex set of states - is a "simplex"), while in quantum mechanics there is no such uniqueness. This is perhaps one of the main puzzles of quantum theory. Where is this non-uniqueness coming from? And what does it mean? We will meet this non-uniqueness in an example later on; a first taste follows below.
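Here is that first taste, in the standard qubit picture (a hedged aside: I use the fact, not proved here, that states on Mat(2,C) are exactly of the form f(a) = Tr(ρa) with ρ a density matrix): the maximally mixed state is an equal mixture of the two σ₃-eigenstates, and equally well an equal mixture of the two σ₁-eigenstates.

```python
import numpy as np

proj = lambda v: np.outer(v, v.conj())   # pure state |v><v| for a unit vector v

up, down = np.array([1, 0]), np.array([0, 1])                                 # sigma_3 eigenbasis
plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)   # sigma_1 eigenbasis

mix1 = 0.5 * proj(up) + 0.5 * proj(down)     # one decomposition into pure states
mix2 = 0.5 * proj(plus) + 0.5 * proj(minus)  # a completely different one

# Both give the SAME state functional f(a) = Tr(mix a) on all of Mat(2,C):
print(np.allclose(mix1, mix2))  # True: both equal I/2
```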

And then we can use states (mixed or pure) to construct representations of the algebra as algebras of operators acting on Hilbert spaces. Why do we need this? Can't we simply work with "expectation values" and be happy forever? Here comes another quantum mystery. Louis de Broglie associated waves with particles. Waves are famous for the phenomenon of "interference". Waves can "superpose". This is not the same as statistical mixing. Then came Heisenberg with his matrix quantum mechanics, saying bye-bye to the wave picture, but the superposition principle was preserved: we can make superpositions of vectors in the space on which our matrices act. We can treat superpositions within the Hilbert space formalism, but they do not fit the abstract algebra framework. So, by looking for a representation of the algebra, we move from "states" to state vectors. What these state vectors represent beyond reproducing the expectation values given by states - that is again a mystery.

The Gelfand-Neumark construction takes a state and uses it to construct a Hilbert space and a representation of the algebra as an algebra of operators on this space. It realizes the particular state used for the construction as one particular vector in the Hilbert space, and it creates a linear "envelope" of this state by acting with the operators representing the algebra elements on this one distinguished vector. This is the general picture. It will be better understood when we do it on several examples, using our Clifford algebra as a toy.

Sunday, January 12, 2025

Spin Chronicles Part 36: Gelfand-Neumark-Segal

 Why algebra? Well, there is a natural construction that associates algebras with spaces. So, we should ask "why space?", and "why why?". There is a certain magic in algebras, and this magic attracts me. That is how I started to be interested in physics. Through the beauty and usefulness of mathematics involved in our handling of natural phenomena. Numbers and symbols - they seem to have something to do with how we can comprehend the mysteries of the world around us.

In this series we are playing with particular algebras - geometric algebras related to the geometry of space and, perhaps, "time" as well. We play with them like girls play with their dolls, and boys with their toy soldiers (bad boys, I know, I was a bad boy too).

We are playing with particular algebras - geometric algebras

Probably the first algebra we ever meet is the algebra of sets - a model of logic. It has a "sum" (union of sets) and a product (intersection of sets) that model the logical operations "or" and "and". This is nicely embedded into the algebra of real-valued functions through the use of characteristic functions. Algebras of real- or complex-valued functions are commutative. But then came the discovery of quaternions, matrix algebras, and then quantum mechanics in its Heisenberg version, with non-commuting complementary variables. John von Neumann developed it into a whole big theory of "algebras of operators". A new era had begun.

Here we are playing with a toy: the geometric Clifford algebra of space. It is 8-dimensional real by nature, but it is naturally equipped with a complex structure. Then it starts to resemble the algebra of spin 1/2 with its Pauli matrices. Pure coincidence? Or is there something deeper here? To relate the geometric algebra to quantum-mechanical spinors we have followed the standard route of algebraists - we analyzed left ideals, and we were able to re-discover the Pauli matrices. But the followers of von Neumann have found another way of constructing representations of *-algebras, more closely related to their use in quantum field theory. As a result we have today "algebraic quantum field theory", which enables us to handle certain difficulties that have been found with divergent expressions. Today we understand that there is more than one way to provide a Hilbert space than by constructing what is called "the Fock space". The quantum "vacuum" may, perhaps, be not so empty (in Fock space the ground state, the vacuum, is the state with zero particles).

So far we have just one spin, so there is no real need to use these advanced tools. But, perhaps, we can learn something new by applying these tools to our toy? I want to play with what is called the GNS construction. Wikipedia has an article devoted to this subject: "Gelfand–Naimark–Segal construction". In the History section of this article we find:

"Gelfand and Naimark's paper on the Gelfand–Naimark theorem was published in 1943.[3] Segal recognized the construction that was implicit in this work and presented it in sharpened form.[4]."

But I want to start with something deeper, something that will give us a real taste of the problem. So below is the full quote from the opening part of Ref. [4] of the Wikipedia article.

Notes on the Gelfand-Neumark Theorem

RICHARD V. KADISON

Dedicated to Irving Kaplansky and Irving Segal with gratitude and respect.

ABSTRACT. The Gelfand-Neumark Theorem, the GNS construction and some of their consequences over the past fifty years are studied.

1. Introduction

In 1943, a paper [G-N], written by I. M. Gelfand and M. Neumark, "On the imbedding of normed rings into the ring of operators in Hilbert space," appeared (in English) in Mat. Sbornik (see previous paper). From the vantage point of a fifty year history, it is safe to say that that paper changed the face of modern analysis. Together with the monumental "Rings of operators" series [M-vN I, II, III, IV], authored by F. J. Murray and J. von Neumann, it introduced "non-commutative analysis," the vast area of mathematics that provides the mathematical model for quantum physics.

The founders of the theory underlying quantum mechanics (Schrodinger and Heisenberg, primarily) were groping their way toward this mathematics ("wave" and "matrix" mechanics). With his magnificent volume [D], P. A. M. Dirac all but invents the operator algebra and uses Hilbert space techniques to produce powerful conclusions in physics. Of course, simultaneously with his introduction of "rings of operators," von Neumann's book [vN2] appeared, providing a model for "quantum measurement" and some of the fundamentals of quantum statistical mechanics.
Extremely knowledgeable and vitally interested in quantum physics, I. E. Segal, who had been developing commutative and non-commutative harmonic analysis in the Hilbert space context, recognized the construction buried in the Gelfand-Neumark paper- a construction that is basic and crucial for the subject of operator algebras. Just after publication of his "Postulates for quantum mechanics" [Sl], Segal published his groundbreaking "Irreducible operator algebras" [S2] in which that construction is sharpened and made explicit and then used in one of the earliest general studies of (infinite-dimensional) unitary representations of (non-commutative) locally compact groups.

A statement of the Gelfand-Neumark theorem follows.

THEOREM (GELFAND-NEUMARK 1943). If A is an algebra over the complex numbers C with unit I, with a norm A → ||A|| relative to which it is a Banach space for which

||AB|| ≤ ||A|| ||B||,

and ||I|| = 1 (A is a Banach algebra),

and with a mapping (involution) A → A* such that

i) (aA + B)* = a* A* + B* (a* is the complex conjugate of a),
ii) (AB)* = B*A*,
iii) (A*)* =A,
iv) ||A*A|| = ||A*|| ||A||,
v) A*A + I has an inverse (in A) for each A in A,
vi) ||A*|| = ||A||,

then there is an isomorphism φ of A with a norm-closed subalgebra B of the algebra B(H) of all bounded operators on a Hilbert space H such that φ(A*) = φ(A)*, where φ(A)* is the adjoint (in B(H)) of φ(A). Moreover, ||φ(A)|| = ||A|| for all A in A.

Gelfand and Neumark conjecture, in their paper, that conditions (v) and (vi) are superfluous, that is, derivable from the others. They were proved right ten years later on (v) and seventeen years later on (vi).

The Gelfand-Neumark construction allows us to construct representations (both reducible and irreducible) starting from the concept of "state", as it is understood in quantum mechanics. We will do it in the next post. Our geometric algebra A has all the required properties.

Friday, January 10, 2025

Spin Chronicles Part 35: Rotating vectors

Anna asked: "why is the spin in m-direction unitarily equivalent to sigma3 and what does it mean?"

Spinning - Rotations in action

This question was asked in a comment to the previous post, and here I will propose my answer. More than one way of answering is possible, and I will choose just one, which relates to previous discussions.

Let us address a more general question: given any two unit vectors m and n in V, why is the spin in m-direction unitarily equivalent to the spin in n-direction? 

Let us analyze the question first. What exactly do we mean by that? What is "spin in m-direction"? The fact that we are asking about "unitary equivalence" indicates that we have in mind operators, not state vectors. Which operators?

Introduction

Given a unit vector m in V, we consider it as an element of the Clifford algebra Cl(V). Then it acts, in particular, as the operator L(m) of left multiplication by m on Cl(V). But Cl(V) is 4-dimensional complex, and when we say "unitarily equivalent", we mean in a 2-dimensional complex space. So, we probably have in mind some irreducible representation of Cl(V), perhaps one provided by one of its non-trivial left ideals. But we can start answering our question even before specifying the representation. And this is what we will do now. Cl(V) is a Hilbert space; the scalar product <v,w> is defined as the scalar part of v*w. We know that L(u*) = L(u)*, where on the right-hand side we have the Hermitian conjugate of L(u) with respect to the <v,w> scalar product. The spin operator in m-direction is then L(m), the spin operator in n-direction is L(n). In a matrix representation, if we select an oriented orthonormal basis e₁,e₂,e₃ in V and represent the basis vectors by the three Pauli matrices, L(n) would be represented by the matrix σ(n) = n₁σ₁ + n₂σ₂ + n₃σ₃. But we do not have to use a matrix representation yet. We can proceed on the algebraic level, and descend to a particular representation only at the very end. It will be more "geometrical" this way. Instead of using almost mindless matrix multiplication we are going to use scalar products and vector products, which have a rather simple geometrical meaning. Well, we will also use the exponential, but this will be just a compact way of using the sin and cos functions. It is a longer way, but it gives some satisfaction.

In the past, using n² = 1 (if n is a unit vector), we calculated, within Cl(V), exp(itn), with the result

exp(itn) = cos(t) + in sin(t).

But we do not have to calculate anything, we can just define u(t,n):

u(t,n) = cos(t) + in sin(t).

Then u(t,n) is in Cl(V), and we can verify that u(t,n) is unitary:

u(t,n)u(t,n)* = u(t,n)*u(t,n) = 1.

Then U(t,n) = L(u(t,n)) is a unitary operator acting on the Hilbert space Cl(V):

U(t,n)U(t,n)* = U(t,n)*U(t,n) = I (the identity operator on Cl(V)).

We will use this fact in what follows, remembering that it holds for any unit vector n and any real t.
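In the Pauli-matrix representation (an identification we will only need at the very end, but convenient for checking) all of this is easy to verify numerically; a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma(n):                        # sigma(n) = n1*s1 + n2*s2 + n3*s3
    return n[0] * s1 + n[1] * s2 + n[2] * s3

n = np.array([1.0, 2.0, 2.0]) / 3.0  # a unit vector
t = 0.7

u = np.cos(t) * np.eye(2) + 1j * np.sin(t) * sigma(n)   # u(t,n)

print(np.allclose(u, expm(1j * t * sigma(n))))          # exp(itn) = cos(t) + in sin(t)
print(np.allclose(u @ u.conj().T, np.eye(2)))           # u u* = u* u = 1
```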

Back to the original question

Let us return to the original question. We have two unit vectors m and n. There are two cases here. The first, generic case is when m and n are not parallel. There are two exceptions to this generic case: m = n and m = -n. We will discuss the generic case first.

If m is not parallel to n, then the cross product m×n is non-zero. In fact we have

||m×n|| = |sin(θ)|,

where θ is the angle between these vectors. The vector k = m×n/|sin(θ)| is then in V, and of unit norm. Thus

u(t,k) = cos(t) + i k sin(t)

is unitary. We can use our formula for the Clifford product to calculate u(t,k) m u(t,k)*. This is a simple exercise with cross products. The result is:

Exercise 1. Calculate the result.

Exercise 2. Check that with t = θ/2 (or t = -θ/2, I am not sure which, since I have not yet done these calculations!) we get


u(t,k) m u(t,k)* = n.
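Since I admitted to not having done the calculation, here is a numerical experiment (in the Pauli representation, which is faithful, so the result transfers to Cl(V)) that tries both signs of t and reports which one works:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = lambda v: sum(v[i] * s[i] for i in range(3))

rng = np.random.default_rng(2)
m, n = rng.normal(size=3), rng.normal(size=3)
m, n = m / np.linalg.norm(m), n / np.linalg.norm(n)

theta = np.arccos(np.clip(m @ n, -1.0, 1.0))   # angle between m and n
k = np.cross(m, n) / np.sin(theta)             # unit vector along m x n

for t in (theta / 2, -theta / 2):
    u = np.cos(t) * np.eye(2) + 1j * np.sin(t) * sigma(k)
    result = u @ sigma(m) @ u.conj().T         # u(t,k) m u(t,k)*
    print(t, np.allclose(result, sigma(n)))    # which sign of t gives n?
```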

So m and n are unitarily equivalent in Cl(V). But then, since L is a *-representation, U(t,k) gives us a unitary equivalence of L(m) and L(n) acting on Cl(V). This is in four complex dimensions. How to descend to two? Simple: choose a two-dimensional left ideal. For instance choose e₁,e₂,e₃, and choose the left ideal determined by p = (1+e₃)/2, as we have done before. But any other choice will do as well. Since it is a left ideal, it is invariant under the action of U(t,k) = L(u(t,k)). And U(t,k), being unitary on the whole of Cl(V), is also unitary within any invariant subspace.

What remains is the exceptional case of m = -n. This exceptional case can be handled even more simply. Let k be any unit vector in the plane perpendicular to n. Let u = kn (Clifford algebra product). Then

unu* = (kn)n(nk) = knk = -n.

Moreover, u is unitary in Cl(V).

Exercise 3. Verify this last statement.
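A numerical sanity check for this exceptional case (not a substitute for doing Exercise 3 by hand; the Pauli representation and the choice n = e₃, k = e₁ are just for illustration):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = lambda v: sum(v[i] * s[i] for i in range(3))

n = np.array([0.0, 0.0, 1.0])    # take n = e3
k = np.array([1.0, 0.0, 0.0])    # any unit vector perpendicular to n

u = sigma(k) @ sigma(n)          # u = kn, the Clifford product in the matrix picture

print(np.allclose(u @ u.conj().T, np.eye(2)))             # u is unitary
print(np.allclose(u @ sigma(n) @ u.conj().T, -sigma(n)))  # u n u* = -n
```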

Descending from 4 to 2 dimensions (choosing a left ideal) works as before.

Wednesday, January 8, 2025

Spin Chronicles Part 34: Inventory

 

For warming up, let me start this post with some quotations. First, a snippet from an online article titled "Discovery of Electron Spin":

"The discovery note in Naturwissenschaften is dated Saturday, 17 October 1925. One day earlier, Ehrenfest had written to Lorentz to make an appointment for the coming Monday to discuss a "very witty idea" of two of his graduate students. When Lorentz pointed out that the idea of a spinning electron would be incompatible with classical electrodynamics, Uhlenbeck asked Ehrenfest not to submit the paper. Ehrenfest replied that he had already sent off their note, and he added: "You are both young enough to be able to afford a stupidity!"

Ehrenfest's encouraging response to his students’ ideas contrasted sharply with that of Wolfgang Pauli. As it turned out, Ralph Kronig, a young Columbia University PhD who had spent two years studying in Europe, had come up with the idea of electron spin several months before Uhlenbeck and Goudsmit. He had put it before Pauli for his reactions, who had ridiculed it, saying that "it is indeed very clever but of course has nothing to do with reality." Kronig did not publish his ideas on spin. No wonder that Uhlenbeck would later refer to the "luck and privilege to be students of Paul Ehrenfest."

A. Pais, in Physics Today (December 1989)
M.J. Klein, in Physics in the Making (North-Holland, Amsterdam, 1989)"


No one fully understands spinors.

So, we know that the birth of the spin concept was not an easy one. Ideas that would revolutionize physics were initially dismissed, sometimes with sharp words, and only managed to take root under the shelter of intellectual bravery and a bit of recklessness. But here we are, nearly a hundred years later. Spin is no longer a child; it has matured into a cornerstone of quantum mechanics. How is it faring in its adulthood?

To answer this question, let us turn to another fascinating quotation, this time from the comprehensive Wikipedia article "Spinor." In the section titled "Attempts at intuitive understanding," we find the following:

"Nonetheless, the concept is generally considered notoriously difficult to understand, as illustrated by Michael Atiyah's statement that is recounted by Dirac's biographer Graham Farmelo:

No one fully understands spinors. Their algebra is formally understood but their general significance is mysterious. In some sense, they describe the "square root" of geometry and, just as understanding the square root of −1 took centuries, the same might be true of spinors."

Centuries? Do we really have to wait that long? Can we afford to? And what is it, exactly, that remains a mystery? Despite all the successes of quantum theory and the remarkable applications of spin physics—from magnetic resonance imaging to quantum computing—why is it that some of the brightest minds in physics and mathematics are still uneasy? Are they just inveterate malcontents, destined to grumble in the face of progress? Or is there a deeper puzzle lurking beneath the surface, one that defies our current frameworks?

I think the answer depends on the level of curiosity, which varies greatly among individuals. It has nothing to do with optimism or pessimism but rather with an insatiable desire to dig deeper. And so, with this in mind, let us make an inventory of what we have learned so far. Let’s take stock of the journey that has brought us here and explore the questions still waiting for answers.

insatiable desire to dig deeper


Our starting point was a "vector space". Why vectors? Well, we have to start with "something". Vectors are a reasonable starting point. Not excessively simple and not excessively complicated. Why 3 dimensions? This is a good question. There must be a reason for three dimensions. One possible way of dealing with this question would be: "because organic life is possible only in 3D". But that would be an answer that begs other questions. How could some intelligence, that creates it all, know in advance what is possible and what is not? By being able to see into the future, even if only vaguely? Perhaps, but that would lead us into speculations with no end in view. So, let us stay with an empirical fact - we live in 3D space.

Then we added a Euclidean metric to V. Another empirical fact. Perhaps on a large scale the geometry is non-Euclidean, but infinitesimally it is Euclidean enough. So, let us start with Euclidean flat space and see where we can go along this path. That is what we are doing now.

Then we endowed V with an "orientation". This is more iffy. Why this orientation and not the opposite one? It hurts our love for "perfect symmetry". It hurts badly. Yes, there is an empirical fact - we live in a universe with broken parity. But it hurts. So, we look for a possible remedy. Perhaps our universe is a two-sided surface, a boundary separating two higher-dimensional regions? On one side of this surface there is one parity, on the other side the opposite parity? Perhaps the surface is not necessarily of zero thickness? Perhaps, occasionally, the two sides can "communicate" somehow? That is for the future. For now let us accept the fact: we live in a 3D space with a preferred orientation.

Once we have our starting point, we turn on the Clifford algebra machine. It rewards us with the complex geometric *-algebra A, isomorphic to the complex quaternions; it rewards us with three involutions, with a group isomorphic to the group SL(2,C) of special relativity, and with a group isomorphic to SU(2), usually employed in the study of simple spinors. But there are no spinors yet.

In quantum physics spinors transform under an irreducible representation of SU(2). We have not been discussing group representations so far, but we have been discussing representations of A. Once we have a representation of A, we also have a representation of any group contained inside the algebra. And, in quantum theory, to deal with spinors we really need the algebra, not only the group.

To relate the Clifford algebra to quantum theory, algebraists invented the method of searching for minimal ideals. Usually just one left ideal is picked, and it is shown that this enables us to do all the standard tricks with spinors: write equations, add interactions, etc.

We already know how to construct these ideals. We need to choose a Hermitian idempotent p. Each such nontrivial p is of the form p = (1+n)/2, where n is a unit vector in V, a direction in 3D space. Algebraists then define the ideal generated by p as Ap = {up: u∈A}. I have chosen another, equivalent way, one that looks like solving a (right-sided) eigenvalue problem:

Ap = {u∈A: up=u} = {u∈A: un=u}

Note: We can also look for the eigenspace belonging to the eigenvalue 0 of p. This would give us a complementary left ideal. It can also be obtained as the eigenvalue-1 subspace for p′ = (1−n)/2, corresponding to the choice of the opposite direction.

We consider p as an operator acting on A from the right. Since p² = p, the eigenvalues of p are 0 and 1, and we are looking for the subspace belonging to the eigenvalue 1. Similarly for n. Since n² = 1, n has eigenvalues +1 and -1, and we are looking for the subspace Iₙ belonging to the eigenvalue +1.
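Concretely (a sketch, using the identification of A with Mat(2,C) via Pauli matrices): right multiplication by p is a linear operator on the 4-dimensional complex space A, its eigenvalues are 0,0,1,1, and the eigenvalue-1 eigenspace - our left ideal - has complex dimension 2.

```python
import numpy as np

s3 = np.diag([1.0, -1.0]).astype(complex)
p = (np.eye(2) + s3) / 2              # p = (1+n)/2 with n = e3

# Right multiplication u -> u p as a 4x4 operator on A = Mat(2,C) ~ C^4.
# With column-major vec: vec(u p) = (p^T ⊗ I) vec(u).
Rp = np.kron(p.T, np.eye(2))

# sanity check of the vec identity:
u = np.arange(4).reshape(2, 2).astype(complex)
print(np.allclose(Rp @ u.flatten('F'), (u @ p).flatten('F')))

eigvals = np.linalg.eigvals(Rp)
print(np.round(np.sort(eigvals.real), 6))   # eigenvalues 0, 0, 1, 1
print(np.sum(np.isclose(eigvals, 1.0)))     # eigenvalue-1 subspace has complex dim 2
```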

What can be the meaning of selecting just one such ideal? Perhaps it is like selecting a point in space to set up a reference frame there, as Alain Cagnati suggested? We select a reference direction in space to be able to quantify what we measure? This defines our two-dimensional reference Hilbert space. Or it is like choosing a certain perspective, so that we can map the 3D house on a 2D canvas, as suggested by Anna?

So we select a reference direction, set up our axes e₁,e₂,e₃ in V, and E₁,E₂ in Iₙ, and produce our 2D complex Hilbert space with the basis E₁,E₂. We then get the Pauli matrices as representing the left action of the basis vectors of V. Standard quantum theory of spin is reproduced within Iₙ.

Another choice of n would then, hopefully, give us an equivalent description. Except that what "equivalent" means is not completely clear. It needs to be clarified. We will come to this point later on.

Given a projection p = (1+n)/2 we can act with it from the left or from the right (or both ways at once). Given two projections p = (1+n)/2 and q = (1+m)/2, we can ask two questions at the same time: find all u in A satisfying simultaneously both equations:

1. up = u

2. qu = u.

We can interpret 1. as setting the reference direction to be n, and interpret 2. as finding all spin states with spin direction m. This is the intersection of a left ideal with a right ideal. It is always one-dimensional.

Exercise 1. Prove this last statement.
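Not a proof, but a numerical sanity check (again in the Mat(2,C) picture, with random directions n and m): solving up = u and qu = u simultaneously as a nullspace problem gives a one-dimensional solution space.

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = lambda v: sum(v[i] * s[i] for i in range(3))

rng = np.random.default_rng(3)
n, m = rng.normal(size=3), rng.normal(size=3)
n, m = n / np.linalg.norm(n), m / np.linalg.norm(m)

p = (np.eye(2) + sigma(n)) / 2
q = (np.eye(2) + sigma(m)) / 2

# With column-major vec: vec(AXB) = (B^T ⊗ A) vec(X).
Rp = np.kron(p.T, np.eye(2))      # u -> u p
Lq = np.kron(np.eye(2), q)        # u -> q u
I4 = np.eye(4)

# simultaneous solutions of u p = u and q u = u:
M = np.vstack([Rp - I4, Lq - I4])
_, svals, _ = np.linalg.svd(M)
print(np.sum(svals < 1e-10))      # nullspace dimension: 1
```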

What if we do not want to select a reference direction n? We can ask 2. without asking 1. We get a 2-dimensional (complex) right ideal. Strange. We are dealing now with a two-dimensional complex subspace of the 4-dimensional complex space A. 2D subspaces suggest using bi-vectors. Bi-vectors are elements of a Grassmann algebra or a Clifford algebra. Which suggests using the Clifford algebra of the Clifford algebra. Why not? Since A is real 8-dimensional, this would be Cl(8) - the beloved Clifford algebra of many. But that is just dreaming.

In the next post I will use another toy, moving from the pure algebraists to the C* and von Neumann "non-commutativists". They use the so-called GNS construction for playing with reducible and irreducible representations of *-algebras. This will give us another perspective.



Sunday, January 5, 2025

Spin Chronicles Part 33: Consciousness and conscious choices

 Once upon a time, guided by a whisper of intuition—or perhaps a playful nudge from fate—we set out on a journey. At first, our quest seemed clear: to uncover the mysteries of the enigmatic spinors. We had a map (or so we thought) and a destination in mind. But as we wandered deeper into the unknown, the wide road faded into a meandering trail, and the trail became a wisp of a path. Before we knew it, we were in a forest—dense, shadowy, and alive with secrets.

The forest wasn’t on the map, but here we were. And while our grand quest felt like a distant memory, the forest itself had other lessons to teach. At first, we worried: how would we ever find our way? But then we noticed the sweetness in the air, the earthy scent of moss, and the rustling leaves whispering ancient songs. We saw plump berries, glistening with dew—some delicious, others mysterious. And then, as if out of a dream, a gentle roe deer emerged, its soft eyes urging us to follow. It led us to a crystal-clear lake, where the water was cool and refreshing, as though the forest itself offered us a blessing.

... the forest itself offered us a blessing.


Forests, after all, are not just places to get lost; they are places to be found. They nourish the soul, if only we stop to look. So we paused, took a deep breath, and began to notice both the towering trees and the soft carpet of the forest floor. In this moment of stillness, we remembered our original quest. Yes, we were here to understand spinors, but perhaps the forest—the journey—was as important as the destination.

This particular forest is called Geometric Algebra A. It is a simple land, yet rich with wonder. To truly know it, we must not just walk its trails but see its beauty, smell its air, touch its textures, and listen to its tales. Some stories are soft whispers; others roar like waterfalls. This is one of those stories, told by the forest itself.

So, dear traveler, let us begin.

We are in the geometric algebra A. It is simple, but it has a rich structure. We need to feel this structure by sight, by smell, and by touch. We need to be able to hear the stories it tells us, sometimes silently, sometimes in a really loud voice. So this is one of these stories.

A is simple. In algebra this has a precise meaning: an algebra is simple if it has no non-trivial two-sided ideals. A two-sided ideal is a subalgebra that is at the same time a left and a right ideal. We have not considered two-sided ideals yet (and we will not in the future), but it is not difficult to show that A is indeed simple. But we did consider left ideals, and those of a particular form. To construct such an ideal we select a direction (unit vector) n in V, and from it we construct p, with p = p* = p²:

p=(1+n)/2

Then we define, let us call it Iₙ:

Iₙ = {u: up = u}.

This is a left ideal.

Now, the defining equation up = u is equivalently written as un = u (convince yourself that this is indeed the case!). Since n² = 1 (n is a unit vector), and n* = n, it follows that n, considered as an operator acting on A from the right, has two possible eigenvalues, +1 and -1. So the equation un = u means that Iₙ consists of the eigenvectors of n belonging to the eigenvalue +1. This is our left ideal under consideration.

The first thing we notice is that p itself is an element of Iₙ. But that is not all of Iₙ. Iₙ is a complex two-dimensional space. Thus there are two linearly independent (even mutually orthogonal) vectors in Iₙ.

Thus we proceed as follows: we choose an oriented orthonormal basis e₁,e₂,e₃ in V in such a way that e₃ coincides with n. Then e₁ and e₂ are perpendicular to n. Then we define a basis E₁,E₂ in Iₙ by choosing:

E₁ = p,
E₂ = (e₁ − ie₂)/2.

Then magic happens: in this basis the left action of e₁,e₂,e₃ on Iₙ is given exactly by the three Pauli matrices!
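The magic can be watched on a computer (a sketch in the Mat(2,C) picture, with n = e₃; the helper coords is mine, it just reads off the coefficients of E₁ and E₂):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

p = (np.eye(2) + s3) / 2           # n = e3
E1 = p                             # basis of the ideal I_n
E2 = (s1 - 1j * s2) / 2

def coords(u):
    # elements of I_n have the form a*E1 + b*E2 = [[a, 0], [b, 0]]
    return np.array([u[0, 0], u[1, 0]])

for i, e in enumerate([s1, s2, s3], start=1):
    # column alpha of the matrix of L(e_i) is coords(e_i E_alpha)
    mat = np.column_stack([coords(e @ E1), coords(e @ E2)])
    print(f"L(e{i}) =\n{np.round(mat, 12)}")   # prints the three Pauli matrices
```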

We came to the lake in the forest, and it is time to bring out the thinking machine in our brains. We have arrived naturally at the Pauli matrices, which is very rewarding. Except for the fact that there is nothing "natural" in this process! First we had to select a direction n in the otherwise completely isotropic space V. This cannot be deterministic. No deterministic process can lead to the breaking of a perfect symmetry. It can be done only by a conscious choice. So consciousness enters here (or it can be a random choice, but then consciousness is needed to define what precisely a "random choice" is). In practice the choice of a reference direction, and of an orthonormal basis, is made by a conscious "observer" (or by a machine programmed by a conscious "observer"). You can, of course, replace "observer" by "experimental physicist" or "an engineer", but that will not change the idea.

Then we decided to define E₂ the way we did above. Another application of consciousness. Thus, temporarily, I am associating the right action of the algebra on itself, the action of p in up = u, with consciousness. It is not needed for further considerations, but it is something that should be thought about: we have left and right actions of operators on our Hilbert space. In ordinary quantum theory only left actions are considered. What can be the meaning of right actions, if any? But let us abandon philosophy and return to math.

We have the basis Eα (α = 1,2) in Iₙ, and the basis eᵢ (i = 1,2,3) in V, and they are related by the Pauli matrices σᵢ through the following relation:

eᵢ Eα = Eβ (σᵢ)βα.       (*)

Notice that I write the right-hand side by putting the basis vectors first, and the coefficients after. That has the advantage that the matrices transforming the components are transposed relative to those transforming the basis vectors. This way I do not have to transpose anything.

But what happens if we replace our basis Eα by some other orthonormal basis in Iₙ? Then the whole beauty and simplicity of the Pauli matrices will be spoiled. And we like the Pauli matrices so much! And here is the place to demonstrate the power of desire. We want the Pauli matrices, whatever the cost! So, we start thinking. Where there is a desire, there must be a way! So we start looking at our equation (*) from a different point of view. The elements Eα and eᵢ are related (or "correlated") by the Pauli matrices. If we change Eα, perhaps eᵢ also needs to be changed so that the correlation stays the same? We try our great idea of saving our love - the sigmas. The result is condensed in the following statement:

Proposition. There is one and only one way to have Eq. (*), with the Pauli matrices in it, valid for all orthonormal bases in Iₙ. It goes as follows: if Eα is replaced by E′α, related to Eα by a 2×2 unitary matrix A of determinant 1 (an element of SU(2)):

E′α = Eβ Aβα,

then eᵢ must be replaced by e′ᵢ, related to eᵢ by a real orthogonal 3×3 matrix R(A) (an element of SO(3)):

e′ᵢ = eⱼ R(A)ⱼᵢ,

where the relation between A and R is

AσᵢA* = σⱼ R(A)ⱼᵢ.

Proof. Left as a straightforward exercise, though one that requires the use of indices.
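The Proposition can at least be probed numerically (a sketch, assuming NumPy and SciPy; R(A) is extracted from the relation above via R(A)ⱼᵢ = ½Tr(σⱼAσᵢA*)):

```python
import numpy as np
from scipy.linalg import expm

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(4)
h = sum(rng.normal() * si for si in s)   # a random traceless Hermitian element
A = expm(1j * h)                         # a random element of SU(2)

# R(A)_ji extracted from  A s_i A* = s_j R(A)_ji  via  R_ji = Tr(s_j A s_i A*)/2
R = np.array([[np.trace(s[j] @ A @ s[i] @ A.conj().T).real / 2
               for i in range(3)] for j in range(3)])

print(np.allclose(R.T @ R, np.eye(3)))   # R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0)) # det R = 1: R is in SO(3)
```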

And this way we have accomplished something that was left unexplained in the October 16 post, Part 3: Spin frames.

What we see now is that this correlation between spin frames Eα and orthonormal frames eᵢ is not so "natural" at all. It requires certain arbitrary human-made choices. It has little to do with the "true state of affairs". Spin frames and orthonormal frames are two different realities. Yes, they can be "correlated", but this correlation is artificial. So, the question remains: what are spinors? Elements of a left ideal? But which one? And why this one, and not some other?

Exercise 1. Do the calculations needed to prove the Proposition.

Exercise 2. For any x in A denote by Ax the set

Ax = {ax: a in A}

Show that Ax is a left ideal. Show that it is the smallest left ideal containing x. With Iₙ and p = (1+n)/2, show that Iₙ = Ap. Why is Ap not the same as An? What is An?

Exercise 3a. Show that if u in A is invertible, it cannot be contained in any of the Iₙ's.

Exercise 4. Show that the * operation transforms every left ideal into a right ideal, and conversely.

Exercise 5. If I₁ and I₂ are two left ideals, is their intersection also a left ideal? If Iₗ is a left ideal and Iᵣ a right ideal, is their intersection a two-sided ideal?
