Thursday, January 23, 2025

Spin Chronicles Part 40: GNS construction

An inspiring thought from the biography of Walter Russell, "The Man Who Tapped the Secrets of the Universe" by Glenn Clark:

"You say that the thought which flows through you," I interrupted, "is itself never created; the thought belongs to the universe; it is only the form of the thought that is created?"
"Yes," he replied. "I can go back to the answer which Rodin gave to Lillian Russell when she asked him if it would be very difficult to learn to be a great sculptor. 'No, Madam,' he replied, 'it is not difficult. It is very simple. All you have to do is to buy a block of marble and knock off what you do not want.'"


All you have to do is to buy a block of marble
and knock off what you do not want.

So simple: just knock off what you do not want! I will try to follow this advice.

This is a continuation of Part 39: Inside a state. We have a finite-dimensional complex *-algebra A, with unit 1, and we have a state f. Thus f is a linear functional on A, satisfying f(a*a) ≥ 0 for all a in A, and f(1)=1. We have seen that f then has the Hermitian property f(a*) = f(a)*, and satisfies the Cauchy inequality (we use essentially only the Cauchy part of Cauchy–Bunyakovsky–Schwarz):

|f(a*b)|² ≤ f(a*a) f(b*b).
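The inequality can be sanity-checked numerically. A minimal sketch, assuming (purely for illustration - the post has not yet constructed any states) the vector state f(a) = a[0,0] on A = Mat(2,C):

```python
import numpy as np

# Numerical check of |f(a*b)|^2 <= f(a*a) f(b*b) for an assumed state:
# f(a) = <e0, a e0> = a[0,0] on A = Mat(2,C). This particular f is an
# illustrative guess, not something derived in the post so far.
def star(a):
    return a.conj().T        # the *-operation: conjugate transpose

def f(a):
    return a[0, 0]           # f(1) = 1, and f(a*a) = |column 0 of a|^2 >= 0

rng = np.random.default_rng(0)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

lhs = abs(f(star(a) @ b)) ** 2
rhs = (f(star(a) @ a) * f(star(b) @ b)).real
assert lhs <= rhs + 1e-12    # the Cauchy inequality holds
```

Here f(a*b) is the ordinary scalar product of the first columns of a and b, so the inequality reduces to textbook Cauchy–Schwarz in C².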

What can we do with such an f?

Well, here we can also ask another, even more relevant, question: how do we know that the set of states is non-empty? Why should we analyze the consequences of some assumptions if the set of mathematical objects satisfying them is empty, or trivial and uninteresting?

In mathematics there are two main approaches to such existential problems. We can try using the axiom of choice. Sometimes it works, sometimes not. It is a rather unsatisfactory solution. That is how we prove the existence of functions that are not Borel-measurable: afterwards we know they exist, but we cannot give even one concrete example. Not so useful in applications. The second method is to "construct". Can we "construct" a state? Perhaps we can "construct" every state? By "construction" I mean using bricks that are already at our disposal. Fortunately we can do it. We will take the constructive path later on, after we are done here. So, have faith - we are dealing with objects that exist in abundance.

The Cauchy inequality for states is quite similar in form to a well-known inequality for the Hilbert space scalar product:

|(x,y)|² ≤ (x,x) (y,y).

This suggests that we can try to use f to define a scalar product (a,b)f on A:

(a,b)f ≝ f(a*b).

But in a Hilbert space we should have the property that (x,x) = 0 implies x=0. With f we can have f(a*a) = 0 even if a≠0. It would be nice to see examples of f with this property, but for examples we will have to wait until we know how to construct states.
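Anticipating the later construction, here is one concrete (assumed) instance of this degeneracy, again with the illustrative vector state f(a) = a[0,0] on Mat(2,C):

```python
import numpy as np

# An assumed example of a degenerate state: f(a) = a[0,0] on A = Mat(2,C).
# For a = E_01 (the matrix unit with 1 in row 0, column 1) we get
# a*a = E_11, so f(a*a) = 0 although a != 0.
def f(a):
    return a[0, 0]

a = np.array([[0, 1], [0, 0]], dtype=complex)   # a = E_01, nonzero
aa = a.conj().T @ a                             # a*a = E_11
assert f(aa) == 0                               # f(a*a) = 0 ...
assert np.any(a != 0)                           # ... yet a is not zero
```

So (a,a)_f = 0 does not force a = 0, which is exactly why the quotient construction below is needed.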

In Part 39 we have seen that

{a: f(a*a)=0} = {a:f(a*b)=0 for all b},

which implies that the set {a: f(a*a)=0} is a linear subspace of A. Thus the "bad" vectors - the norm-zero vectors - form a linear subspace. We want all these vectors to become "zero" vectors. In linear algebra there is a standard way of performing such a task: take the quotient of the vector space by the unwanted subspace. So, we define the "unwanted" subspace

If = {a∈A: f(a*a)=0} = {a∈A: f(a*b) = 0, ∀ b∈A}.

We define Hf = A/If.

This means that we introduce an equivalence relation in A: a~b if (a-b)∈If, and we define Hf as the set of equivalence classes: [a] ≝ a+If. Then Hf automatically becomes a vector space: [a]+[b] ≝ [a+b], λ[a] ≝ [λa]. In particular [0] = If. One easily checks that these definitions are correct (if you have never done it before - do it, check it!). The whole subspace If becomes the zero vector of the quotient space. On Hf we now define the scalar product:

([a],[b])f ≝ f(a*b).

Notice that this is a good definition: if [a']=[a], [b']=[b], then a'=a+u, b'=b+v with u,v in If. Then

f(a'*b')=f((a+u)*(b+v))=f(a*b)+f(a*v)+f(u*b)+f(u*v).

Now f(u*b) and f(u*v) are zero because u is in If. What to do with f(a*v)? We use the Hermitian property of f: f(a*v)=f((a*v)*)*=f(v*a)*, which is zero because v is in If. So f(a'*b')=f(a*b) - the product ([a],[b])f is well defined; it does not depend on the choice of representatives of the equivalence classes. It is a matter of writing a couple of lines to check that ([a],[b])f is linear in the second argument and anti-linear in the first - as it should be.
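The whole quotient can be computed in coordinates. A sketch, again for the assumed vector state f(a) = a[0,0] on Mat(2,C): in the basis of matrix units E_ij, the form (a,b)_f is encoded by a Gram matrix, whose null space is If and whose rank is dim Hf.

```python
import numpy as np

# Computing I_f and dim H_f = dim(A/I_f) for the assumed vector state
# f(a) = a[0,0] on A = Mat(2,C). In the basis E_ij of A, the form
# (a,b)_f = f(a*b) has Gram matrix G[m,n] = f(E_m* E_n).
f = lambda a: a[0, 0]

E = []                                   # the four matrix units E_ij
for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    m = np.zeros((2, 2), dtype=complex)
    m[i, j] = 1
    E.append(m)

G = np.array([[f(E[m].conj().T @ E[n]) for n in range(4)]
              for m in range(4)])

dim_Hf = np.linalg.matrix_rank(G)        # rank of G = dim H_f
dim_If = 4 - dim_Hf                      # null space of G = I_f
assert (dim_Hf, dim_If) == (2, 2)        # here I_f = {a : first column of a is 0}
```

For this f the Gram matrix comes out diagonal, diag(1,0,1,0): the classes of E_00 and E_10 survive, while E_01 and E_11 fall into If.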

In a Hilbert space we should have the property that the only vector orthogonal to all vectors is the zero vector. Do we have it here? Suppose ([a],[b])f=0 for all b. That means f(a*b)=0 for all b. That means a is in If. That means [a]=[0].

What about zero norm vectors? Suppose ([a],[a])f=0. That means f(a*a)=0. That means a∈If. That means [a]=[0] - the zero vector of Hf.

So far so good. We have constructed a Hilbert space (finite-dimensional Hilbert spaces are also called "unitary" spaces; "Hilbert" is usually reserved for the infinite-dimensional case). But there is more. I used the symbol If for a reason: If is a left ideal in A!

Indeed, suppose a∈If, and u∈A is arbitrary. Does it follow that ua∈If? We check:

f((ua)*b)= f((a*u*)b)= f(a*(u*b))=0.

So it works. If is a left ideal! In previous posts we used left ideals to construct a representation of A. Here we do something that looks like the complete opposite: we have a nice left ideal, and we are getting rid of it! What a shame! Yet there is method in this madness. The fact that If is a left ideal will now let us construct a *-representation of A on Hf. Let us see how it works; only after doing that will we be able to understand what is going on here.
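The left-ideal property can also be seen concretely in the running (assumed) example f(a) = a[0,0] on Mat(2,C), where If consists of the matrices with vanishing first column:

```python
import numpy as np

# For the assumed vector state f(a) = a[0,0] on Mat(2,C), the ideal
# I_f = {a : first column of a is zero}. Multiplying on the left by any
# u keeps the first column zero, so u a stays in I_f.
f = lambda a: a[0, 0]
rng = np.random.default_rng(3)

a = np.array([[0, 1], [0, 1j]])              # a in I_f: zero first column
assert np.isclose(f(a.conj().T @ a), 0)      # indeed f(a*a) = 0

u = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
ua = u @ a                                   # left multiplication by u
assert np.isclose(f(ua.conj().T @ ua), 0)    # u a is again in I_f
```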

So we define a representation by an almost evident formula:

ρf(a)[b]≝[ab].

Is it well defined? Is it a representation? Is it a *-representation? Let's check. Suppose [b]=[b']. Is it then true that [ab]=[ab']? If [b]=[b'], then b'=b+u with u∈If. Then ab'=ab+au. But If is a left ideal, therefore au∈If. Therefore [ab]=[ab'], and so ρf(a) is well defined. Checking that ρf(ab)=ρf(a)ρf(b) for all a,b in A is then a matter of using associativity - not a big deal. What about ρf(a*)=ρf(a)*? Here we need to use the scalar product of Hf.

Exercise 1. Do it.
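A numerical sanity check (not a substitute for the exercise's proof): for the assumed vector state f(a) = a[0,0] on Mat(2,C), the map [b] ↦ (first column of b) identifies Hf with C², and ρf(a)[b] = [ab] becomes ordinary matrix action, since (ab)e0 = a(be0).

```python
import numpy as np

# Under the identification H_f ~ C^2 via [b] -> b e0 (valid for the assumed
# state f(a) = a[0,0]), rho_f(a) acts as the matrix a itself. We check the
# homomorphism property and the *-property of Exercise 1 numerically.
rng = np.random.default_rng(1)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

def rho(a):
    return a                 # rho_f(a) on H_f ~ C^2 is just a

# rho is a homomorphism: rho(ab) = rho(a) rho(b)
assert np.allclose(rho(a @ b), rho(a) @ rho(b))
# and a *-representation: rho(a*) = rho(a)* w.r.t. the scalar product on H_f
assert np.allclose(rho(a.conj().T), rho(a).conj().T)
```

Of course in this identification the check is almost tautological; the content of Exercise 1 is that it holds for every state f, directly from the Hermitian property.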

We have prepared the scene. It is time to introduce the main actor. Our algebra has a distinguished element - the unit 1. We set

Ωf ≝ [1].

It is a vector in Hf.

Let us calculate the norm squared of Ωf:

(Ωf, Ωf)f = ([1],[1])f = f(1*1) = f(1) = 1.

So Ωf is a unit vector. Moreover, Ωf is a cyclic vector for ρf. Let us verify this. Take any vector [a] in Hf. Then [a]=[a1]=ρf(a)[1]=ρf(a)Ωf. So every vector of Hf can be obtained by acting with a representation operator ρf(a) on Ωf.

But there is more.

To make it more transparent, from now on we will skip the subscript f. We have

(Ω,ρ(a)Ω)=([1],ρ(a)[1])=([1],[a])=f(1*a)=f(a).

Thus the value of our positive functional, f(a), is recovered as the expectation value of the representing operator ρ(a) in the (vector) state Ω. We ended up with a Hilbert space, a *-representation, and a distinguished cyclic vector that realizes the functional as a quantum-mechanical expectation value. A nice reward for the construction work.
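Closing the loop in the running (assumed) example f(a) = a[0,0] on Mat(2,C): with Hf identified with C² via [b] ↦ be0, the cyclic vector is Ω = [1] ↦ e0, and the expectation value reproduces f exactly.

```python
import numpy as np

# For the assumed vector state f(a) = a[0,0] on Mat(2,C), with H_f ~ C^2
# via [b] -> b e0, the cyclic vector Omega = [1] maps to e0, and
# f(a) = <Omega, rho(a) Omega> where rho(a) acts as the matrix a.
f = lambda a: a[0, 0]
omega = np.array([1, 0], dtype=complex)      # Omega = [1] -> e0

rng = np.random.default_rng(2)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

expectation = omega.conj() @ (a @ omega)     # (Omega, rho(a) Omega)
assert np.isclose(expectation, f(a))         # f is recovered
assert np.isclose(omega.conj() @ omega, 1)   # Omega is a unit vector

# cyclicity: any v in H_f is rho(a_v) Omega for a_v with first column v
v = rng.normal(size=2) + 1j * rng.normal(size=2)
a_v = np.column_stack([v, np.zeros(2)])
assert np.allclose(a_v @ omega, v)
```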

So this is the Gelfand-Neumark-Segal (GNS) construction in its finite-dimensional baby version.

There are still unanswered questions: How does this relate to our previous constructions with left ideals? How does one construct states? And can we cut off what we do not want? For instance, I do not want the positivity prison. What will happen if we drop the positivity restriction? There are people who would like to cut even more. For instance, some do not like real numbers; they prefer finite fields. Others would go beyond finite fields to non-Archimedean fields, like the p-adic numbers... Well, if you sculpt, you must be careful. If you cut too much, you may cut off the nose, and your sculpture will become dysfunctional. So, step by step, carefully.

We will come back to the hanging questions in the next post.

P.S. 24-01-25 14:26 From "Okay Then News"

“Academic publishing is easier when sticking to approved narratives, but this retards progress. Contrarian ideas are vital to the expansion of knowledge. Epistemologists will understand that the search for truth is akin to carving a marvelous sculpture out of a block of marble. Much must be discarded, but that is part of the process. We chip away the detritus until we at last stumble upon the beauty that is the truth within. Something is rotten in the Academy. We need to improve. One way to improve is to properly address financial and other conflicts of interest. Another way is to entertain contrarian ideas, to indulge those occupied with “taboo science,” while still adhering to time-tested scientific principles and methods.”

34 comments:

  1. Leaving typos to Bjab, as it's his/her specialty, I wonder what exactly is the "zero" vector?
    For example, any scalar (bi)quaternion would be a zero vector to all purely vector (bi)quaternions as scalar parts of their products would always be 0, i.e. they are always orthogonal, and the same would be true for any two objects in different dimensional spaces, like scalars and vectors, vector in e3 direction to planar vectors in e1-e2 plane, and so on. Also, our unit vector n is perpendicular to all vectors in T_n^C, could it be considered a zero vector to the space of states/vectors within T_n^C? In addition, a cross product of all collinear vectors results in the zero vector, and zero vector could also be said to be collinear with every other vector as their cross product is again always zero vector, in addition to being orthogonal to all other vectors as their scalar or dot product is also always 0.
    So, what exactly do we describe with a "zero" vector?

    1. I mean, in STR a light cone is basically defined as x^2=0, or the invariant mass of particles as the square of their 4-momentum is also 0 for a photon. So when you said that we want all bad vectors with their norm 0 to become zero vectors, it appeared like some important vectors might be thrown into the zero vector basket.

    2. To your last comment: in x^2=0 the negative part cancels the positive part. But in the post we do not have negative part, only positive. In this case norm zero can be considered as not giving us anything useful.

    3. To your first question: it will be more clear with the next post. But think of it like that. Suppose we have two projections P and I-P. Then P(I-P) = 0. (I-P) is for P a "null vector". When we get rid of it, what remains is P, since PP=P. It will be something like that,

    4. OK, so I should obviously refrain myself from drawing parallels and analogies to the STR, and also probably to other possibly indefinite metric spaces mentioned in previous post. Thanks.

    5. Yeah, that P(I-P)=0 also crossed my mind, as in that comment several posts back it was shown that (1+n)/2 and (1-n)/2 are in fact idempotents producing orthogonal components as p and 1-p.

    6. Orthogonal components should be orthogonal complements.
      Autocorrect on mobile phone is sometimes more pain than being helpful.

    7. "Leaving typos to Bjab, as it's his/her specialty"

      In Polish every Polish female name ends with "a"

    8. @Bjab "Polish female name ends with "a" "
      why assume a nickname is a name? :)))

    9. "In Polish every Polish female name ends with "a" "

      In the post,
      https://ark-jadczyk.blogspot.com/2024/10/the-quirks-of-quaternions.html,
      Ark depicted you as a female fairy, and besides, I have never encountered a Polish personal name or nickname "Bjab" except here, either male or female, so until proven otherwise you can easily be both, that is an actual woman or an actual man in reality. FWIW.

  2. define
    Hf as the set ->
    unnecessary newline

    a vector space: [a]+[b] ≝ [ab] ->
    a vector space: [a]+[b] ≝ [a+b]

    if [a']=[a], [b']=b ->
    if [a']=[a], [b']=[b]

    construction work ->
    construction work.

    1. (Ω,ρ(a)Ω)=([1],ρ(a)1) ->
      (Ω,ρ(a)Ω)=([1],ρ(a)[1])

    2. Calculate the norm of Ωf :
      (Ωf ,Ωf)f = ([1],[1])f = f(1*1)= f(1) = 1. ->

      Isn't it the square or the norm?

    3. Oh, it was autocorrection -
      I wrote "Isn't it the square or the norm?" instead of
      "Isn't it the square of the norm?"

  3. 'check that ([a],[b])f is linear in the second argument and anti-linear in the first'
    Please, confirm, i am not quite sure: does anti-linearity mean that
    ([ka],[b])f = k*([a],[b])f
    and the sum remains as it is, no conjugation appears anywhere:
    ([a+c],[b])f = ([a],[b])f + ([c],[b])f ?

  4. 'we use Hermitian property of f: f(a*v)=f((a*v)*)*=f(v*a)*'
    It is not quite clear to me how the 'Hermitian property' is used here.
    Isn't it the equality of a functional to its conjugate, f=f*?
    Probably, f(v*a)* should be read as f*(v*a)?

    1. I never defined f*. Of course I could have defined it as
      f*(a) = f(a*). On the other hand f(v*a) is a complex number, so we know what f(v*a)* is - the complex conjugate of f(v*a).
      The Hermitian property was defined (I think I did it) as f(a*)=f(a)*.
      The way you suggest would be more elegant. I was trying not to introduce new notation (f* in this case) when it can be avoided.
      Does it answer your question?

    2. Yes, thank you. But it is a bit clearer for me in my notations.
      If we can use f(a*) = f*(a), then showing ρ(a*)=ρ(a)* (Ex.1) is not hard. Consider the scalar products:
      ([b], ρ(a*)[b]) = ([b], [a*b]) = f(b*a*b) and
      ([b], ρ(a)[b])* = ([b], [ab])* = f*(b*ab) = f(b*ab)* = f((ab)*b) = f(b*a*b)
      We arrived at the same result, => ρ(a*)=ρ(a)*.
      Is this acceptable?

    3. Yes. That's fine. It is a little bit technical, formal, but sometimes there is no way to avoid a formal check of an important property. The devil may always be hiding in the details.

    4. That's right. I was hardly aware of what i was doing, rather some formal actions. Still trying to realize.
      Returning to functional f,
      f((va*)*) = f(av*) = (f(va*))* but not = f(va*), ok?

    5. Precisely, f(av*) is the complex conjugate of f(va*)

  5. Today i’ve stumbled upon one misunderstanding coming from the roots (Part 27).
    What does isomorphism Cl(V) = End(Cl(V)) mean?
    It is written in my note that “any element of Cl(V) is mapped to an endomorphism of spinor space S”.
    But endomorphism of spinor space is not the same as End(Cl(V))?! We know that endomorphisms of the spinor space S {f1=(1,0;0,0), -f2=(0,0;1,0)} are described by Pauli matrices:
    sigma1 (φ,ψ;0,0) = (ψ,φ;0,0)
    sigma2 (φ,ψ;0,0) = i (φ,ψ;0,0)
    sigma3 (φ,ψ;0,0) = (φ,-ψ;0,0)
    So, can we say that the Pauli matrices give a representation of the whole Cl3 as certain endomorphisms of the ideal S, i.e., of the spinor space?

    1. To formulate more precisely: whether algebra Cl(3) is completely defined by the space of its ideals and their endomorphisms (motions)? It is so for any Cl(V) or only for Cl3?

    2. OK. Let me try to clarify.
      One possible way in which a spinor space is defined is: "minimal left ideal of the Clifford algebra". This is for any V. In the case of Cl(3) any non-trivial left ideal is "minimal". That is why I skipped the "minimal" part in our discussion.
      We represent our Clifford algebra elements by endomorphisms of a left ideal. This is a faithful and irreducible representation. By Schur's Lemma the commutant is trivial. That implies that the image of Cl(V) in End(S) is the whole End(S) (by the "double commutant" theorem, which we did not discuss). Therefore End(S) is isomorphic to Cl(V).
      Of course End(S) is not the same as End(Cl(V)). But we have a faithful representation of Cl(V) in End(Cl(V)), either by left or by right actions.
      Does it answer your questions?

    3. "But we have a faithful representation of Cl(V) in End(Cl(V))"
      in End(Cl(V)) --> in End(S) ?
      Thank you, now i am convinced that End(S) is isomorphic to Cl(V), this is the key point. But didn't you show that somewhere in the Blog previously? Which part of it should i revise then?

  6. Ouch, sorry for that "Anonymous" above. Got a new device and have not tuned it properly yet

  7. I am trying to understand more about endomorphisms of ideals and their isomorphisms to Cl(V). Search through the Internet instantly gives the link https://arxiv.org/pdf/2103.09767 to the paper "On the bundle of Clifford algebras over the space of quadratic forms" by Arkadiusz Jadczyk.
    It's just another day and another attempt to unveil the poetry of maths...

  8. @Ark, above you confirmed that End(S) is isomorphic to Cl(V), and i believed you. But i'm still searching for some digestible proof of this fact. You mentioned the Von Neumann bicommutant theorem.
    It relates the closure of the operator algebra (von Neumann algebra) to its double commutant; i don't see how it helps in our case.
    This theorem is related to the Jacobson density theorem, which is a generalization of the Artin-Wedderburn theorem - that same Artin who appeared in the Blog recently?
    The latter theorem shows that any primitive ring can be viewed as a "dense" subring of the ring of linear transformations of a vector space V. This seems closer to my problem, since we probably can consider our space of ideals S as such a dense subring and our endomorphisms as the ring of linear transformations of V.
    Surely, these serious instruments are superfluous for my modest problem; perhaps you know some baby versions of them apt for my needs?

    1. Now that you have asked this question I see that using the bicommutant theorem is not really necessary. We have S and it is two-dimensional complex. Basis vectors e^i of Cl(V) are represented by the three Pauli matrices and the identity 1 is represented by the identity matrix I. We have checked that we have a faithful representation. So it is 1-1, an injection. To show that Cl(V) is isomorphic to End(S), that we have a bijection, we need to know that the representation is onto, that it is surjective. The simplest way is by realizing that A=Cl(V) contains all complex linear combinations of 1 and e^i, and that complex linear combinations of I and the Pauli matrices span all of Mat(2,C), that is all of End(S). End of proof.
      The other method is: call our representation rho. Then rho(A) is a *-algebra inside Mat(2,C). The commutant rho(A)' of rho(A) is trivial,
      rho(A)' = CI
      - every matrix commuting with all three Pauli matrices is a complex constant times the identity matrix. Then we can use the bicommutant theorem for *-algebras:

      rho(A) = rho(A)'' = (CI)' = Mat(2,C).

      But the first method is enough.


