Thursday, January 23, 2025

Spin Chronicles Part 40: GNS construction

An inspiring thought from the biography of Walter Russell, "The Man Who Tapped the Secrets of the Universe" by Glenn Clark:

""You say that the thought which flows through you," I interrupted, "is itself never created; the thought belongs to the universe; it is only the form of the thought that is created?"
"Yes," he replied. "I can go back to the answer which Rodin gave to Lillian Russell  when  she  asked him  if  it would be very difficult to learn to be a great sculptor. 'No, Madam,' he replied, 'it is not difficult. It is very simple. All you have to do is to buy a block of marble and knock off what you do not want.'"


All you have to do is to buy a block of marble
and knock off what you do not want.

So simple: just knock off what you do not want! I will try to follow this advice.

This is a continuation of Part 39: Inside a state. We have a finite-dimensional complex *-algebra A, with unit 1, and we have a state f. Thus f is a linear functional on A satisfying f(a*a) ≥ 0 for all a in A, and f(1)=1. We have seen that f then has the Hermitian property f(a*) = f(a)*, and that it satisfies the Cauchy inequality (we use essentially only the Cauchy part of Cauchy-Bunyakovsky-Schwarz):

|f(a*b)|² ≤ f(a*a) f(b*b).
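
For readers who like to see things numerically: below is a minimal Python/numpy sketch that checks this inequality on random elements. It assumes a toy algebra and a toy state - A the 2×2 complex matrices and f(a) = a[0,0], the top-left entry - chosen only for illustration, not taken from the post.

    # Numerical check of |f(a*b)|^2 <= f(a*a) f(b*b) for a toy state.
    # Toy algebra: A = 2x2 complex matrices; toy state: f(a) = a[0,0],
    # so that f(1) = 1 and f(a*a) = |a00|^2 + |a10|^2 >= 0.
    import numpy as np

    rng = np.random.default_rng(0)

    def f(a):
        return a[0, 0]

    def random_element():
        return rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

    for _ in range(1000):
        a, b = random_element(), random_element()
        lhs = abs(f(a.conj().T @ b)) ** 2
        rhs = (f(a.conj().T @ a) * f(b.conj().T @ b)).real
        assert lhs <= rhs + 1e-9
    print("Cauchy inequality verified on 1000 random pairs")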

What can we do with such an f?

Well, here we can also ask another, even more relevant, question: how do we know that the set of states is non-empty? Why should we analyze the consequences of some assumptions if the set of mathematical objects satisfying these assumptions is empty, or trivial and uninteresting?

In mathematics there are two main approaches to such existential problems. We can try using the axiom of choice. Sometimes it works, sometimes not. It is a rather unsatisfactory solution. That is how we prove the existence of functions that are not Borel-measurable. After that we know they exist, but we cannot give even one concrete example. Not so useful in applications. The second method is to "construct". Can we "construct" a state? Perhaps we can "construct" every state? By "construction" I mean using bricks that are already at our disposal. Fortunately we can do it. We will take the constructive route later on, after we are done here. So, have faith - we are dealing with objects that exist in abundance.

The Cauchy inequality for states is quite similar in form to a well known inequality for Hilbert space scalar product:

|(x,y)|² ≤ (x,x) (y,y).

This suggests that we can try to use f to define a scalar product (a,b)f on A:

(a,b)f ≝ f(a*b).

But in a Hilbert space we should have the property that (x,x) = 0 implies x=0. With f we can have f(a*a) = 0 even if a≠0. It would be nice to see examples of f with this property, but examples will have to wait until we know how to construct states.
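
Just as a hedged preview (the toy state below is chosen only for illustration, not a construction we rely on): for f(a) = a[0,0] on the 2×2 complex matrices, any matrix with vanishing first column gives f(a*a) = 0 while a ≠ 0.

    # An example of f(a*a) = 0 with a != 0, for the toy state f(a) = a[0,0]:
    # f(a*a) = |a00|^2 + |a10|^2, so any matrix whose first column vanishes
    # is a nonzero "null" element.
    import numpy as np

    def f(a):
        return a[0, 0]

    a = np.array([[0, 1],
                  [0, 0]], dtype=complex)   # a != 0, but its first column is zero
    print(f(a.conj().T @ a))                # prints 0j : f(a*a) = 0 although a != 0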

In Part 39 we have seen that

{a: f(a*a)=0} = {a:f(a*b)=0 for all b},

which implies that the set {a: f(a*a)=0} is a linear subspace of A. Thus the "bad" vectors - the vectors of norm zero - form a linear subspace. We want all these vectors to become "zero" vectors. In linear algebra there is a standard way of performing such a task - take the quotient of a vector space by an unwanted subspace. So, we define the "unwanted" subspace

If = {a∈A: f(a*a)=0} = {a∈A: f(a*b) = 0, ∀ b∈A}.

We define Hf = A/If.

Which means that we introduce an equivalence relation in A: a~b if (a-b)∈If, and we define
Hf as the set of equivalence classes: [a] ≝ a+If. Then Hf automatically becomes a vector space: [a]+[b] ≝ [a+b], λ[a] ≝ [λa]. In particular [0] = If. One easily checks that these definitions are correct (if you have never done it before - do it, check it!). The whole subspace If becomes the zero vector of the quotient space. On Hf we now define the scalar product:

([a],[b])f ≝ f(a*b).

Notice that this is a good definition: if [a']=[a], [b']=[b], then a'=a+u, b'=b+v with u,v in If. Then

f(a'*b') = f((a+u)*(b+v)) = f(a*b) + f(a*v) + f(u*b) + f(u*v).

Now f(u*b) and f(u*v) are zero because u is in If. What to do with f(a*v)? We use the Hermitian property of f: f(a*v)=f((a*v)*)*=f(v*a)*, which is zero because v is in If. So f(a'*b')=f(a*b) - the product ([a],[b])f is well defined; it does not depend on the choice of representatives of the equivalence classes. It is a matter of writing a couple of lines to check that ([a],[b])f is linear in the second argument and anti-linear in the first - as it should be.
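
Numerically, the quotient Hf = A/If can be realized with a Gram matrix: pick a basis {b_i} of A, form G_ij = f(b_i*b_j), and identify If with the null space of G. Here is a minimal sketch continuing the same toy example (f(a) = a[0,0] on 2×2 matrices, assumed only for illustration):

    # Sketch: realizing H_f = A/I_f numerically for the toy example above.
    # Basis of A: the four matrix units E_00, E_01, E_10, E_11.
    import numpy as np

    basis = []
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2), dtype=complex)
            E[i, j] = 1.0
            basis.append(E)

    def f(a):
        return a[0, 0]

    # Gram matrix G_ij = f(b_i* b_j) = ([b_i],[b_j])_f ; it is positive semidefinite,
    # and its null space is exactly I_f written in coordinates.
    G = np.array([[f(bi.conj().T @ bj) for bj in basis] for bi in basis])
    eigenvalues = np.linalg.eigvalsh(G)
    dim_If = int(np.sum(eigenvalues < 1e-12))
    print("dim A   =", len(basis))            # 4
    print("dim I_f =", dim_If)                # 2 : matrices with vanishing first column
    print("dim H_f =", len(basis) - dim_If)   # 2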

In a Hilbert space we should have the property that the only vector orthogonal to all vectors is the zero vector. Do we have it here? Suppose ([a],[b])f=0 for all b. That means f(a*b)=0 for all b. That means a is in If. That means [a]=[0].

What about zero norm vectors? Suppose ([a],[a])f=0. That means f(a*a)=0. That means a∈If. That means [a]=[0] - the zero vector of Hf.

So far so good. We have constructed a Hilbert space (finite-dimensional Hilbert spaces are also called "unitary" spaces; the name "Hilbert" is usually reserved for the infinite-dimensional case). But there is more. I used the symbol If for a reason: If is a left ideal in A!

Indeed, suppose a∈If, and u∈A is arbitrary. Does it follow that ua∈If? We check:

f((ua)*b)= f((a*u*)b)= f(a*(u*b))=0.

So it works. If is a left ideal! In previous posts we used left ideals to construct a representation of A. Here we do something that looks like the complete opposite: we have a nice left ideal, and we are getting rid of it! What a shame! Yet there is method in this madness. The fact that If is a left ideal will now allow us to construct a *-representation of A on Hf. Let us see how it works; only after doing that will we be able to understand what is going on here.
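
A quick numerical sanity check of the left-ideal property, in the same toy example (f(a) = a[0,0], where If consists of the matrices with vanishing first column):

    # Check that I_f is stable under left multiplication, for the toy state f(a) = a[0,0].
    import numpy as np

    rng = np.random.default_rng(1)

    def f(a):
        return a[0, 0]

    a = np.array([[0, 2],
                  [0, 3]], dtype=complex)        # an element of I_f : f(a*a) = 0
    for _ in range(100):
        u = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        ua = u @ a
        assert abs(f(ua.conj().T @ ua)) < 1e-12  # u a stays in I_f
    print("u a is in I_f for 100 random u")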

So we define a representation by an almost evident formula:

ρf(a)[b]≝[ab].

Is it well defined? Is it a representation? Is it a *-representation? Let's check. Suppose [b]=[b']. Is it then true that [ab]=[ab']? If [b]=[b'] then b'=b+u, u∈If. Then ab'=ab+au. But  If  is a left ideal, therefore au∈If. Therefore [ab]=[ab'], and so ρf(a) is well defined. Checking that ρf(ab)=ρf(a)ρf(b) for all a,b in A is then a matter of using associativity - not a big deal. What about ρf(a*)=ρf(a)*? Here we need to use the scalar product of Hf.

Exercise 1. Do it.
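
Here is a sketch of how Exercise 1 can be checked numerically in the toy example used above (assumptions as before: numpy, A the 2×2 matrices, f(a) = a[0,0]): build an orthonormal basis of Hf from the eigenvectors of the Gram matrix with nonzero eigenvalue, compute the matrices of ρf(a) in that basis, and test multiplicativity and the *-property on random elements.

    # Numerical version of rho_f and of Exercise 1, for the toy example f(a) = a[0,0].
    import numpy as np

    basis = []
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2), dtype=complex)
            E[i, j] = 1.0
            basis.append(E)

    def f(a):
        return a[0, 0]

    # Orthonormal basis of H_f: eigenvectors of the Gram matrix with nonzero eigenvalue,
    # rescaled so that ([c_k],[c_l])_f = delta_kl.
    G = np.array([[f(bi.conj().T @ bj) for bj in basis] for bi in basis])
    lam, V = np.linalg.eigh(G)
    cs = [sum(V[i, k] * basis[i] for i in range(len(basis))) / np.sqrt(lam[k])
          for k in range(len(basis)) if lam[k] > 1e-12]

    def rho(a):
        # matrix elements ([c_k], rho_f(a) [c_l])_f = f(c_k* a c_l)
        return np.array([[f(ck.conj().T @ a @ cl) for cl in cs] for ck in cs])

    rng = np.random.default_rng(2)
    for _ in range(100):
        a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        assert np.allclose(rho(a @ b), rho(a) @ rho(b))        # rho_f(ab) = rho_f(a) rho_f(b)
        assert np.allclose(rho(a.conj().T), rho(a).conj().T)   # rho_f(a*) = rho_f(a)*
    print("rho_f is a *-representation (checked on 100 random pairs)")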

We have set the scene. It is time to introduce the main actor. Our algebra has a distinguished element - the unit 1. We set

Ωf ≝ [1].

It is a vector in Hf.

Let us calculate the norm of Ωf:

(Ωf, Ωf)f = ([1],[1])f = f(1*1) = f(1) = 1.

So Ωf is a unit vector. Moreover, Ωf is a cyclic vector for ρf. Let us verify it. Take any vector [a] in Hf. Then [a]=[a1]=ρf(a)[1]=ρf(a)Ωf. So every vector of Hf can be obtained by acting with a representation operator ρf(a) on Ωf.

But there is more.

To make it more transparent, in what follows we will skip the subscript f. We have

(Ω,ρ(a)Ω) = ([1],ρ(a)[1]) = ([1],[a]) = f(1*a) = f(a).

Thus the value of our positive functional f(a) is recovered as the expectation value of the representing operator ρ(a) in the (vector) state Ω. We ended up with a Hilbert space, a *-representation, and a distinguished cyclic vector that realizes the functional as a quantum-mechanical expectation value. A nice reward for the construction work.
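
To close the loop numerically in the toy example (same assumptions as the sketches above): the coordinates of Ωf = [1] in the orthonormal basis of Hf are f(c_k*); one can check that Ω is a unit vector, that the vectors ρ(a)Ω span Hf, and that (Ω,ρ(a)Ω) reproduces f(a).

    # The cyclic vector Omega = [1] and the recovery f(a) = (Omega, rho(a) Omega),
    # for the toy example f(a) = a[0,0] with the same conventions as above.
    import numpy as np

    basis = []
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2), dtype=complex)
            E[i, j] = 1.0
            basis.append(E)

    def f(a):
        return a[0, 0]

    G = np.array([[f(bi.conj().T @ bj) for bj in basis] for bi in basis])
    lam, V = np.linalg.eigh(G)
    cs = [sum(V[i, k] * basis[i] for i in range(len(basis))) / np.sqrt(lam[k])
          for k in range(len(basis)) if lam[k] > 1e-12]

    def rho(a):
        return np.array([[f(ck.conj().T @ a @ cl) for cl in cs] for ck in cs])

    one = np.eye(2, dtype=complex)
    Omega = np.array([f(ck.conj().T @ one) for ck in cs])    # coordinates ([c_k],[1])_f

    print("||Omega|| =", np.linalg.norm(Omega))              # 1.0 : a unit vector
    span = np.column_stack([rho(b) @ Omega for b in basis])  # vectors rho(a) Omega, a in a basis of A
    print("dim span{rho(a) Omega} =", np.linalg.matrix_rank(span), "= dim H_f =", len(cs))

    a = np.array([[1, 2j], [3, 4]], dtype=complex)           # any element of A
    print(np.allclose(Omega.conj() @ rho(a) @ Omega, f(a)))  # True : f(a) = (Omega, rho(a) Omega)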

So this is the Gelfand-Neumark-Segal construction in its finite-dimensional baby version.

There are still unanswered questions: How does this relate to our previous constructions with left ideals? How do we construct states? And can we cut off what we do not want? For instance, I do not want the positivity prison. What will happen if we do not use the positivity restriction? There are people who would like to cut even more. For instance, some do not like real numbers; they prefer finite fields. Others would go beyond finite fields to non-Archimedean fields, like p-adic numbers... Well, if you sculpt, you must be careful. If you cut too much, you may cut off the nose, and your sculpture will become dysfunctional. So, step by step, carefully.

We will come back to the hanging questions in the next post.

7 comments:

  1. Leaving typos to Bjab, as it's his/her specialty, I wonder what exactly is the "zero" vector?
    For example, any scalar (bi)quaternion would be a zero vector to all purely vector (bi)quaternions as scalar parts of their products would always be 0, i.e. they are always orthogonal, and the same would be true for any two objects in different dimensional spaces, like scalars and vectors, vector in e3 direction to planar vectors in e1-e2 plane, and so on. Also, our unit vector n is perpendicular to all vectors in T_n^C, could it be considered a zero vector to the space of states/vectors within T_n^C? In addition, a cross product of all collinear vectors results in the zero vector, and zero vector could also be said to be collinear with every other vector as their cross product is again always zero vector, in addition to being orthogonal to all other vectors as their scalar or dot product is also always 0.
    So, what exactly do we describe with a "zero" vector?

    1. I mean, in STR a light cone is basically defined as x^2=0, or the invariant mass of particles as the square of their 4-momentum is also 0 for a photon. So when you said that we want all bad vectors with their norm 0 to become zero vectors, it appeared like some important vectors might be thrown into the zero vector basket.

    2. To your last comment: in x^2=0 the negative part cancels the positive part. But in the post we do not have a negative part, only a positive one. In this case norm zero can be considered as not giving us anything useful.

    3. To your first question: it will become clearer with the next post. But think of it like this. Suppose we have two projections P and I-P. Then P(I-P) = 0. (I-P) is for P a "null vector". When we get rid of it, what remains is P, since PP=P. It will be something like that.

    4. OK, so I should obviously refrain from drawing parallels and analogies to STR, and also probably to other possibly indefinite metric spaces mentioned in the previous post. Thanks.

    5. Yeah, that P(I-P)=0 also crossed my mind, as in that comment several posts back it was shown that (1+n)/2 and (1-n)/2 are in fact idempotents producing orthogonal components as p and 1-p.

    6. "Orthogonal components" should be "orthogonal complements".
      Autocorrect on a mobile phone is sometimes more of a pain than a help.


