Wednesday, January 1, 2025

The Spin Chronicles Part 31: Irreducibility

Here is the beginning of the new post. I will be expanding it during the course of the day, and at the same time I will be open to feedback from my Readers, to see whether anything needs to be changed or adjusted.

Happy 2025! 

This post will be about some basic stuff. I have ordered "Basic Algebra, Vol. II" by Nathan Jacobson, and yesterday it came by mail. It was on the kitchen table. Laura looked inside, opened it at the chapter "Primary Decomposition", looked at the symbols there, and remarked: "It is not so `basic'". Well, the same will be true of this post. Basic, but not so basic.

We will be discussing "decompositions". The general setting is as follows:

- all spaces here are finite-dimensional and over the field of complex numbers

- we have an associative algebra A, with unit 1, and with an involution "*"

- we have a vector space E with a (positive definite) scalar product (u,v) (a finite-dimensional Hilbert space, if you wish). We assume (u,v) is linear with respect to v and anti-linear with respect to u.

- we have a *-representation, let us call it ρ, of A on E. Thus for each u in A we have a linear operator ρ(u) acting on E (thus a member of End(E)) such that ρ(1) is the identity operator and ρ(u*) = ρ(u)*, where * on the right-hand side is the Hermitian conjugate of the operator ρ(u) with respect to the scalar product: (X*x,y) = (x,Xy) for all x,y in E, all X in End(E).

So far we have studied the case with A = Cl(V), E = A, and ρ = L or ρ = R - the left and right regular representations of A acting on A itself. But the reasoning below is the same for the general case. And it is more transparent at the same time: there are fewer possibilities for confusion. In the text below I will state certain things as evident: "it is so and so ...". In a good linear algebra course each of these things is proven, or is left as an exercise - to be proved starting from the definitions. I will also provide reasonings that sketch the proofs of the less evident properties.
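To make this setting concrete, here is a minimal numerical sketch (my own illustration in NumPy, not part of the formal development): we realize A as Mat(2,C) acting on E = C², take * to be Hermitian conjugation, and check the defining property ρ(u*) = ρ(u)* of a *-representation.

    import numpy as np

    # A toy model of the setting: A = Mat(2, C), E = C^2, rho = the defining
    # representation, and the involution * = Hermitian conjugation.
    rng = np.random.default_rng(0)

    def star(u):
        return u.conj().T          # the involution on A in this realization

    def rho(u):
        return u                   # the defining representation on E = C^2

    u = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    assert np.allclose(rho(star(u)), rho(u).conj().T)   # rho(u*) = rho(u)*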

We are interested in invariant subspaces of E. By a subspace I will always mean a linear (or vector) subspace. "Invariant" means invariant under the representation ρ. Thus a subspace F ⊂ E is invariant if ρ(A)F ⊂ F or, explicitly,

ρ(u)x ∈ F for all u ∈ A, x ∈ F.

The representation is said to be irreducible if there are no invariant subspaces other than the zero subspace (consisting of the zero vector alone) and the whole space E. These two subspaces are always trivially invariant.

If F is a subspace, its orthogonal complement F⟂ is also a subspace, and together they span the whole space:

F ⊕ F⟂ = E.

Thus every u in E can be uniquely decomposed (orthogonal decomposition)

u = v + w,

where v is in F and w is in F⟂.

We define PF, the orthogonal projection on F, by

PF u = v in the above orthogonal decomposition. Then PF is a Hermitian idempotent, PF = PF* = PF², also called an "orthogonal projection", a "projection operator", or, simply, "a projector".

To the decomposition F ⊕ F⟂ = E, there corresponds the formula

PF + PF⟂ = I, or PF⟂ = I - PF,

where I stands for the identity operator.
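All of this is easy to verify numerically. A small sketch (the formula P = B(B*B)⁻¹B* for the projector onto the column span of a full-rank B is standard linear algebra; the concrete F below is my own pick):

    import numpy as np

    def projector(B):
        # orthogonal projector onto the column span of B: P = B (B*B)^{-1} B*
        return B @ np.linalg.inv(B.conj().T @ B) @ B.conj().T

    B = np.array([[1.0], [1.0j]])          # F = span{(1, i)} inside C^2
    P = projector(B)
    Q = np.eye(2) - P                      # the projector onto F-perp

    assert np.allclose(P, P.conj().T)      # PF = PF*   (Hermitian)
    assert np.allclose(P, P @ P)           # PF = PF^2  (idempotent)
    assert np.allclose(P + Q, np.eye(2))   # PF + PF-perp = I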

It is an easy exercise to show that F is invariant if and only if PF commutes with all ρ(u). If this happens then, evidently, also PF⟂ = I - PF commutes with all ρ(u), thus F is invariant if and only if F⟂ is invariant. We then have a decomposition of E into the direct sum of two invariant subspaces:

E = F ⊕ F⟂.

Note. In the case we have discussed in previous posts, we have E = A and, for ρ = L, ρ(u)x = ux. Therefore looking for a non-trivial invariant subspace is the same as looking for a non-trivial left ideal. For ρ = R, the right regular representation, looking for an invariant subspace is the same as looking for a right ideal.
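The equivalence "F invariant ⟺ PF commutes with all ρ(u)" from the exercise above can be checked numerically in exactly this setting. A hedged sketch (the row-major flattening of Mat(2,C) and all names below are my own choices): take E = Mat(2,C), ρ = L, and let F be the left ideal of matrices whose second column vanishes.

    import numpy as np

    # Flattening 2x2 matrices row-major as (X00, X01, X10, X11), left
    # multiplication becomes vec(u X) = (u kron I) vec(X).
    rng = np.random.default_rng(1)
    u = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    L_u = np.kron(u, np.eye(2))

    # F = {X with second column zero} is a left ideal; in the flattened
    # coordinates it is spanned by positions 0 and 2, so its projector is:
    P_F = np.diag([1.0, 0.0, 1.0, 0.0])

    assert np.allclose(P_F @ L_u, L_u @ P_F)   # [PF, L(u)] = 0 for this u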

Reducibility in terms of a basis.

We can always choose an orthonormal basis ei in E. If E is n-dimensional, then i = 1,2,...,n. If F is a subspace of E, with dim(F) = m, 0 < m < n, we can always choose the basis so that e1,...,em are in F, while em+1,...,en are in F⟂. Then e1,...,em form a basis in F, while em+1,...,en form a basis in F⟂. We call such a basis "adapted to the decomposition". So, let ei be such a basis, and assume that F is invariant. The vectors e1,...,em are in F, so, since F is invariant, for any u in A the vectors ρ(u)ei (i=1,...,m) are also in F. But e1,...,em form a basis in F. Therefore the ρ(u)ei are linear combinations of e1,...,em. We write this as

ρ(u)ei = ∑j=1m  ej μ(u)ji (i=1,...,m).

The m² complex coefficients μ(u)ji form a matrix representation of A by m⨯m square matrices. We have

μ(uw) = μ(u)μ(w).

Now, since F⟂ is also invariant, we have

ρ(u)ei = ∑j=m+1n ej ν(u)ji (i = m+1,...,n).

Similarly we have

ν(uw) = ν(u)ν(w).

Thus we have another matrix representation of A with, this time, (n-m)⨯(n-m) square matrices. 

Now, all n basis vectors e1,...,en form a basis for E. So we get a matrix representation of ρ(u); call its matrices simply ρ(u)ji:

ρ(u)ei = ∑j=1n ej ρ(u)ji (i = 1,...,n).

Now, in our adapted basis,  the n by n matrices ρ(u)ji are block diagonal: they are of the form

ρ(u) = {{μ(u),0},{0,ν(u)}}

with blocks of sizes m⨯m, m⨯(n-m), (n-m)⨯m, (n-m)⨯(n-m). That is the matrix view of reducibility: a representation is reducible if there is an orthonormal basis in E in which the matrices of the representation are simultaneously block diagonal in this way.
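Continuing the numerical sketch from the Note above (same hypothetical flattening of Mat(2,C), ρ = L): reordering the basis so that F comes first makes L(u) visibly block diagonal, and here both diagonal blocks turn out to be u itself.

    import numpy as np

    rng = np.random.default_rng(2)
    u = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    L_u = np.kron(u, np.eye(2))            # rho(u) for rho = L, as before

    perm = [0, 2, 1, 3]                    # adapted basis: F first, then F-perp
    L_ad = L_u[np.ix_(perm, perm)]

    mu, nu = L_ad[:2, :2], L_ad[2:, 2:]    # the diagonal blocks mu(u), nu(u)
    assert np.allclose(L_ad[:2, 2:], 0)    # off-diagonal blocks vanish
    assert np.allclose(L_ad[2:, :2], 0)
    assert np.allclose(mu, u) and np.allclose(nu, u)   # two copies of u

In particular, in this toy model the left regular representation of Mat(2,C) splits into two equivalent copies of its defining representation.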

Exercise 1. In the comments to the last post we have found a basis E1, E2 which spans a nontrivial left ideal of A. Is this basis orthonormal? Probably not, because we were not taking care of the normalization. But are the vectors E1, E2 orthogonal to each other in the scalar product of A? If so, there should be another pair of vectors, say E1', E2', orthogonal to E1, E2 and to each other, that span the orthogonal complement of our left ideal. It should also be an invariant subspace, thus a complementary left ideal. Can we find a simple form of E1', E2'? Can we find ν? It is enough to find ν(ei), where ei is the basis of A.

It is now time to formulate a version of Schur's famous Lemma, adapted to our needs.

Schur's Lemma.

Given the *-representation ρ of A on E, ρ is irreducible if and only if the only linear operators acting on E and commuting with all ρ(u) are multiples of the identity operator.

In other words, ρ is irreducible if and only if the commutant ρ(A)' is CI, that is, if and only if the commutant is trivial. It is reducible if and only if the commutant is non-trivial.

In the proof below we use the standard results about eigenvalues and eigenspaces of Hermitian operators (matrices, if you wish).

Proof. One way is simple. Suppose ρ is reducible; then there is a nontrivial invariant subspace F, and the projection PF is non-trivial (different from 0 and I). But since F is invariant, PF is in the commutant. Now the harder part: show that if the commutant is non-trivial, then the representation is reducible. So suppose ρ(A)' is non-trivial. Then there is an operator X commuting with all ρ(u), and X is not of the form cI, c being a complex number. Now, since ρ is a *-representation, it follows that X* also commutes with all ρ(u).

Exercise 2. Prove the last statement.

But then X = (X+X*)/2 + (X-X*)/2. The first term commutes with all ρ(u), and so does the second term, and also the second term multiplied by the imaginary unit i. Both X+X* and i(X-X*) are Hermitian. If X = (X+X*)/2 + (X-X*)/2 is not of the form cI, then at least one of X+X* and i(X-X*), call it Y, is not of the form cI. Now we have Y = Y*, Y commutes with all ρ(u), and Y is not of the form cI. Since Y is Hermitian, it has eigenvalues and eigenspaces. Since Y is not a multiple of the identity, at least one of its eigenspaces must be nontrivial (different from 0 and E). Call it F. Since Y commutes with all ρ(u), its eigenspaces are invariant. Thus F is a nontrivial invariant subspace, and ρ is reducible.
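Here is a numerical illustration of the harder direction (the reducible representation u ↦ u⊗I on C⁴ is my own choice of example): pick a non-scalar X in the commutant, form the Hermitian Y, and read off a nontrivial invariant eigenspace.

    import numpy as np

    rng = np.random.default_rng(4)
    B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    X = np.kron(np.eye(2), B)        # commutes with every u kron I; non-scalar

    Y = X + X.conj().T               # Hermitian, still in the commutant
    vals, vecs = np.linalg.eigh(Y)   # orthonormal eigenvectors
    F = vecs[:, np.isclose(vals, vals[0])]   # columns span one eigenspace of Y

    u = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho_u = np.kron(u, np.eye(2))
    P_F = F @ F.conj().T             # projector onto that eigenspace
    # the eigenspace is invariant: the part of rho(u)F outside F vanishes
    assert np.allclose((np.eye(4) - P_F) @ rho_u @ F, 0)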

In practice

In practice we often choose an orthonormal basis in E, and we work with matrices. Then ρ is irreducible if and only if the only matrices commuting with all the ρ(u) matrices are multiples of the identity matrix. But A is assumed to be finite-dimensional. Thus there is a basis, say εr in A, r = 1,2,...,k, where k is the dimension of the algebra A. Every element u of A can be written as u = ∑r=1k cr εr, where the cr are complex numbers. For a matrix M to commute with all ρ(u) matrices, it is enough that M commutes with all ρ(εr), r = 1,...,k. Thus ρ is irreducible if and only if [M,ρ(εr)] = 0, r = 1,...,k, implies that M is a multiple of the identity matrix.
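This criterion is straightforward to implement. A sketch (the helper commutant_dim is my own name): stack the linear maps M ↦ [ρ(εr), M] and compute the dimension of their common null space; commutant dimension 1 then means irreducible.

    import numpy as np

    def commutant_dim(gens):
        # vec([X, M]) = (X kron I - I kron X^T) vec(M), with row-major vec
        n = gens[0].shape[0]
        I = np.eye(n)
        M = np.vstack([np.kron(X, I) - np.kron(I, X.T) for X in gens])
        return n * n - np.linalg.matrix_rank(M)

    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]])
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    print(commutant_dim([s1, s2, s3]))   # 1: the Pauli matrices act irreducibly
    print(commutant_dim([np.kron(s, np.eye(2)) for s in (s1, s2, s3)]))  # 4: reducible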

Why do we care?

Why do we care about reducibility or irreducibility? Suppose ρ is reducible. Then we have invariant subspaces F and F⟂, and ρ acts on F. We may ask if ρ acting on F is still reducible, and the same for ρ acting on F⟂. We proceed this way until we end up with irreducible representations. In this way we get

E=F1⊕...⊕Fp,

where each of F1,...,Fp carries an irreducible representation. These are the building blocks of ρ, the "bricks", or "atoms". They can't be decomposed any further. Both physicists and mathematicians want to know these "atoms". And if a representation is reducible, there must be some "reason" for it. Perhaps it is a reducible representation of A, but irreducible for some bigger algebra B? Then what would be the "meaning" of this B? On the other hand, atoms are built of nucleons and electrons. If ρ is an irreducible representation of A, perhaps it is reducible when restricted to a smaller algebra C? What would be the meaning of such a C?

There is another thing: suppose we have two irreducible representations of A, one on F1 and one on F2. Are they "essentially the same", or "essentially different"? Two representations are essentially the same (we say they are "equivalent") if it is possible to choose an orthonormal basis in F1 and an orthonormal basis in F2 in such a way that the matrices of both representations are exactly the same. Of course it is enough that the matrices representing the εr are the same.
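Equivalence, too, can be tested mechanically. In this sketch (intertwiner_space_dim is my own helper) we count the solutions S of ρ1(εr)S = Sρ2(εr); for irreducible *-representations Schur's Lemma gives dimension 1 exactly when they are equivalent, and 0 otherwise.

    import numpy as np

    def intertwiner_space_dim(gens1, gens2):
        # vec(A S - S B) = (A kron I - I kron B^T) vec(S), row-major vec
        n = gens1[0].shape[0]
        I = np.eye(n)
        M = np.vstack([np.kron(A, I) - np.kron(I, B.T)
                       for A, B in zip(gens1, gens2)])
        return n * n - np.linalg.matrix_rank(M)

    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]])
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)
    pauli = [s1, s2, s3]

    rng = np.random.default_rng(3)
    U = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]
    rotated = [U @ s @ U.conj().T for s in pauli]   # same rep in another basis

    print(intertwiner_space_dim(pauli, rotated))    # 1: they are equivalent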

How to discover the atoms?

We have so far followed a geometrical way and looked for particular left ideals of our geometric algebra A. But once we know that it is a *-algebra, there is another way, more related to concepts of quantum theory such as "states": the celebrated GNS (Gelfand-Naimark-Segal) construction of irreducible representations. This we will discuss later on, and we will relate it to our adventure with ideals.

P.S. 02-01-25 12:56 In her comment 11:02 AM, Anna wrote

"(1) "It is an easy exercise to show that F is invariant if and only if PF commutes with all ρ(u)."
It is not so easy, for me anyhow. I console myself that it should not be easy since it is the key point to the Schur's lemma."

As it is an essential point, here is my reasoning. It depends on the fact that if F is invariant, then F⟂ is also invariant. This should be seen first. Suppose we accept it; if not, it is left as a separate exercise (it requires playing with the scalar product and *).

One way. Assume PFρ(u) = ρ(u)PF for all u in A. Then, for x in F, we have

ρ(u)x = ρ(u)PFx (because x is in F) = PFρ(u)x (because of the assumed commutation), which is in F, because PF projects every vector onto F.

So, one way is easy. Now the other way. Suppose F is invariant. Then F⟂ is also invariant. We want to show that then

PFρ(u) = ρ(u)PF.

This is an equality of two linear operators. Since E = F ⊕ F⟂, it is enough to show that it holds separately on vectors in F and on vectors in F⟂. Suppose x is in F; then PFx = x, and the right-hand side is ρ(u)x.

But since F is invariant, ρ(u)x is also in F, therefore the left-hand side becomes
PFρ(u)x = ρ(u)x. The left-hand side and the right-hand side are thus equal. Now suppose x is in F⟂. Then the right-hand side is 0. On the left we have PFρ(u)x. Since x is in F⟂, ρ(u)x is also in F⟂. But then PFρ(u)x = 0. Again the left and right-hand sides are equal.

So it all depends crucially on the exercise mentioned above.

P.S. 02-01-25 12:56 In her comment 11:02 AM, Anna wrote:

(3) In attempt to find orthonormal projector PF⟂ i've got the following set of matrices complementary to the Pauli matrices:
{1/2(1, 1, 1, 1)}, {1/2(1, i, -i, 1)}, {1/2(0, 0, 0, 2)}
I checked that they are projectors, but how to extract the basis in explicit form is not clear yet.

Let us analyze the problem at hand. We have the algebra A, and we decide to work with its faithful representation as 2 by 2 complex matrices. So a general element of our algebra is now represented by a complex linear combination of the three Pauli matrices and the identity matrix. So, in our minds, we identify our algebra with the algebra of 2x2 complex matrices. We need to know how the * operation (the anti-automorphism τ) is implemented in this realization, and we know it is just the Hermitian conjugation of a matrix. We need to know how the scalar product (u,v) is implemented in this realization, and we know it is 1/2 of the trace of the product u*v.

We have chosen p to be 1+e3, which in our realization becomes

E1=I+σ3.

Then Saša came up with his E2 = e1 - ie2, which becomes

E2 = σ1 - iσ2.

E1 and E2 span our F. We are now looking for F⟂, another subspace of Mat(2,C), orthogonal to F. A general element of F⟂ will be a matrix X = {{a,b},{c,d}} with four complex entries. We now impose the condition that this matrix should be orthogonal to E1 and to E2. So we will have two conditions:

Tr(E1*X) = 0 and Tr(E2*X) = 0. These two conditions will enable us to eliminate two of the four unknowns a,b,c,d. If we do it cleverly, the basis in F⟂ should be easy to guess.

Or we can try to be brave and just guess: if we simply change signs and set

E3 = I - σ3, E4 = σ1 + iσ2,

will E3 and E4 then span our F⟂?
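Before guessing further, here is a quick numerical check of exactly this (a NumPy sketch using the scalar product (u,v) = (1/2)Tr(u*v) introduced above; the helper name sp is mine):

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]])
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    E1, E2 = I2 + s3, s1 - 1j * s2     # span F: only the first column non-zero
    E3, E4 = I2 - s3, s1 + 1j * s2     # the guess for F-perp

    def sp(u, v):
        return 0.5 * np.trace(u.conj().T @ v)   # (u, v) = (1/2) Tr(u* v)

    # E3, E4 are orthogonal to E1, E2 (and to each other)
    for X in (E1, E2):
        for Y in (E3, E4):
            assert abs(sp(X, Y)) < 1e-12
    assert abs(sp(E3, E4)) < 1e-12

    # and their span is again a left ideal: u X keeps the first column zero
    rng = np.random.default_rng(5)
    u = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    X = 0.7 * E3 + 0.2j * E4
    assert np.allclose((u @ X)[:, 0], 0)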


68 comments:

  1. "Laura looked inside, opened it at the chapter "Primary Decomposition", looked at the symbols there, and noticed: "It so not so `basic'". Well the same will be with this post. Basic but not so basic."

    It's definitely not basic compared to the Linear Algebra courses taught to physicists at the universities I know about. The Clifford algebra is not even mentioned there, and I am not sure it is taught even in higher-level selective courses on Group Theory or Symmetries. FWIW.

    So, a huge thank you for this present you've been giving us here on your blog. <3

  2. As an electrical engineer for whom Computational Linear Algebra was just simple matrix math, I totally aligned with the "calculus for dummies" comment being for a commenter. These exercises are all way over my education. You kind of learn to have a big picture in which to fit a lot of not well understood in detail details. For something like left ideal, you can have a really simple example like this from the Tony Smith website I kind of grew up on physics-wise:

    (1 + i + j + k)
    (1 + i - j - k)
    (1 - i + j - k)
    (1 - i - j + k) ,
    which again constitute a (minimal) left ideal of the algebra (meaning that applying i, j, or k from the left on any linear combination of these four states gives another linear combination of these four states). Hence, now i,j,k are considered as "generators" of kinks in three spatial dimensions...

    Replies
    1. Thanks, John. This is not a minimal left ideal. It is a trivial left ideal, because these four kinks span the whole four-dimensional space of quaternions. It is just a different basis than the standard 1,i,j,k basis. If you were able to find fewer than 4 linearly independent quaternions with the property you mentioned, that would be a non-trivial left ideal. In your example, moreover, it does not matter whether you multiply from the left or from the right. So you have a trivial two-sided ideal (equal to the whole space).

    2. For the record that was actually Tony quoting Urs Schreiber talking about Tony's 4-dim Feynman Checkerboard. Urs has a short Wikipedia article so much more than me did not have to be that sloppy. I actually even noticed it would work from the right too but it was more the basic definition I'd be interested in. Trivial vs. minimal is not usually the type of thing I get into. I think I've confused trivial and U(1) centers too.

  3. It so not so `basic' ->
    ?

  4. where * on the right hand side is the Hermitian conjugate ->
    or only complex conjugate

  5. Happy 2025! picture.

    Previous New Year's Eves were not joyful for this girl - she was careless while playing with fireworks and her fingers got blown off.

    Replies
    1. You forgot that Ark's happy with surrealism. ;)

  6. or is a left ->
    or is left

  7. while em1,...,en form ->
    while em+1,...,en form

  8. representation of A by m times square matrices ->
    ?

  9. i=m+1.,,,,n ->
    i=m+1,...,n

  10. ν(uv) = μ(u)μ(v). ->
    ?

  11. commutant ρ(A)' is CI. >
    commutant ρ(A)' is cI.

  12. That is if an only ->
    That is if and only

  13. An if a representation ->
    And if a representation

  14. choose aome ->
    choose some

    Replies
    1. @Bjab Mostly fixed. Thank you. I did not change "Hermitian" conjugate - it should mean just that, not "complex conjugate". And I did not change CI to cI: CI means the set of complex numbers multiplied by the identity operator. It should mean just that.
      As for 2024: 2025 is not going to be any better for the girl. I did an I Ching prediction, and it confirms my intuitions (which is no surprise).

    2. How bad can we expect, what do your intuitions say?

    3. Personally it will be OK. But on a global scale:

      "... If armies are set marching in this way,
      One will in the end suffer a great defeat,
      Disastrous for the ruler of the country.
      For ten years
      It will not be possible to attack again."

    4. Interestingly enough, I am actually watching, before bed, a detective movie. The police department has a seer woman (her name is Katya). She can "see" the unseen. She has seen a murder during a future wedding ceremony, and the big question is: can that be prevented?

    5. Thank you for sharing, glad to hear that we can expect many more of your posts here.
      Regarding the global scale, well, thanks to overwhelming propaganda there would probably be a split in interpretation which ruler and country would that be, Deep State or the bear, but as the bear is in fact not alone in this and not a singular country, my interpretation leans more towards the Deep State. But, as always, "wait and see" will show us the result in the end.

    6. Interesting and intriguing movie plot!
      Hopefully, after some more hanging around in the company of Cl(V), maybe we might start to see how such phenomena of seeing the unseen might be incorporated into a bit more formal mathematical descriptions of our everyday world affairs.

    7. There is also an "if" in "If armies are set marching in this way". So, indeed, wait and see.

    8. "how such phenomena of seeing the unseen might be incorporated into a bit more formal mathematical descriptions"
      This has been already outlined in my paper "The theory of Kairons"
      https://www.semanticscholar.org/paper/The-Theory-of-Kairons-Jadczyk/7c7ab1ce148f705e7daed96f4a26d315e9a84d05

    9. Is that the same as this one:
      https://arxiv.org/pdf/0711.4716 ?
      When I first encountered it, it was way too unintelligible for me to understand any of the things and symbols there. Hopefully, with your 'lecture' here, it might be a bit more understandable now. Thanks!

    10. That's it. But it is even beyond my own comprehension - too difficult for me to understand. It needs to be simplified first.

  15. "Did not change "Hermitian" conjugate"

    I'm lost. You wrote ρ(u*)=ρ(u)*

    If ρ(u)=
    {{u0, u1, u2, u3},
    {u1, u0, -iu3, iu2},
    {u2, iu3, u0, -iu1},
    {u3, -iu2, iu1, u0}}
    then
    ρ(u*)=
    {{u0, u1, u2, u3},
    {u1, u0, iu3, -iu2},
    {u2, -iu3, u0, iu1},
    {u3, iu2, -iu1, u0}}
    but Hermitian conjugate ρ(u)*=ρ(u)=
    {{u0, u1, u2, u3},
    {u1, u0, -iu3, iu2},
    {u2, iu3, u0, -iu1},
    {u3, -iu2, iu1, u0}}
    so
    ρ(u*) not equals ρ(u)*

    Replies
    1. Good. So we may have a problem here. I have to think about it.

    2. @Bjab
      But u* = τ(u).
      I do not see you using it in going from
      ρ(u)=
      {{u0, u1, u2, u3}
      to
      ρ(u*)=
      {{u0, u1, u2, u3},

    3. "I do not see you using ..."
      Now I don't see either.
      Thank you!

    4. "I do not see you using ..."

      Now I don't see it either.
      Thank you!

    5. If X=(X+X*)/2 +(X-X*)/2 are not of the form cI, then at least one of them, ->
      is not? of them?

    6. "I do not see you using it in going from ρ(u)="

      Arku, I don't know if you acknowledged my previous comment (because I don't see it) in which I thanked you for making me realize where I made a mistake.

    7. If X=(X+X*)/2 +(X-X*)/2 are not ->
      If X=(X+X*)/2 +(X-X*)/2 is not

    8. Now you see it. I am quantumly unpredictable, as it seems. That is partly owing to the fact that today my big 12'' speaker arrived, which I have connected to my Yamaha music machine, and I am oscillating between my desk and trying out the full organ and sax sounds.

    9. "(because I don't see it)"
      Well, I see it now.

    10. 12" = 30cm

      A late Christmas gift?

    11. Indeed. And it sounds even better than I expected.

  16. (1) "It is an easy exercise to show that F is invariant if and only if PF commutes with all ρ(u)."
    It is not so easy, for me anyhow. I console myself that it should not be easy since it is the key point to the Schur's lemma. Let me take as a simplest illustrative example rotations of R3 and projection onto a 2d plane within it. Clearly, projection made after rotation yields the same result as rotation made after projection if and only if we project onto the plane normal to the rotation axis. But i don't know how to show this formally.
    (2) Now i understand still better Wigner's genius idea that elementary particles are not stable pieces of matter, but invariant structures, invariant with respect to the Lorentz transforms, which are believed to be the motions of our space of the special relativity theory.
    (3) In attempt to find orthonormal projector PF⟂ i've got the following set of matrices complementary to the Pauli matrices:
    {1/2(1, 1, 1, 1)}, {1/2(1, i, -i, 1)}, {1/2(0, 0, 0, 2)}
    I checked that they are projectors, but how to extract the basis in explicit form is not clear yet.

    Replies
    1. Ad (1) Good that you have asked. I will expand on this issue.

    2. Ad (2) Wigner was playing with unitary representations of groups. We are playing with *-representations of algebras. But the reasoning is similar in both cases. If we have a *-algebra, we have the group of its unitary elements, and if we have a group, we can build an "enveloping *-algebra". That is why the methods are so similar in both cases.

  17. To P.S. 02-01-25 12:56
    "E3=I-σ3, E4=σ1+iσ2 will then E3 and E4 span our F⟂?"
    In explicit form E3 = {0, 0, 0, 2} and E4 = {0, 2, 0, 0}.
    They are orthogonal to each other and to E1 and E2, respectively.
    Is this sufficient to recognize them as the basis of F⟂?

    Replies
    1. This is very good. We are getting there! Now we see that it would be more elegant to define E1,E2,E3,E4 taking 1/2 of their original expressions. Let's do it.
      Now write in matrix form (as a 2x2 matrix) a general matrix in F. This would be aE1+bE2.
      Write in a matrix form a general matrix in F⟂. This would be aE3+bE4.
      Look at the form of these matrices. How would you characterize matrices from F in one word?
      How would you characterize matrices from F⟂ in one word?
      Check by matrix multiplication that matrices so characterized indeed form a left ideal.

    2. Yees! Everything is cool:
      The basis for F is
      E1 = {1,0,0,0} E2 = {0,0,1,0}
      General element {a,0,b,0} it is the form of a right ideal
      The basis for F⟂ is
      E3 = {0,0,0,1} E4 = {0,1,0,0}
      General element {0,c,0,d} it is the form of a left ideal
      (earnestly checked by matrix multiplication)
      Thank you for the most detailed guidance!

    3. But I was thinking that they form two complementary LEFT ideals.... Can you check again?

    4. As usual, what is 'evident' plays an evil joke on me. You are quite right: they are both ideals of the same chirality. Further, i thought that if we multiply an arbitrary A by X = {x,0,y,0} from the right, then X is in a right ideal, but it seems to be the other way around.
      Thank you once again for clarifying everything down to the ground...

    5. The following occurred to me: So, we have that matrices with only first column non-zero form a left ideal. While matrices with only the second column non-zero form the complementary left ideal. So, perhaps, matrices with only first row non-zero form a right ideal, while matrices with only the second column non-zero form a complementary right ideal?

    6. No, i won't get into this trap. One should transpose and take a matrix with the zero first or second ROW. They will be the two complementary right ideals.

  18. @Ark, thank you very much for the P.S. 02-01-25 12:56.
    I followed the proof, but need to think it over.
    As regards the phrase: "So it all depends crucially on the exercise mentioned above" (i.e., to prove that if F is invariant, also F⟂ is invariant), you said in the basic part of the post that it is evident:
    "If this happens (F is invariant), then, evidently, also PF⟂= I-PF commutes with all ρ(u), thus F is invariant if and only if F⟂is invariant". It is really evident, if there is no some catch there...)

    Replies
    1. The part "to prove that if F is invariant, also F⟂ is invariant" is not THAT evident. It requires writing some little scalar products and some little reasoning if we want to prove it from definitions.

    2. Well, may be the following considerations will suit.
      Given: F is an invariant subspace of a vector space E with respect to an operator A, want to prove that its orthogonal complement F⟂ is also an invariant subspace of E.
      Let us take an arbitrary x from F and an arbitrary y from F⟂. Then,
      0 = (Ax, y) = |F is invariant| = (x, A*y) =>
      A*y is orthogonal to all x, i.e., lies in F⟂ for any y from F⟂,
      which is the definition of invariancy of F⟂ with respect to A*.
      For a self-conjugated operator A=A*, it is proved.
      Projector is self-conjugated PF=PF*, ok,
      but for A=ρ(u), we have A* = ρ*(u) = ρ(u*) and i am a bit confused with * at (u*), won't it spoil the matter?

    3. I was thinking of doing it using scalar product. Something like this:
      Assume F is invariant. That means ρ(u)x is in F for all u in A and x in F. We want to show that F⟂ is also invariant, that is that
      ρ(v)y is in F⟂ for all v in A and y in F⟂ .

      So, let us take y in F⟂ and v in A. We want to show that ρ(v)y is in F⟂. That is, we want to show that the scalar product
      (x,ρ(v)y) = 0 for all x in F. But
      (x,ρ(v)y) = (ρ(v)*x,y) = (ρ(v*)x,y), and u = v* is also in A. Thus ρ(v*)x is in F, and, since y is orthogonal to F, we get zero.

      Is this acceptable?

    4. This is just what i wanted to say, but expressed in normal mathematical language. Now my problem with '*' is seen even better. Why do we take u = v*?

    5. It was not necessary. I wrote it to connect with "That means ρ(u)x is in F". The point is that A is closed under the * operation. If something is true for all u in A, then it is in particular true for our v*, since it is an element of A.

    6. Now it is all clarified. Interestingly, first i tried to construct the proof similarly to how you did it when you explained Exercise 1. And i almost got the result, but at the last moment i noticed that on the left-hand side i consider an element x from F and on the right-hand side i have to consider an element from F⟂, hence they cannot be the same element, and that proof was abandoned.
      Subspaces F and F⟂ have nothing in common, and that is why all the following statements and theorems are proved for each of them separately, but the assertion relating them both goes first; it is indeed the most primary and important.

  19. with dim(F)=m < n ->
    with dim(F)=m (where 0 < m < n)

    m times m ->
    m by m

  20. (shitty html)
    with dim(F)=m (less than) n ->
    with dim(F)=m (where 0 less than m less than n)

    m times m ->
    m by m

  21. Too many ns in the fixing.

