The Spin Chronicles Part 30:
Solutions to exercises of Part 30
This post is all written by Saša. Saša agreed to present the results of our common work (Saša, Anna, Bjab). So here they are.
For easier understanding of these exercises and their solutions, let us recall what was said in Part 30 about the self-adjoint (Hermitian) idempotents: p = p* = pp = p².
We write a general element of A as p = (p0, p) (or p = (p, p4) as in Part 28), where p0 is a complex scalar and p is a complex vector. Then p* = (p0*, p*), where * inside the parentheses stands for complex conjugation. The condition p* = p means that p0* = p0 and p* = p; in other words, p0 and p must be real.
Now we recall a general multiplication formula for A:
for p=(p0,p) and q=(q0,q),
pq = (p0q0 + p·q, p0q + q0p + i p⨯q). In particular for pp we get pp = (p0² + p·p, 2p0p), since p⨯p = 0.
Thus pp = p implies p0² + p·p = p0 and 2p0p = p. Then either p=0 or not. If p=0, then p0² = p0 and we have two solutions, p0=0 or p0=1. They correspond to trivial Hermitian idempotents p=0 and p=1. On the other hand, if p is not a zero vector, then from the second equation we get p0 = 1/2. Substituting this value of p0 into the first equation we get,
1/4 + p·p = 1/2 or p·p = 1/4.
We deduce that p = 1/2 n, where n is a unit vector in V. Therefore a general form of a nontrivial Hermitian idempotent is:
p = (1+n)/2 = (1/2,1/2 n).
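For readers who like to cross-check such derivations numerically, here is a minimal sketch (assuming Python with numpy; the helper name mul is ours, not anything from the post itself) implementing the multiplication formula of A and verifying that p = (1+n)/2 is indeed idempotent:

```python
# Minimal sketch, assuming numpy; elements of A are modeled as (scalar, 3-vector)
# pairs multiplied by pq = (p0 q0 + p.q, p0 q + q0 p + i p x q).
import numpy as np

def mul(p, q):
    p0, pv = p
    q0, qv = q
    return (p0*q0 + pv @ qv, p0*qv + q0*pv + 1j*np.cross(pv, qv))

n = np.array([0.0, 0.6, 0.8])              # any unit vector in V
p = (0.5, 0.5*n)                           # p = (1 + n)/2 = (1/2, 1/2 n)
pp = mul(p, p)
print(np.isclose(pp[0], p[0]), np.allclose(pp[1], p[1]))   # True True, i.e. pp = p
```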
This nontrivial Hermitian idempotent p = (1/2, 1/2 n) will be a crucial element for determining the left and right ideals that these exercises in Part 30 were largely about, as Ark mentioned in his comments to Bjab:
A linear subspace S of A is called a left ideal of A if for s in S and any u in A we have that us is in S.
In other words S is invariant under the left action of A. 0 and A are two trivial left ideals.
If there is a non-trivial left ideal, it means that the representation L is reducible (from the definition of reducibility).
and, when asked how determining the solutions of up=u provides a left ideal,
Let Ip = {s in A: sp=s}.
This is our set. I used the symbol s instead of u. But it is the same set, right?
Take any u in A. Suppose s is in Ip. Is then us also in Ip?
Well, if sp = s, then usp = us, or better (us)p = us.
Therefore us is also in Ip.
So, we are now equipped to start tackling those exercises; if anything else turns out to need explaining or defining, it will be dealt with within the solution of the corresponding exercise.
Exercise 1.
Let n be a unit vector in V. Let Tn denote the space tangent to the unit sphere at n. Thus Tn can be identified with the subspace of V consisting of all vectors of V perpendicular to n.
Let J be defined as a linear operator on Tn, defined by:
Ju = u⨯n.
Show that J is well defined and that it defines a complex structure on Tn (i.e. J² = -1).
Show that J is an isometry, that is <Ju,Jv> = <u,v> for all u and v in Tn.
Solution:
From Wikipedia, "In mathematics, a well-defined expression or unambiguous expression is an expression whose definition assigns it a unique interpretation or value."
For n unit vector in V and u a vector in Tn,
Ju = u⨯n = |u| |n| sinφ(u,n) n⊥(u,n) = |u| n⊥(u,n) = u⊥,
where we used the standard definition of the cross or vector product "⨯", the fact that |n| = 1, and that sinφ(u,n) = 1, because the angle φ(u,n) between u in Tn and the unit vector n is π/2 (90°). The resulting vector u⊥ has the magnitude |u| and a unique direction given by the unit vector n⊥(u,n) perpendicular to both u and n, thus also in Tn, determined by the right-hand rule for the cross product.
Therefore J is well defined.
J also defines a complex structure on Tn, i.e. J² = -1:
J²u = J(Ju) = Ju⊥ = (u⨯n)⨯n = (n·u)n - (n·n)u = 0n - 1u = -u,
where in triple product expansion, (a⨯b)⨯c = (c·a)b - (c·b)a, the "·" stands for dot or scalar product which gives 0 for perpendicular vectors.
Regarding the isometry, that is <Ju,Jv> = <u,v>, we recall from Part 28 that our Hilbert space scalar product <u,v> for u=(u0,u) and v=(v0,v) in A was defined as the scalar part of the product (u0*,u*)(v0,v), that is <u,v> = u0* v0 + u*·v, which for u and v in Tn turns into the simple dot product u·v.
<Ju,Jv> = <u⨯n,v⨯n> = <u⊥,v⊥> = u⊥·v⊥ = |u⊥| |v⊥| cosφ(u⊥,v⊥) = |u| |v| cosφ(u,v) = u·v = <u,v>,
where the angle φ(u⊥,v⊥) between u⊥ and v⊥ is identical to the angle φ(u,v) between u and v, because both u⊥ and v⊥ are rotated by the same 90° in the same direction relative to u and v, respectively.
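Both claims of Exercise 1 can also be checked numerically; a small sketch (again assuming numpy, with the unit vector and test vectors chosen just for illustration):

```python
# Sketch, assuming numpy: J u = u x n satisfies J^2 = -1 on T_n and <Ju,Jv> = <u,v>.
import numpy as np

rng = np.random.default_rng(0)
n = np.array([0.0, 0.0, 1.0])                        # unit vector
project = lambda w: w - (w @ n)*n                    # projection onto T_n
J = lambda u: np.cross(u, n)

u = project(rng.normal(size=3))
v = project(rng.normal(size=3))
print(np.allclose(J(J(u)), -u))                      # J^2 u = -u
print(np.isclose(J(u) @ J(v), u @ v))                # isometry: <Ju,Jv> = <u,v>
```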
Exercise 2.
Read Wikipedia article "Linear complex structure", section "Relation to complexifications".
Take J from Exercise 1. and extend it by linearity to TnC = Tn + iTn.
Find a solution of up=u (see the discussion under the previous post Part 29) for p=(1+n)/2 within TnC. Express it as an eigenvector of the operator iJ - to which eigenvalue?
Solution:
In the Wikipedia article, we read that,
"If J is a complex structure on V, we may extend J by linearity to VC:
J(v⊗z) = J(v)⊗z.",
which in the case of a complex vector u and our J from Exercise 1. means simply,
Ju = u⨯n,
where n is a unit vector in V and u a complex vector in TnC.
Then we proceed with finding a solution of up=u, for p=(1+n)/2=(1/2,1/2 n), where general complex u=(u0,u) is within TnC.
Using the general multiplication formula, for up=u we get:
up = (u0,u) (1/2,1/2 n) = u = (u0,u),
(1/2 u0 + 1/2 u·n, 1/2 u0n + 1/2 u + i/2 u⨯n) = (u0,u),
where equating the left-hand side with the right-hand one gives for the scalar part u0 = u·n, while for the vector part u we get the condition:
u = u0n + i u⨯n.
Since we are looking for a solution within TnC, that is, for those u = uRe + i uIm that are perpendicular to n, we see that the scalar part u0 = u·n = 0, which then gives the solution:
u = (u0,u) = (0,i u⨯n),
that is, a complex vector u in TnC with u = i u⨯n, which, substituting the expression for J, becomes u = iJu; that is, u is an eigenvector of iJ with eigenvalue +1.
Cross-check:
up = (0,i u⨯n) (1/2,1/2 n) = u = (0,i u⨯n)
(i/2 (u⨯n)·n, i/2 u⨯n + i i/2 (u⨯n)⨯n) = (0,i u⨯n)
(0, i/2 u⨯n - 1/2 ((n·u)n - (n·n)u)) = (0,i u⨯n)
(0, i/2 u⨯n - 1/2 (-u)) = (0,i u⨯n)
(0, i/2 u⨯n + i/2 u⨯n) = (0,i u⨯n),   (using u = i u⨯n in the second term)
(0,i u⨯n) = (0,i u⨯n).
So our solution, u = iJu = i u⨯n, really is the solution of up=u within TnC.
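A quick numerical cross-check of the above (a sketch assuming numpy; the real vector a perpendicular to n is an arbitrary illustrative choice): taking w = a + i a⨯n gives w = i w⨯n, and u = (0, w) indeed satisfies up = u.

```python
# Sketch, assuming numpy: w = a + i a x n is an eigenvector of iJ with eigenvalue +1,
# and u = (0, w) solves u p = u for p = (1+n)/2.
import numpy as np

def mul(p, q):
    p0, pv = p
    q0, qv = q
    return (p0*q0 + pv @ qv, p0*qv + q0*pv + 1j*np.cross(pv, qv))

n = np.array([0.0, 0.0, 1.0])
a = np.array([1.0, 2.0, 0.0])                  # arbitrary real vector perpendicular to n
w = a + 1j*np.cross(a, n)
p = (0.5, 0.5*n)
u = (0.0, w)

print(np.allclose(w, 1j*np.cross(w, n)))                    # w = i J w
up = mul(u, p)
print(np.isclose(up[0], u[0]), np.allclose(up[1], u[1]))    # u p = u
```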
Exercise 3.
Let n and p be as above. Show that up=u if and only if un=u.
Solution:
We want to show that, for n a unit vector in V and p=(1/2,1/2 n): up=u ⟺ un=u.
Let's first check un=u ⟹ up=u.
For u=(u0,u) to be a solution of un=u, we need
un = (u0,u) (0,n) = u = (u0,u)
(0 + u·n, 0 + u0n + i u⨯n) = (u0,u)
(u·n, u0n + i u⨯n) = (u0,u),
scalar part u0 = u·n and vector part u = u0n + i u⨯n.
The vector part u is therefore composed of two components: one proportional to n and one perpendicular to n. In the scalar part u0 = u·n only the component proportional to n contributes, while for the vector component perpendicular to n we saw in Exercise 2. that the scalar part is 0, because that general complex vector is in TnC. Thus we conclude that the solution of un=u can be decomposed into two independent parts, u = u1 + u2, where u1 = z(1,n), z being an arbitrary complex scalar or complex number, z = a + ib, and u2 = i u2⨯n is in TnC, each of which is a solution of un=u on its own.
For u1=z(1,n):
u1n = z(1,n)(0,n) = u1 = z(1,n)
z(0 + n·n,0 + n + i n⨯n) = z(1,n)
z(1,n) = z(1,n)
and for u2=(0,iu2⨯n)=(0,u2):
u2n = (0,iu2⨯n)(0,n) = u2 = (0,u2)
(0 + i (u2⨯n)·n,0 + 0 + i (iu2⨯n)⨯n) = (0,u2)
(0,-((n·u2)n - (n·n)u2)) = (0,u2)
(0,-(0 - u2)) = (0,u2)
(0,u2) = (0,u2).
Now we need to check if both solutions u1 and u2 are also solutions of up=u, where p=(1/2,1/2 n).
For u2=(0,u2)=(0,iu2⨯n) we already saw in Exercise 2. that it is a solution of up=u, so we only need to check u1=z(1,n):
u1p = z(1,n)(1/2,1/2 n) = u1 = z(1,n):
z(1/2 + 1/2 n·n,1/2 n + 1/2 n + 1/2 i n⨯n) = z(1,n)
z(1,n) = z(1,n).
Therefore, un=u ⟹ up=u.
For the other direction, up=u ⟹ un=u: in Exercise 2. we obtained from up=u the same conditions on the scalar and vector parts of u as in the first part of this Exercise 3., and the solution within TnC is our u2=(0,u2)=(0,iu2⨯n); here we also confirmed that u2 is a solution of un=u. So we would only need to check that u1 satisfies u1p=u1, which we just did, and that it then satisfies u1n=u1, which we also verified in the first part of this Exercise.
So, we already proved the other direction, up=u ⟹ un=u, which means that up=u if and only if un=u, or up=u ⟺ un=u.
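Both directions can also be verified numerically for the two independent solutions; a sketch (assuming numpy, with z and a chosen arbitrarily for illustration):

```python
# Sketch, assuming numpy: u1 = z(1, n) and u2 = (0, a + i a x n) satisfy
# both u n = u and u p = u for p = (1+n)/2.
import numpy as np

def mul(p, q):
    p0, pv = p
    q0, qv = q
    return (p0*q0 + pv @ qv, p0*qv + q0*pv + 1j*np.cross(pv, qv))

equal = lambda x, y: np.isclose(x[0], y[0]) and np.allclose(x[1], y[1])

n = np.array([0.0, 0.0, 1.0])
nq = (0.0, n)                                  # n as an element of A
p = (0.5, 0.5*n)
z = 2.0 - 3.0j
a = np.array([1.0, 2.0, 0.0])                  # real vector perpendicular to n
u1 = (z, z*n)
u2 = (0.0, a + 1j*np.cross(a, n))

for u in (u1, u2):
    print(equal(mul(u, nq), u), equal(mul(u, p), u))        # True True for both
```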
Exercise 4.
Find an error in the proof below. I can't, but it looks suspicious to me.
Statement.
Let n and n' be two different unit vectors. Then u satisfies both un=u and un'=u if and only if u=0.
Proof.
Set n-n' = v. Suppose v is a nonzero vector. Then we have uv = 0. That means (u·v, u0v+iu⨯v)=0. Thus u is perpendicular to v. Now, in u0v+iu⨯v = 0, the first term is proportional to v, the second is perpendicular. Thus both must be zero. It follows that u0=0, and u⨯v=0. Thus if v is not zero, then u=0.
Solution.
Neither Bjab nor I could spot or find an error in the proof. When asked why the proof looked suspicious, Ark replied in the comments:
This is that two left ideals corresponding to different n's do not intersect (excluding the zero vector). But left and right ideals intersect. It may have some philosophical and physical implications. I will start discussing it in the next post. They are spinor ideals.
A linear subspace S of A is called left ideal of A if for any u in A and s in S we have that us is in S. In other words S is invariant under the left action of A. 0 and A are two trivial left ideals. If there is a non-trivial left ideal that means that the representation L is reducible (from the definition of reducibility). So, we have found an infinite number of ways in which L is reducible - there is a left ideal for each unit vector n in V, and different n's produce different ideals. Spinors, by the standard definition, are elements of such a left ideal. Usually people choose n=e3.
At this point, it might be useful to show the minimal forms of our two independent solutions of up=u, where p is the general non-trivial Hermitian idempotent p=(1+n)/2. Obviously, the minimal form for u1=z(1,n) is (1+n), while for u2=(0,u2), where u2=iu2⨯n is in TnC, that is u2 = u2Re + iu2Im, we get:
u2 = u2Re + iu2Im = i u2⨯n = i (u2Re + iu2Im)⨯n =
i (u2Re⨯n + iu2Im⨯n) = (n⨯u2Im + iu2Re⨯n),
or u2Re = n⨯u2Im and u2Im = u2Re⨯n, which will be handy for the next Exercises.
Before going there, we may just notice that, for a given n, the particular left ideal of A is spanned by (1+n) and the corresponding u2 = iu2⨯n within TnC.
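The relations above, and the fact that an arbitrary combination of (1+n) and u2 stays in the left ideal, can again be checked with a short sketch (assuming numpy; a, alpha, beta are arbitrary illustrative choices):

```python
# Sketch, assuming numpy: for w = a + i a x n with a perpendicular to n,
# Re(w) = n x Im(w) and Im(w) = Re(w) x n, and alpha*(1+n) + beta*(0, w)
# still satisfies u p = u.
import numpy as np

def mul(p, q):
    p0, pv = p
    q0, qv = q
    return (p0*q0 + pv @ qv, p0*qv + q0*pv + 1j*np.cross(pv, qv))

n = np.array([0.0, 0.0, 1.0])
a = np.array([1.0, 2.0, 0.0])
w = a + 1j*np.cross(a, n)
print(np.allclose(w.real, np.cross(n, w.imag)))             # u2Re = n x u2Im
print(np.allclose(w.imag, np.cross(w.real, n)))             # u2Im = u2Re x n

p = (0.5, 0.5*n)
alpha, beta = 1.5 - 0.5j, -2.0 + 1.0j
u = (alpha*1.0 + beta*0.0, alpha*n + beta*w)                # alpha*(1+n) + beta*u2
up = mul(u, p)
print(np.isclose(up[0], u[0]), np.allclose(up[1], u[1]))    # u p = u
```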
Exercise 5.
Consider the following example:
Choose n = (0,0,1) = e3. Choose e1 = (1,0,0) for the real part of u2. Get the expression for u2. Define
E1=(1+n)
E2 = u2
Show that E1 and E2 span the left ideal of A. For this, calculate the action of (e0,e1,e2,e3) on E1 and E2 and express the results as linear combinations of E1 and E2.
Solution.
Choosing n = e3, we immediately get E1 = (1+e3). Then for E2 = u2 = u2Re + iu2Im, after choosing u2Re=e1, u2Im = u2Re⨯n = e1⨯e3 = -e2, we get E2 = e1 - ie2.
Acting with e0,e1,e2 and e3 from the left on E1 and E2, we get:
e0(E1 + E2) = E1 + E2,
e1(E1 + E2) = E2 + E1,
e2(E1 + E2) = iE2 - iE1,
e3(E1 + E2) = E1 - E2,
which can then be written in matrix form; for example, for e1 we have e1E1 = 0·E1 + 1·E2 and e1E2 = 1·E1 + 0·E2, i.e. the coefficient matrix [aij] = [[0, 1], [1, 0]],
where the resulting matrix [aij] transposed, i.e. [aij]T = [aji], represents the matrix acting on the basis.
In that way we get:
L(e0) : σ0 = Id = [[1, 0], [0, 1]], L(e1) : σ1 = [[0, 1], [1, 0]],
L(e2) : σ2 = [[0, -i], [i, 0]], L(e3) : σ3 = [[1, 0], [0, -1]],
which are in fact the Pauli σi matrices.
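The same bookkeeping can be automated. Below is a sketch (assuming numpy; L_matrix is just our illustrative helper that expands x·Ej back in the basis (E1, E2) by least squares) which reproduces the matrices above:

```python
# Sketch, assuming numpy: acting from the left with e0..e3 on E1 = 1 + e3 and
# E2 = e1 - i e2, and expanding the results in (E1, E2), gives the Pauli matrices.
import numpy as np

def mul(p, q):
    p0, pv = p[0], p[1:]
    q0, qv = q[0], q[1:]
    return np.concatenate(([p0*q0 + pv @ qv],
                           p0*qv + q0*pv + 1j*np.cross(pv, qv)))

e = [np.eye(4, dtype=complex)[k] for k in range(4)]    # e0, e1, e2, e3 as 4-vectors
E1 = e[0] + e[3]                                       # 1 + e3
E2 = e[1] - 1j*e[2]                                    # e1 - i e2
B = np.column_stack([E1, E2])

def L_matrix(x):
    # columns: coefficients of x*E1 and x*E2 in the basis (E1, E2)
    cols = [np.linalg.lstsq(B, mul(x, Ej), rcond=None)[0] for Ej in (E1, E2)]
    return np.column_stack(cols)

for k in range(4):
    print(f"L(e{k}) =\n{np.round(L_matrix(e[k]), 12)}")
```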
In an analogous manner we can find the right ideal of A; the only difference from the procedure for the left ideal is that we now look for the solutions of pu=u instead of up=u. It is pretty straightforward to get two independent solutions of pu=u: one in the form of the non-trivial Hermitian idempotent (1+n), and the other with the factors in the cross product interchanged compared to the left ideal, u2 = i n⨯u2 in TnC, which then gives u2Re = u2Im⨯n and u2Im = n⨯u2Re.
For the calculation of the matrix representation of R(ei), Ark suggested choosing u2Re = e2, which for the same n = e3 as in the case of the left ideal of A gives u2Im = n⨯u2Re = e3⨯e2 = -e1, so for the basis of this particular right ideal of A:
E1 = (1+n) = (1+e3)
and
E2 = u2 = e2 - ie1.
The resulting matrix representations of R(ei) are again the Pauli σi matrices, with a slight difference in comparison to the representation L(ei): σ2 now corresponds to R(e1), while σ1 corresponds to R(e2).
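An analogous sketch (same assumptions as above) for the right ideal confirms that R(e1) comes out as σ2, R(e2) as σ1, and R(e3) as σ3:

```python
# Sketch, assuming numpy: with E1 = 1 + e3 and E2 = e2 - i e1, expanding Ej*x
# in the basis (E1, E2) gives R(e1) = sigma2, R(e2) = sigma1, R(e3) = sigma3.
import numpy as np

def mul(p, q):
    p0, pv = p[0], p[1:]
    q0, qv = q[0], q[1:]
    return np.concatenate(([p0*q0 + pv @ qv],
                           p0*qv + q0*pv + 1j*np.cross(pv, qv)))

e = [np.eye(4, dtype=complex)[k] for k in range(4)]
E1 = e[0] + e[3]                                       # 1 + e3
E2 = e[2] - 1j*e[1]                                    # e2 - i e1
B = np.column_stack([E1, E2])

def R_matrix(x):
    # columns: coefficients of E1*x and E2*x in the basis (E1, E2)
    cols = [np.linalg.lstsq(B, mul(Ej, x), rcond=None)[0] for Ej in (E1, E2)]
    return np.column_stack(cols)

for k in (1, 2, 3):
    print(f"R(e{k}) =\n{np.round(R_matrix(e[k]), 12)}")
```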
Exercise 6.
Is the representation L of A on our left ideal from Exercise 5. irreducible or reducible?
Solution.
The answer would be "irreducible".
Ark additionally commented:
Representation reducible - the representation space has a non-trivial invariant subspace.
Representation irreducible - no non-trivial invariant subspaces.
Trivial means {0} or the whole space.
Representation space: space on which the representation acts. In our case: two-dimensional complex space spanned by E1 and E2.
That is: our left ideal.
And in Part 31, he also said:
Note. In the case we have discussed in previous posts, we have E = A, and, for ρ=L, ρ(u)x=ux. Therefore looking for a non-trivial invariant subspace is the same as looking for a non-trivial left ideal. For ρ=R, the right regular representation, looking for an invariant subspace is the same as looking for a right ideal.
Since we have found non-trivial left and right ideals of A, this suggests that both the left regular representation L(A) and the right one R(A) are in fact reducible, as Anna already reasoned by applying Schur's lemma. The same conclusion can be reached when we look at the bases; in the matrix representation, the basis for A was given by 16 4×4 matrices L(eμ)R(eν)=R(eν)L(eμ), that is, L(eμ) and R(eν) were represented by 4×4 matrices, while now, after finding non-trivial left and right ideals, we see that L(eμ) and R(eν) are represented by 2×2 Pauli σi matrices, also suggesting their reducibility.
However, Exercise 6. concerns the reducibility or irreducibility of the representation L of A, that is, of the left regular representation of A, on our left ideal, not on the whole A. If it were reducible, it would mean that we could find an invariant subspace where L(eμ) would be represented by 1×1 matrices, which are in fact just complex scalars, i.e. simple complex numbers. In other words, it would not be a matrix representation anymore.
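Following Ark's suggestion in the comments below, one can at least check the Schur-type statement numerically: the 2×2 matrices commuting with all three Pauli matrices form a one-dimensional space containing the identity, so only multiples of the identity commute with the whole representation. A sketch (assuming numpy):

```python
# Sketch, assuming numpy: the commutant of {sigma1, sigma2, sigma3} in Mat(2,C)
# is one-dimensional and contains the identity.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s1, s2, s3]

# Basis matrices E_ab of Mat(2,C); column k of M lists the flattened commutators
# [E_ab, sigma_i] for all three Pauli matrices.
basis = []
for a in range(2):
    for b in range(2):
        E = np.zeros((2, 2), dtype=complex)
        E[a, b] = 1
        basis.append(E)
M = np.column_stack([np.concatenate([(E @ s - s @ E).ravel() for s in paulis])
                     for E in basis])

print(np.linalg.matrix_rank(M))                 # 3, so the null space is 1-dimensional
print(np.allclose(M @ np.eye(2).ravel(), 0))    # the identity commutes with all of them
```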
Exercise 1. (simple exercise in abstract thinking) Let A denote the algebra of all 2x2 complex matrices. It is a *-algebra with unit if we define * as the Hermitian conjugate (complex conjugate and transpose), and if we take for 1 the unit matrix. So, we have a particular *-algebra.
Let E denote the space of all complex numbers. Usually we denote it as C.
Then E is a one-dimensional complex space. In E we have the standard scalar product (x,y). If x,y are two vectors in E, then (x,y)=x*y, the product of the complex conjugate of x by y.
Now we define a representation ρ of A on E: ρ(X) = det(X). If X is in A, then det(X) is a complex number. This complex number is interpreted as a linear operator acting on E by multiplication:
ρ(X)x=det(X)x.
Is this a representation?
Is this a *-representation?
Is this representation irreducible?
Saša would probably answer this last question instantly by noticing that since E is one-dimensional, there are no nontrivial subspaces at all! But I want this last question to be answered by applying Schur's lemma.
Exercise 2. (simple exercise in abstract thinking) Let A denote the algebra of all 2x2 complex matrices. It is a *-algebra with unit if we define * as the Hermitian conjugate (complex conjugate and transpose), and if we take for 1 the unit matrix. So, we have a particular *-algebra.
Let E denote the space of all column vectors (a,b), where a ,b are complex numbers. Usually we denote it as C2. Then E is a two-dimensional complex space. In E we have the standard scalar product (x,y). Vectors (1,0) and (0,1) form an orthonormal basis in E.
Now we define a representation ρ of A on E: ρ(X) x = Xx. X is in A, x is in E.
Is this a representation?
Is this a *-representation?
Is this representation irreducible?
I want this last question to be answered by applying Schur's lemma.
I can see strange letters in formula:
Ju = u⨯n = = |u| |n| ...
is π/2 (90°) ->
is π/2 (90°)
Indeed, sh***ed html. There may be conversion problems between different systems. I will be fixing them as they are being discovered.
(a⨯b)⨯c = (c·a)·b - (c·b)·a, the "·" stands for dot ...->
(a⨯b)⨯c = (c·a)b - (c·b)a
(extra dots can be confusing)
Re-uploaded the whole text in a different way. Looks better now.
Delete"Looks better now"
DeleteMuch better.
by unit vector n⊥(u,n) ->
ReplyDelete⊥(u,n) in subscript
Fixed.
(i/2 (u⨯n)·n), i/2 u⨯n + i i/2 (u⨯n)⨯n) = (0,i u⨯n) ->
Parentheses mismatched (and excessive dot)
Parenthesis fixed. I do not see an excessive dot.
Delete"I do not see an excessive dot."
DeleteOh, indeed that dot is necessary.
Some dots in:
(0, i/2 u⨯n - 1/2 ((n·u)·n - (n·n)·u) = (0,i u⨯n)
are excessive.
Yes, they may be excessive if we know that two vectors written next to each other mean their dot product, but I chose to show the dot explicitly, so as to distinguish it from a scalar multiplying a vector or another scalar. In other words, when there are no dots, there is a scalar in the product; when there is a dot, only vectors. FWIW.
@Bjab
Excessive dots removed. Increased font size for easier reading.
Added happy Saša image at the end.
"Excessive dots removed"
Removed but not everywhere.
There are still in
(u⨯n)⨯n = (n·u)·n - (n·n)·u
and in
(a⨯b)⨯c = (c·a)·b - (c·b)·a
and in
(0, i/2 u⨯n - 1/2 ((n·u)·n - (n·n)·u) = (0,i u⨯n)
My eyes are getting tired from this dot search. I think I did them all. Thanks.
Delete" I think I did them all."
DeleteYes, you did. Thank you!
@Bjab
You were right about excessive dots, according to the "standard" I mentioned in previous comment. Thanks!
@Ark
One (last) "e" missing in the next to last paragraph in "L(eμ)R(eν)=R(eν)L(μ)" -> L(eμ)R(eν)=R(eν)L(eμ).
It may some and philosophical implications. physical implications. ->
Good opportunity to correct/clarify this.
Fixed. Thanks.
Got scared of myself and ideals by that look and stare. :))
And that red font text was intended to be like a comment and removed upon checking. Well, now we can check all together if wanted, are there any errors concerning the math or its interpretation in the post?
That image at the end is much better. :)) Thank you for a smiling me representation. ;)
z(1/2 + 1/2 n·n,1/2 n + 1/2 n + i n⨯n) = z(1,n) ->
z(1/2 + 1/2 n·n,1/2 n + 1/2 n + 1/2 i n⨯n) = z(1,n)
Fixed. Thanks.
Thank you Bjab very much for a thorough cross-check. Much appreciated.
Concerning this last part:
"However, Exercise 6. concerns the reducibility or irreducibility of the representation L of A, that is, of the left regular representation of A, on our left ideal, not on the whole A. If it were reducible, it would mean that we could find an invariant subspace where L(eμ) would be represented by 1×1 matrices, which are in fact just complex scalars, i.e. simple complex numbers. In other words, it would not be a matrix representation anymore."
It is a little bit iffy for me. I would rather suggest to use Schur's Lemma to show that any operator acting on the left ideal and commuting with all the representation is necessarily a multiple of identity. It will certainly help to use the fact that the matrices representing the basis are, on this ideal, the same as the Pauli matrices.
Agree, my argumentation is more of a hand-waving variety, as is popularly said of arguments that don't stand on firm ground.
Proper application of Schur's lemma in proofs is still a bit beyond my current skill set, maybe Anna could chime in on this one?
Trying to realize the suggestion of Ark to "use Schur's Lemma to show that any operator acting on the left ideal and commuting with all the representation is necessarily a multiple of identity".
We know from Part 31 that the space of representation L on the left ideal is spanned by E1 and E2 basis elements and, in the matrix form, its general element has the form of an {a, 0, b, 0} matrix.
Commutation of a 2x2 matrix with a left ideal means that the left ideal is the right ideal at the same time. This can happen only if either a=0 or b=0. In this case, we are left with only one nonzero basis element, the 2-dim space of ideals is reduced to 1-dim space, and we will not consider this degenerate case.
So, let 'a' and 'b' be both nonzero. We should find when an arbitrary 2x2 matrix commutes with the ideal of the form {a, 0, b, 0}. By subtracting columns and multiplying by a number, one can bring an arbitrary matrix to the form {1, x, 0, y}.
Multiplying it by {a, 0, b, 0} from the left and then from the right and comparing the results, we see that they coincide only if x=0 and y=1, i.e., the matrix {1, x, 0, y} is the identity matrix.
According to Schur's lemma, the regular representation L on the left ideal is irreducible.
Not sure at all, just intuitive wandering around and about.
I am not entirely happy with your intuitive thinking. Which means I did not do a good job at all presenting the main ideas. I have to strive to be more clear and not spare words or examples. Therefore I will add a P.S. in which I will attempt to organize thoughts and put things in order. Then, perhaps, we will get it right....
Or, better, I will devote an entire post, tomorrow, to clarifying the picture.
There is, however, one exercise that needs to be done anyway: show that any 2x2 complex matrix that commutes with all three Pauli matrices is a multiple of the identity matrix. How does this fact relate to Schur's lemma?
In fact there is an even better exercise: show that any 2x2 matrix commuting with any TWO Pauli matrices is a multiple of identity. Why does commuting with two necessarily, automatically, imply commuting with the third?
Ark, it is the easiest part to make the technical calculations. The harder part is to understand what should be done. I'm happy that I guessed right that we should check commutativity of any 2x2 matrix with... what? Why do you insist on checking commutativity with Pauli matrices, although the space of left ideals is spanned by E1 and E2, which are {1,0,0,0} and {0,0,1,0} matrices? We can easily get that commutativity of {x,y,z,w} with {1,0,0,0} and {0,0,1,0} entails y=z=0 and x=w. Hence, it is proportional to the Id matrix. None of the linear operators except identity commutes with the representation ρ on the space of ideals, and by Schur's lemma this representation is irreducible.
DeleteIt seems to me that we are now uprooting my deepest misunderstanding.
Yes, with the Pauli matrices the situation is the same: commutation of any 2x2 matrix {x,y,z,w} with any two of the Pauli matrices entails y=-y and z=-z, that is y=z=0, and x=w. It is sufficient to show this fact for any two σi because the third one is the product of the two others, and if A commutes with two of them, then it commutes with the third one:
A σi = ±i A σj σk = ±i σj A σk = ±i σj σk A = σi A.
Is this at least satisfactory?
I think we need an example with a block-diagonal-structured matrix for clarity: reducible L and R in the case of the regular representation on the entire A, and irreducible ones in the case of the representation on the left ideals.
Hope you will not ask to do this as an exercise...)
I added Exercise 1, specially for you, at the bottom of this post.
A good, simple but, unfortunately, not a very illustrative example, because complex numbers are all multiples of unity and commute with everything, including the 2x2 complex matrices.
Added Exercise 2.
You mean the representation of matrices by their eigenvectors?
To begin with, I don't know whether it is a representation...
There is no mention of eigenvectors in the exercise. Please, explain the source of your confusion. I would like to understand the source of it.
Oh, I see at last what Ex.2 is about. This is simply the representation of matrices by themselves, right?
Then:
(1) It is a representation because the action of matrix B on vector x followed by the action of matrix A is the same as the action of matrix AB on x.
(2) It is a *-representation because the representation of a conjugated matrix A* is the conjugate of the representing matrix, A*.
(3) It is an irreducible representation because matrix multiplication is generally noncommutative, and the only linear operators acting on E (in the form of 2x2 matrices) that commute with all ρ(u) (2x2 matrices again) are multiples of the identity operator, i.e. of the unit matrix in our case.
This is a very good and complete solution!
Beethoven's Ode to Joy sounds in my soul after these words of yours! Yesterday it dawned on me what a representation actually is. It was my hopeless puzzle for years. You've made this miracle for me. My deepest gratitude.
The main problem in understanding Schur's lemma and all that was that I could not relate commutativity to containing a subspace. And that is because I searched for a commutant INSIDE the subspace, whereas it is just OUTSIDE, in the complementary orthogonal subspace, the existence of which indicates that there is enough room in our embracing space for two mutually orthogonal nontrivial subspaces, in each of which we can organize a representation. Phew. Exercise 6 is still to be done accurately.
So, Exercise 6 once again.
My great mistake was that I tried to take the matrices from the space spanned by E1 and E2 as the representing matrices.
But the representing matrices are not them but the matrices acting on them! Thanks to Saša we know that these are the Pauli matrices again.
Any Mat(2,C) operator acting on the space (E1, E2) can be expanded in the basis of the unit matrix and the three Pauli matrices, the latter being also the representing matrices; hence, we should check for the commutants of the Pauli matrices with themselves. And these we know to be nothing except matrices proportional to the unit matrix.
Therefore, according to Schur's lemma, the representation L on the space of the left ideal (E1, E2) is irreducible.
Does this have something to do with truth, or appear to be only my new illusions?
Saša, many thanks for this great deed. It would hardly have been possible to work those numerous exercises more accurately or to explain them more intelligibly.
When reading yesterday your dialog with Ark concerning Ex.3, I overlooked where the TWO elements spanning the left ideal came from. Now I've grasped that they are the two independent solutions u1 and u2.
As regards Ex.6, please don't make a Schur's lemma expert of me, it is definitely beyond my skill to come up with something better than you did. Of course I will keep trying.
And I like the first picture of you more, where you are concentrated and armoured for the battle with ideals, as it better corresponds to the furious spirit of our present adventure.
@Anna
Thank you for the kind words.
And please, don't think less of your skills or speak badly of them; they are great, and the way you handled the L(A) reducibility was impressive!
@Ark
If there are no more comments or improvements, can you please remove the red font text?
It itches me seeing it there, as it draws attention to those parts of the post which are frankly not important enough to jump out from the rest of it.
Thanks.
@Saša All red that my eyes could spot removed. Missing e added.
Thank you!