Let me begin this Sunday morning post with a thought-provoking quote from Britannica:
monad, (from Greek monas, “unit”), an elementary individual substance that reflects the order of the world and from which material properties are derived. The term was first used by the Pythagoreans as the name of the beginning number of a series, from which all following numbers derived. Giordano Bruno, in De monade, numero et figura liber (1591; “On the Monad, Number, and Figure”), described three fundamental types: God, souls, and atoms. The idea of monads was popularized by Gottfried Wilhelm Leibniz in Monadologia (1714). In Leibniz’s system of metaphysics, monads are basic substances that make up the universe but lack spatial extension and hence are immaterial. Each monad is a unique, indestructible, dynamic, soul-like entity whose properties are a function of its perceptions and appetites. Monads have no true causal relation with other monads, but all are perfectly synchronized with each other by God in a preestablished harmony. The objects of the material world are simply appearances of collections of monads.
This rich and multilayered description of monads has fueled centuries of metaphysical speculation, philosophical debates, and even mathematical explorations. Today, we’ll delve into an intriguing reinterpretation of the monad concept—not in the metaphysical sense, but through the lens of mathematics, specifically the Clifford geometric algebra Cl(V), which we’ve symbolically denoted by a single letter: A.
Our focus is not merely academic. We aim to explore whether A can serve as a foundational model for the “objects of the material world.” In doing so, we confront a compelling question: How does one move from a singular monad to a collection of monads? After all, the material world seems to consist of interactions among many such entities.
Here, two possibilities come to mind. The first is divine intervention—if God, with infinite power, can create one monad, then it is presumably trivial to create an infinite supply of them, stored in some vast metaphysical “container.” The second possibility, however, is far more intriguing: the monad A might possess a self-replicating property. If so, understanding this self-replication mechanism would require us to study the monad in exquisite detail. This is precisely the journey we are embarking on: a careful and meticulous examination of A.
Nowadays, variations of A have found widespread application in fields like quantum computing and quantum cryptography. Increasingly, papers and books are being published that explore A—or, as it's often called in this context, the "Qubit." However, for most researchers in these areas, A is not viewed as a monad in the philosophical or foundational sense. Instead, it is seen as a practical and versatile tool, a means to develop cutting-edge technologies, often driven by the demands of the military-industrial complex.
The questions these researchers ask about A are the questions of a user: what can A do, and how can it be harnessed? There is, however, a second category of people, those who ask instead what A is.
I place myself firmly in the second category of people—those who are drawn to the pursuit of understanding. That said, I wouldn’t hesitate to use the book for self-defense if the situation demanded it. But the essence of our endeavor here is not pragmatic or utilitarian; it’s a journey of curiosity and exploration, seeking to uncover the subtle and surprising truths that lie hidden within.
A contains geometry, algebra, and has a distinct
quantum-mechanical smell. Let us follow our noses and look closer into
the quantum-mechanical aspects of A.
A is an algebra, an algebra with involution "*". In fact, it is a C*-algebra and a von Neumann algebra - a baby version of them. There are many publications on the algebraic approach to quantum mechanics. It started around 1936 with the Birkhoff and von Neumann paper "The Logic of Quantum Mechanics" (Ann. Math. 37, No. 4, pp. 823-843, 1936). The textbook by J.M. Jauch, "Foundations of Quantum Mechanics", Addison-Wesley 1968, developed these ideas further, from a physicist's perspective. Algebras are used in quantum mechanics as "algebras of observables", which is somewhat confusing, since ordinarily only selfadjoint elements of the algebra are considered as "real observables". The product ab of two self-adjoint observables will not, in general, be self-adjoint, so real observables do not form an algebra under the algebra product (that is why Birkhoff and von Neumann were mainly focused on the Jordan product (ab+ba)/2). But there are simple observables, whose values are only 0 and 1. These form a "quantum logic". Jauch calls them "propositions". They are represented by self-adjoint idempotents: p = p* = p². The possible eigenvalues of a self-adjoint idempotent are 1 and 0. They are treated as logical "yes" and "no". Let us concentrate on such elements of A and see what we can get this way.
We write a general element of A as p = (p0,p) (or (p,p4) as in the previous post), where p0 is a complex scalar and p is a complex vector. Then p* = (p0*,p*), where * inside the parentheses stands for complex conjugation. The condition p* = p means that p0* = p0 and p* = p. In other words, p0 and p must be real.
Now we recall a general multiplication formula for A:
for p = (p0,p), q =(q0,q)
pq = (p0q0+p·q, p0q + q0p + i p⨯q) .
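For readers who like to check such formulas by machine, the multiplication rule can be coded directly. This is just a sketch in Python with NumPy; the function name `mul` and the (scalar, vector) pair representation are mine, not part of the post. Note that the dot product in the formula is bilinear, without complex conjugation, which is exactly what `np.dot` does on complex arrays.

```python
import numpy as np

def mul(p, q):
    """Product in A: pq = (p0*q0 + p.q, p0*q + q0*p + i p x q).
    An element of A is a pair (complex scalar, complex 3-vector);
    the dot product here is bilinear (no complex conjugation)."""
    p0, pv = p
    q0, qv = q
    return (p0 * q0 + np.dot(pv, qv),
            p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

# Basis vectors e1, e2, e3 as elements of A
e1 = (0j, np.array([1, 0, 0], dtype=complex))
e2 = (0j, np.array([0, 1, 0], dtype=complex))
e3 = (0j, np.array([0, 0, 1], dtype=complex))

# e1*e2 = i*e3 and e1*e1 = 1, as expected in Cl(V)
s, v = mul(e1, e2)
assert s == 0 and np.allclose(v, 1j * np.array([0, 0, 1]))
s, v = mul(e1, e1)
assert s == 1 and np.allclose(v, 0)
```

Checking a few such products on random elements is a good smoke test for the signs in the cross-product term.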
In particular for pp we get
pp = (p0² + p·p, 2p0p)
since p⨯p=0.
Thus pp = p implies
p0² + p·p = p0,
and
2p0p = p.
There are two cases: either p = 0 or p ≠ 0. If p = 0, then p0² = p0, and we have two solutions: p0 = 0 or p0 = 1. They correspond to the trivial Hermitian idempotents p = 0 and p = 1. On the other hand, if p is not the zero vector, then from the second equation we get p0 = 1/2. Substituting this value of p0 into the first equation we get
1/4 + p·p = 1/2, or
p·p = 1/4.
We deduce that p = n/2, where n is a unit vector in V.
Therefore a general form of a nontrivial Hermitian idempotent is:
p = (1+n)/2.
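A quick numerical sanity check of this derivation (a sketch; `mul` implements the multiplication rule quoted above, and the random unit vector n is my choice of test data):

```python
import numpy as np

def mul(p, q):
    # pq = (p0*q0 + p.q, p0*q + q0*p + i p x q)
    p0, pv = p; q0, qv = q
    return (p0 * q0 + np.dot(pv, qv),
            p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)                      # random real unit vector in V
p = (0.5 + 0j, (0.5 * n).astype(complex))   # p = (1 + n)/2
p0, pv = mul(p, p)
assert np.isclose(p0, p[0]) and np.allclose(pv, p[1])  # pp = p, a Hermitian idempotent
```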
This is our general propositional question in A: it is something about a direction in V. How to interpret it? We will be returning to this question again and again.
Exercise 1. Let n be a unit vector in V. Let Tn denote the space tangent to the unit sphere at n. Thus Tn can be identified with the subspace of V consisting of all vectors of V perpendicular to n. Let J be the linear operator on Tn defined by
Ju = u⨯n.
Show that J is well defined and that it defines a complex structure on Tn (i.e. that J² = -1). Show that J is an isometry, that is, <Ju,Jv> = <u,v> for all u,v in Tn.
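Not a substitute for the proof the exercise asks for, but here is a numeric spot check of both claims (the helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.normal(size=3)
n /= np.linalg.norm(n)                 # unit vector n

def random_tangent():
    # random vector in T_n: project out the component along n
    u = rng.normal(size=3)
    return u - np.dot(u, n) * n

J = lambda u: np.cross(u, n)           # Ju = u x n

u, v = random_tangent(), random_tangent()
assert np.isclose(np.dot(J(u), n), 0)                # J maps T_n into T_n
assert np.allclose(J(J(u)), -u)                      # J^2 = -1 on T_n
assert np.isclose(np.dot(J(u), J(v)), np.dot(u, v))  # isometry
```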
Exercise 2. Read the Wikipedia article "Linear complex structure", section "Relation to complexifications". Take J from the previous exercise and extend it by linearity to TnC = Tn + iTn. Find a solution of up = u (see the discussion under the previous post) for p = (1+n)/2 within TnC. Express it as an eigenvector of the operator iJ - to which eigenvalue does it belong?
Exercise 3. Let n and p be as above. Show that up = u if and only if un = u.
Exercise 4. Find an error in the proof below. I can't, but it looks suspicious to me.
Statement. Let n and n' be two different unit vectors. Then u satisfies both un = u and un' = u if and only if u = 0.
Proof. Set n - n' = v. Suppose v is a nonzero vector. Then we have uv = 0. That means (u·v, u0v + i u⨯v) = 0. Thus u is perpendicular to v. Now, in u0v + i u⨯v = 0 the first term is proportional to v, the second is perpendicular to it. Thus both must be zero. It follows that u0 = 0 and u⨯v = 0. Thus if v is not zero, then u = 0.
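For what it's worth, the statement itself survives a machine test: writing right multiplication by (0, n) as a 4x4 matrix in the basis 1, e1, e2, e3, the joint solution space of un = u and un' = u comes out trivial for n ≠ n'. A sketch (function names and the random test vectors are mine):

```python
import numpy as np

def mul(p, q):
    # pq = (p0*q0 + p.q, p0*q + q0*p + i p x q)
    p0, pv = p; q0, qv = q
    return (p0 * q0 + np.dot(pv, qv),
            p0 * qv + q0 * pv + 1j * np.cross(pv, qv))

def right_mult_by(n):
    """4x4 matrix of the map u -> u(0,n) in the basis 1, e1, e2, e3 of A."""
    basis = [(1 + 0j, np.zeros(3, complex))] + \
            [(0j, row.astype(complex)) for row in np.eye(3)]
    M = np.zeros((4, 4), dtype=complex)
    for j, b in enumerate(basis):
        s, v = mul(b, (0j, n.astype(complex)))
        M[0, j], M[1:, j] = s, v
    return M

rng = np.random.default_rng(2)
n1 = rng.normal(size=3); n1 /= np.linalg.norm(n1)
n2 = rng.normal(size=3); n2 /= np.linalg.norm(n2)
stacked = np.vstack([right_mult_by(n1) - np.eye(4),
                     right_mult_by(n2) - np.eye(4)])
# full column rank: only u = 0 solves both un = u and un' = u
assert np.linalg.matrix_rank(stacked) == 4
```

Each single equation un = u has a two-dimensional (complex) solution space, so `right_mult_by(n) - I` has rank 2; stacking the two conditions kills everything.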
Exercise 5. Consider the following example:
Choose n = (0,0,1) = e3. Choose e1 = (1,0,0) for the real part of u1. Get the expression for u1. Define
E1=(1+n)
E2=u1
Show that E1 and E2 span a left ideal of A. For this, calculate the action of e0, e1, e2, e3 on E1 and E2 and express the results as linear combinations of E1, E2.
Exercise 6. Is the representation L of A on our left ideal from Exercise 5 irreducible or reducible?
To be continued.
P.S. 29-12-24 4:07 Anna in her comment under the last post mentioned pp. 60-61 of P. Lounesto's monograph, where he describes one minimal left ideal of Mat(2,C). Lounesto is concerned only with one proposition p, corresponding to n = (0,0,1). Thus he takes p = (1+e3)/2. Here we are interested in all such p. We are interested in both: algebra and geometry.
P.S. 29-12-24 14:25 In the last post I was talking about the fear of flying. And today I read the news and see that we have plane crashes on a daily basis. Strange is this world.
P.S. 30-12-24 7:33 This is my reply to Anna's comment concerning "observables". Why do we assume them to be Hermitian? One possible answer is that in a complex Hilbert space every operator A can be written as A = (A+A*)/2 + (A-A*)/2. The first term is Hermitian, the second anti-Hermitian, but then i(A-A*) is Hermitian. So, somehow, Hermitian operators are the basic building blocks.
But there is another way to answer this question. We have classical logic with logical "and", "or" and negation. This is modeled in set theory by the set-theoretical operations of intersection, union, and complement. Let us concentrate on logical "and". Instead of sets we can take characteristic functions of sets. Then intersection corresponds to the product of characteristic functions. In quantum logic - a noncommutative version of classical logic - logical "and" is modeled by intersection of subspaces of a Hilbert space. Subspaces are represented by orthogonal projections. So, we have the "logic of projections" in a rather natural way. Hermitian operators have spectral resolutions in terms of projections. Thus we call them observables. They can be analyzed in terms of yes-no questions. But normal operators (A is normal if it commutes with its Hermitian adjoint) also have this property. They are also good candidates for observables. Their eigenvalues are, in general, complex, but one can deal with complex numbers and try to give some meaning to them. On the other hand, if A is normal, then it can be written as B + iC, where B and C are Hermitian and commute. Thus they can be simultaneously diagonalized. So, there is no real gain in using normal operators.
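A small numerical illustration of this last point (a sketch; building a random normal operator via a unitary change of basis is my choice of construction):

```python
import numpy as np

rng = np.random.default_rng(3)
# Random normal operator A = U diag(z) U* with complex eigenvalues z
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
z = rng.normal(size=3) + 1j * rng.normal(size=3)
A = U @ np.diag(z) @ U.conj().T
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A is normal

B = (A + A.conj().T) / 2      # Hermitian part
C = (A - A.conj().T) / 2j     # also Hermitian: i times the anti-Hermitian part
assert np.allclose(B, B.conj().T) and np.allclose(C, C.conj().T)
assert np.allclose(A, B + 1j * C)   # A = B + iC
assert np.allclose(B @ C, C @ B)    # B and C commute, so A adds nothing new
```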
P.S. 31-12-24 11:15 From the Web:
Former President Jimmy Carter, in an interview for the January issue of GQ magazine, reveals how, on the recommendation of then-CIA director Stansfield Turner, he once authorized a psychic to make targeting decisions--while "in a trance"--for America's satellite surveillance system:
GQ: One of the promises you made in 1976 was that if you were elected, you would look into the [UFO] reports from Roswell and see if there had been any cover-ups. Did you look into that?
Carter: Well, in a way. I became more aware of what our intelligence services were doing. There was only one instance that I'll talk about now. We had a plane go down in the Central African Republic--a twin-engine plane, small plane. And we couldn't find it. And so we oriented satellites that were going around the earth every ninety minutes to fly over that spot where we thought it might be and take photographs. We couldn't find it. So the director of the CIA came and told me that he had contacted a woman in California that claimed to have supernatural capabilities. And she went in a trance, and she wrote down latitudes and longitudes, and we sent our satellites over that latitude and longitude, and there was the plane.
I assume the story, as reported and told by Carter, is true. Of course, Carter could have been told the story by some high-ranking military official who did not necessarily tell the whole truth. Nevertheless I wonder how using geometric algebra can help us in explaining, or at least providing room for, such phenomena?
P.S. 31-12-24 14:53 Happy New Year 2025. With Fireworks and Pauli matrices.
"Thus pp = p implies
p0² + p·p = p0,
and
2p0p = p."
There seems to be yet another possibility, though I don't know if it would be valid.
The first equation is also a quadratic equation for p0, which gives two solutions: p0 = 1/2 (1 ± Sqrt(1 - 4|p|²)); when the two roots are added, they also satisfy the second equation, 2p0 = 1, regardless of the value of |p|. When |p| = 0, we return to the two trivial cases, p0 = 0 and p0 = 1.
Would that also be a valid possibility?
If |p| > 0, then from the second equation we have p0 = 1/2. There is no escape from that.
I understand that, but what I am asking is if it would be valid to write 2p0 = 1 as 2p0 = (1/2 (1+Sqrt) + 1/2 (1-Sqrt)) = 1?
p0 is EITHER + OR -, UNLESS 1 - 4|p|² = 0, in which case indeed 2p0 = 1.
OK, so combining + and - together in the same expression is strictly forbidden, understood. In a way it basically screams that a superposition should not be considered on this level.
Thanks.
Right. Suppose you have the simple equation x² = 1. It has two solutions: x = +1 and x = -1. You propose to add them and get from this 2x = 0, which would give you a third solution x = 0. Strictly forbidden, as you have noticed.
Thank you.
That simple example x² = 1 makes it evidently and perfectly clear. Another instance where my mind played "games" on me. :)
And a nice demonstration that not all is subjective; there really are things that are objective, even if they are just simple mathematical truths.
In my opinion, the most remarkable thing about the monad is that it is a "soul-like entity whose properties are a function of its perceptions and appetites". This is precisely a definition of 'meaning': the meaning of information is the effect which it produces on the recipient.
Another remark is a bit closer to physics. Why should we take self-adjoint operators to represent observables? Because their spectrum is real and we need real values to describe reality. Ok. And why should we use idempotent operators? I can propose that idempotents are in some sense similar to absolutes: you make a step forward multiplying p*p = ..., but remain at the same place ...=p because this place is infinity.
Very valuable comment. I will answer the physics part in a special P.S. tomorrow.
"the meaning of information is the effect which it produces on the recipient"
Wouldn't that imply that the "meaning" of information would be changeable and basically subjective, just like the recipients of the information are? And even with the same recipients, receiving the same information at different points on their learning spiral might, and very often does, produce different effects on them.
I would expect that the "meaning" itself would be an intrinsic and inherent property, immutable and permanent, while the recipients' capability and receivership capacity change, and with them their ability to "read" properly more or less of the "meaning" of information.
For example, I might wake up one morning and decide to go full woke, and declare myself as whatever, let's say a cow. If then somebody says to me, "you're a cow", it might and probably will have a completely different meaning to me than to that other person, not to mention my reply back, "thank you very much, so are you", while in fact neither of us would be assuming the real, proper meaning of the word, or information bit, the cow.
The monad Cl(V) and its use as a basis seem highly related to things like self-adjoint operators, primitive idempotents, metrics, the Higgs sector, and the Cartan subalgebra, due to an affinity for being on the diagonal of a matrix, in particular for me the diagonals of a Hodge star map matrix.
@Saša "I would expect that the "meaning" itself would be an intrinsic and inherent property,..."
A property of what? Or of whom, to begin with. Let it be a human recipient, you or me, for example. When we are preoccupied by doing one of the exercises Ark offers us, do we hear a bird tweeting outdoors? I don't. There is exactly zero amount of information for me, though my brain receives this sound. In case we are not so busy, we perceive the tweeting and can extract some minor information (the spring will come soon!), but another bird will extract much more information and, most probably, with quite different meaning. That is why I agree that the meaning of information is a function of the state of a particular recipient and can be formally characterized by the change in this state produced by the input signal.
Frankly speaking, these ideas are not mine. Some time ago, Ark published a talk of Aleksey Krugly on his blog (on a different subject). Aleksey has an unpublished paper titled "On the meaning of information". Now I think he should finish it and publish it somewhere, since there is interest in this theme.
@John G. Yes, yes, all of those you mentioned, and the Hodge star transformation is especially magic. It is particularly the desire to understand a little about the mysterious meaning of these concepts that brought us here, to the Ark's blog.
"A property of what? or of whom, to begin with."
Of information itself, of an event or of a fact that the information speaks of.
Whether we hear the bird tweeting or not does not change the fact that it tweeted, and that's the information and its meaning.
If we did not hear it, it does not mean that the information is non-existent or that its meaning is null. If we did hear it, we might add different layers of interpretation to it, but the meaning remained unchanged, just like in the case when we did not hear it.
At least that's how I see it.
What I'm hinting at is that only God or the Universe in its limitless capacity knows and reads the information to its fullest extent, that is, knows Itself completely. All others, parts of It, know and read the information according to their levels of being and knowledge acquired.
For example, a cosmic ray muon ionizes an atom in our body. The information about that event and its meaning is surely known to that atom and the electron that got knocked out of it, even if we in all likelihood would never be aware of the incident happening. Does that lack of our awareness and knowledge make the information about the ionized atom non-existent and its meaning zero in the scope of everything there is?
Good morning ;=)) I am Ark's neighbor and friend. I am very interested in your exchanges. I hesitated for a long time to take part in your comments, but Ark asked me to ;=)) The HODGE STAR (and Hodge duality) is the KEY! And it's funny, this morning I read some lines on Wikipedia. I read this: "The Hodge star can also be interpreted as a form of the geometric correspondence between an axis of rotation and an infinitesimal rotation...". I had the intuition that the SPIN could be hidden here!? But it was just an intuition ;=) I'm writing a paper about all these ideas...
In our case the Hodge operator is simply multiplication by "-i" or by "i" (depending on convention).
Multiplying by i or (-i) makes a quarter turn (90°) of a vector ... so why are they talking of an "infinitesimal rotation"?
If you look at
https://ark-jadczyk.blogspot.com/2024/12/the-spin-chronicles-part-23-rotational.html
you will see that we take for rotations X = ie3 and exp(tX) = exp(ite3); X is an infinitesimal generator here since
exp(tX) = 1 + tX + ...
We have rotations around the 3rd axis. To get a unitary rotation we exponentiate not e3 but ie3.
The 90° rotation talked about here is in the complex plane, not in an ordinary real plane in V.
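This can be seen concretely in the 2x2 matrix picture, where e3 can be represented by the Pauli matrix sigma3 (one common convention, and what Exercise 5 leads to). A sketch; the sign in front of sigma2 depends on the orientation convention:

```python
import numpy as np

# With e3 represented by sigma3, exp(it*e3) becomes the
# diagonal unitary diag(e^{it}, e^{-it}).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
t = 0.3
U = np.diag([np.exp(1j * t), np.exp(-1j * t)])   # exp(i t sigma3)

# Conjugation by U rotates the (e1, e2) plane by the angle 2t
rotated = U @ s1 @ U.conj().T
assert np.allclose(rotated, np.cos(2 * t) * s1 - np.sin(2 * t) * s2)
```

So the unitary exp(ite3) implements an ordinary rotation around the 3rd axis, with the characteristic half-angle behavior (the plane turns by 2t) familiar from spin.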
The Wick rotation is coming ;=) I don't know why ... but I will find out.
It is coming, for sure, but first some more algebra will be needed.
I'm not sure I understand well, but what you call X = ie3 is equal to (e1e2e3)e3 = e1e2(e3e3) = e1e2 (because i = e1e2e3 and e3e3 = 1). So, you have a surface. In quantum mechanics, the transformation exp(−itH/ħ) describes time evolution. H is "energy" (units J) and ħ is (units J·s), so i·t·H/ħ is a surface too... So maybe exp(ite3) looks like a quantum evolution. No? I'm lost in calculus for dummies ;=))) (have a look just here https://en.wikipedia.org/wiki/Wick_rotation)
"So maybe the exp(ite3) looks like a quantum evolution"
Yes, you can look at it like that. See e.g. Larmor precession at:
https://physics.stackexchange.com/questions/294228/particle-with-spin-in-uniform-magnetic-field
OK ... I will try to be more precise in the future! ;=)) Have a good new year's eve ;=)
@Alain Cagnati "I'm lost in calculus for dummies ;=)))" The seeming simplicity of the calculations performed in this blog is a bit misleading. This simplicity is the kind that comes after complexity, not before it. Ark is trying to do the hardest job of making deeply hidden fundamental things evident, and we are trying to follow him. This is by no means an occupation for dummies.
Sorry, I wanted to say: it's me the dummy! Sorry ...
or (p,p4) as ->
or (p,p4) as
Thanks. Fixed.
A sometimes is upright or cursive instead of bold.
Not anymore. Thanks.
Hi, found some observational typos :)
"A contains geometry, algebra, and has a distinct quntum-mechanical smell."
"about algebraic approach to quanum mechanics. It has started around 1936"
Thanks for these publications.
Fixed. Thank you.
"And today I read the news and see that we have plane crashes on a daily basis. Strange is this world."
This is not at all strange if we accept the working hypothesis that Russian (special) services are "good" at preparing such events.
American special services are much better and more efficient, as they are all over the world and able to use practically unlimited unaccounted money. Israeli special services also have a good record.
This is true, but when it comes to plane crashes with civilian passengers, it is easy to attribute at least five such incidents in the last fifteen years to Russian services and not American or Israeli ones.
It is easy to make a number of wrong conclusions if you restrict yourself to the censored media controlled by American special agencies and Deep State money. That is what this money is for: making people draw the conclusions that they want. And it works, as we can see, pretty well.
We can't say whether the conclusions are 100% wrong or right. Nevertheless, the above-mentioned working hypothesis holds up quite well.
OK. Added Exercise 1 at the end of this post. It will be useful for the next post.
Added Exercises 3 and 4.
A few words about airplanes. Not falling, but flying. My favorite argument, borrowed from David Deutsch, is "Airplanes fly." This is the most wonderful thing, and it is an ironclad argument in favor of the right direction in which our minds lead us. Despite all forms of solipsism, we can realize our dreams, and flying airplanes are the best demonstration of this.
"This is the most wonderful thing, and it is an ironclad argument in favor of the right direction in which our minds lead us."
It should be noted, however, that an admittedly small number of people (e.g. the inhabitants of Hiroshima or Nagasaki) might have a completely different opinion on this subject.
Yet it is not science that is to be blamed for all evil, but politicians and people, including scientists, who blindly follow the official propaganda and let these politicians get elected to power.
It may work quite well for you, when you restrict yourself to the news controlled by the Deep State money. As I said: the total control of the media works quite well and makes most people draw wrong conclusions and state wrong hypotheses. Even supposedly "free" media like Twitter were strongly controlled. Now Twitter is less controlled, but still you can't publish a number of things on Twitter, and even if you can, you get punished by "limited visibility".
"It may work quite well for you".
That's right. It works well for me. This working hypothesis has no weak points so far. And it organizes things better than the statement that "strange is this world".
Well, by bringing your hypothesis forward while being silent about other, much bigger problems in the world, created by other powers, you just demonstrated your lack of objectivity and a "phobia". Which is not surprising, taking into account the efficient indoctrination by the controlled media. That is how this control is supposed to work, and, as we see in your example, it works.
"while being silent about other, much bigger problems in the world"
It was not me who first brought up the subject of aircraft accidents. If you bring up some "much bigger problem in the world" on your blog, maybe I might have something to say about it.
Ad Ex 1
It all results from vector calculus.
But in matrix language we have:
Let matrix J =
0, n3, -n2
-n3, 0, n1
n2, -n1, 0
then
J² =
-n2²-n3², n1n2, n1n3
n1n2, -n1²-n3², n2n3
n1n3, n2n3, -n1²-n2²
We know that n1u1 + n2u2 + n3u3 = 0.
So for example
(-n2²-n3², n1n2, n1n3) u =
= (-n2²-n3²)u1 + n1(n2u2 + n3u3) =
= (-n2²-n3²)u1 - n1n1u1 =
= -u1
Very good so far. Now the isometry....
Added Exercise 2.
In fact it can be done in a more elegant way using known identities for the vector (or cross) product.
"In fact it can be done in a more elegant way using known identities for the vector (or cross) product."
Didn't you notice my remark:
"It all results from vector calculus"?
"In fact it can be done in a more elegant way using known identities for the vector (or cross) product."
Ex 1.
J well defined:
Ju = u × n = |u| |n| sin(ϕ_(u,n)) n⊥(u,n) = |u| n⊥(u,n) = u⊥,
as |n| = 1 and ϕ_(u,n) = pi/2; thus J is well defined, as it assigns a unique u⊥ to u, "going" counter-clockwise in Tn from u by pi/2 about n (right-hand rule for the cross product).
J² = -1:
J²u = J(Ju) = Ju⊥ = u⊥ × n = (u × n) × n = (n·u)n - (n·n)u = 0·n - 1·u = -u,
where "·" is the dot (scalar) product, using the "double" cross product formula: (a×b)×c = (c·a)b - (c·b)a.
Isometry <Ju,Jv> = <u,v>:
<Ju,Jv> = <(u×n),(v×n)> = u⊥·v⊥ = |u⊥| |v⊥| cos(ϕ_(u⊥,v⊥)) = |u| |v| cos(ϕ_(u,v)) = u·v = <u,v>,
as the Hilbert space scalar product <u,v> = u0* v0 + u*·v, for general complex u = (u0,u) and v = (v0,v), turns into the "ordinary" dot product for u and v in V, and the angle between u⊥ and v⊥, ϕ_(u⊥,v⊥), is the same as the angle between u and v, ϕ_(u,v), as both u⊥ and v⊥ are "rotated" by pi/2 in the same direction from u and v.
It "ate" the <,> parts for some reason; hopefully they show correctly now.
It looks very nice!
It doesn't look nice.
Especially such things as:
"where "." is the dot (scalar) product and "double" cross product formula: (a×b)×c = (c.a).b - (c.b).a"
Thanks.
It's maybe better and more precise to drop the "going counter-clockwise" part, as it depends on the perspective, and just leave "assigns a unique u⊥ to u, by the right-hand rule for the cross product". And the isometry part is a bit "crunched", with some part missing, but understandable enough as it is.
For Ex. 2 and consequently Ex. 3 it might take some more time though.
Ad Ex 4.
"Find an error in the proof below. I can't, but it looks suspicious to me."
I can't find it either, but why does that proof seem suspicious to you?
I wasn't able to find an error either. FWIW.
The point is that two left ideals corresponding to different n's do not intersect (excluding the zero vector). But left and right ideals do intersect. This may have some philosophical and physical implications. I will start discussing it in the next post.
They are spinor ideals.
Have you explained what an ideal is?
A linear subspace S of A is called a left ideal of A if for any u in A and s in S we have that us is in S. In other words, S is invariant under the left action of A. 0 and A are two trivial left ideals. If there is a non-trivial left ideal, that means that the representation L is reducible (from the definition of reducibility). So, we have found an infinite number of ways in which L is reducible - there is a left ideal for each unit vector n in V, and different n's produce different ideals. Spinors, by the standard definition, are elements of such a left ideal. Usually people choose n = e3.
I will try to understand it.
"A linear subspace S of A is called a left ideal of A if for any u in A and s in S we have that us is in S."
I tried but I don't understand the above definition and its correspondence to our up=u equation.
Ok. Let
Ip = {s in A: sp = s}.
This is our set. I used the symbol s instead of u. But it is the same set, right?
Take any u in A. Suppose s is in Ip. Is then us also in Ip?
Well, if sp = s then usp = us, or, better,
(us)p = us.
Therefore us is also in Ip.
Better?
(us)p us ->
(us)p = us
"Better?"
Yes, thank you.
Need help, got into a bit of a confusion.
So, p = (p0, p1) = 1/2 (1+n), where n is a unit vector in V (p1 is the vector component of p), and u = (u0, u1), where u1 is the vector component of u.
From up = u, we get;
up = (u0, u1) (1/2, 1/2 n) =
= (1/2 u0 + 1/2 u1.n, 1/2 u0 n + 1/2 u1 + i/2 u1×n) = (u0, u1) = u,
where "." is dot and "×" is cross product, which leads to;
u0 = 1/2 u0 + 1/2 u1.n -> u0 = u1.n, (1)
u1 = 1/2 u0 n + 1/2 u1 + i/2 u1×n -> u1 = u0 n + i u1×n. (2)
For u to be in (Tn + iTn), that is in a "subspace" of V where vectors are perpendicular to n, u1 has to be u1 = i u1×n, and thus
u0 = i (u1×n).n = 0, as u1×n is perpendicular to n.
On the other hand, also u1 is in principle perpendicular to n for u to be in (Tn + iTn), thus we get;
u = i (u1×n) = i |u1| n⊥(u1,n) = i |u| n⊥(u,n) = i u⊥ = iJu,
for J and u⊥ from Exercise 1.
But, in either case, with such u of the form (0, i u×n), I can't get up=u.
However, if I take that in (2) u1×n is 0, i.e. that u1 and n are colinear, then I get, u1 = |u1| n_u = u0 n, or |u1| = u0 and n_u = n, which then also gives, u0 = |u1| > 0, and thus
u = a (1+n), where a>0, satisfies up = u.
For u = a(1+n),
un = (a,an)(0,n) = (an.n, an + i an×n) = (a, an) = a(1+n) = u.
In the other direction, from un = u, we then get;
un = (u0, u1) (0, n) = (u1.n, u0 n + i u1×n) = (u0, u1),
which leads to (1) and (2) as above, i.e. u that satisfies un=u is;
u = a(1+n), where a>0.
For u = a(1+n), up = (a, an) (1/2, 1/2 n) =
= (a/2 + a/2 n.n, a/2 n + a/2 n + i a/2 n×n) = (a, an) = a(1+n) = u.
Therefore, for p = 1/2 (1+n), where n is unit vector in V, up=u, which gives u = a(1+n), where a>0, iff un=u (as in Exercise 3.).
But, in that case, i.e. for u=a(1+n) or p = 1/2 (1+n), I can't get that for any v=(v0, v1) in A, vu or vp is of the form a(1+n) or 1/2 (1+n), that is I don't get that such p or u form a left ideal for A.
The equation up=u has two linearly independent solutions: one is u = p, the second is of the form u = (0,u1), with u1 satisfying u1 = i u1×n. The left ideal is spanned by these two solutions. Does that help?
The issue with u = (0, i u1×n) is that I don't get up=u with it;
up = (0, i u1×n) (1/2, n/2) = (0 + i (u1×n)·n, i/2 (u1×n) + 0 + i·i/2 (u1×n)×n) = (0, i/2 u1×n - 1/2 (u1×n)×n),
implying that such u is not a solution of up=u.
What am I missing?
But the (u1×n)×n term can be simplified. You know how.
Of course, but then up = (0, i/2 u1×n - 1/2 ((n·u1)n - (n·n)u1)).
Ahaaa, if u1 = i u1×n, then n·u1 = 0, and from the double cross product we get an additional i/2 u1×n, so in total up = (0, i u1×n) = u, which is in fact u = (0, -(u1×n)×n) = (0, u1). Nice, thanks!
OK, so if I got this right, for u = (u0,u1) in A, the left ideal of A is
u = |u1| (1, n + i n_u×n),
where n is an arbitrary unit vector and n_u is the unit vector of u1, both in V. Is that a correctly mathematically expressed statement?
And with that, the exercises are done, hopefully correctly.
Something is wrong with the formula
u = |u1| (1, n + i n_u×n)
Sure. And u1 is a complex vector. This needs to be remembered.
I am going to add another exercise, to have a particular example.
E2=u1
Deleteor
E2 = (0,u1)
E2 = (0,u1). But E2=u1 is also allowed.
DeleteI mean writing E2=u1 is also allowed. It has exactly the same meaning.
Thanks. And true, the two solutions for up=u, where p=(1/2, n/2), n is a unit vector in V, and u=(u0, u1) is in A, when combined give
u=(u1.n, u0 n + i u1×n),
that is, the conditions expressed by equations (1) and (2) from the previous comment, which looks a bit more appropriate.
Although I'm still a bit suspicious about it, as it does not give such a clear form for vu in the case of an arbitrary v=(v0, v1) in A.
In a sense that u1 = (0,u1)
You may be less suspicious upon completing the exercise.
"Nevertheless I wonder how using geometric algebra can help us in explaining, or at least providing a room for, such phenomena"
Yeah. The Universe rewards belief in fairy tales.
Indeed. Like it was the case with atoms of Democritus. But it takes time.
"But it takes time."
Sometimes.
Yet most often the reward is here and now.
The reward is in finding the explicit form of u1. And then getting the matrix representation of the basis of the algebra. Here and now.
So, in principle I can't write the two solutions of up=u as coming from a single u=(u0, u1) like I did in previous comments, but need to treat them separately, as linearly independent, and then use them as a basis for the left ideal. Hope I got that right.
For E2 = u1 = i u1×n, where Re(E2)=e1 and n=e3, I got E2 = e1 - ie2, while for E1 = 1 + e3, it seems a "single" u=(u0, u1) cannot be "composed" out of them to write the left ideal like I did in previous comments.
You got E1 and E2 right. Now act on them with ei (i=0,1,2,3), and express the results back through E1 and E2. Write the basis ei in a 2x2 matrix form.
I got:
e0 (E1 + E2) = E1 + E2,
e1 (E1 + E2) = E2 + E1,
e2 (E1 + E2) = iE2 - iE1,
e3 (E1 + E2) = E1 - E2.
For the matrix (2×2) representation, you mean what exactly?
Calculate, for example
e1 E1 = a11 E1 + a21 E2
e1 E2 = a12 E1 + a22 E2
Write the matrix for e1.
The same with e2 and e3.
Got almost exactly the Pauli sigma matrices: for e1 I got sigma1 and for e3 sigma3, the identity for e0, while for e2 I got -sigma2. Nice.
Is that OK, or should e2 also give exactly sigma2? I don't see a mistake in calculating it though.
Did you number the rows and columns as I suggested? Remember, matrices acting on the basis are transposed relative to those acting on coordinates. I took that into account in my numbering of the aij.
Aha, I hadn't transposed them, and as only the sigma2 matrix changes to -sigma2 when transposed, all the other sigma matrices being pure diagonal, it's explained.
So, with the choice of n=e3 and Re(u1)=e1, and the basis for the left ideal E1=(1+n) and E2=i(u1×n), we get the sigma matrices.
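For readers who want to reproduce this, here is a small numerical sketch (again assuming the (scalar, vector) pair model of A, with the thread's choices n = e3, Re(u1) = e1, so E1 = 1 + e3 and E2 = e1 - i e2). It acts with e0,...,e3 on E1 and E2 from the left and reads off the 2x2 matrices, with the a_ij numbered as suggested above (column j holds the coefficients of the image of E_j). It should yield the identity for e0 and the three Pauli matrices for e1, e2, e3:

```python
import numpy as np

def mul(x, y):
    """Product in A: (a0, a)(b0, b) = (a0*b0 + a.b, a0*b + b0*a + i a x b)."""
    a0, a = x
    b0, b = y
    return (a0 * b0 + np.dot(a, b), a0 * b + b0 * a + 1j * np.cross(a, b))

def flat(x):
    """Coordinates of an element of A in C^4: (v1, v2, v3, scalar)."""
    return np.append(x[1], x[0])

I3 = np.eye(3, dtype=complex)
e  = [(1.0, np.zeros(3, dtype=complex))] + [(0.0, I3[k]) for k in range(3)]

E1 = (1.0, I3[2])                 # E1 = 1 + e3
E2 = (0.0, I3[0] - 1j * I3[1])    # E2 = e1 - i e2
B  = np.column_stack([flat(E1), flat(E2)])   # 4x2 basis matrix of the left ideal

def matrix_of(x):
    """2x2 matrix a with x E_j = a[0,j] E1 + a[1,j] E2."""
    img = np.column_stack([flat(mul(x, E1)), flat(mul(x, E2))])
    a, *_ = np.linalg.lstsq(B, img, rcond=None)
    return a.round(12)

for k in range(4):
    print(f"e{k}:\n{matrix_of(e[k])}")
```

With this numbering convention no extra transposition is needed, and e2 comes out as sigma2 directly, matching the resolution in the comments above.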
Very nice, thank you for guidance to get that prize/reward!
And, as you commented yesterday, there really is an infinite number of possibilities to construct the basis (E1,E2).
Good. Now we understand how Pauli got his matrices :)
Tomorrow I will try to summarize the results.
Small correction:
sigma1 is not diagonal, but it does not change when transposed. FWIW.
Physically, the operator J is something like the momentum operator [radius-vector x velocity] as it rotates an element u in the tangent plane by the right angle pi/2 counterclockwise.
Thinking in terms of the coming New Year, I imagine a Big Clock; to rotate its hands we need to insert a key in the 3rd direction, along the normal to the Clock's plane.
I wonder, if we can think of Time as such a Wizard who turns our 3d world round and round again?
Yes. We need to turn time counterclockwise:
https://thebulletin.org/doomsday-clock/
Or maybe the World by spinning around and about also pulls the time with Itself? :)
Happy New Year!
@Ark, least of all i would like to cause such an ominous association on the New Year eve. But you are right, that clock really spoils the joy.
@Saša, that's an interesting view! Everyone must have its counterpart to interact with.
Happy New Year!
momentum --> angular momentum
Added Exercise 6.
Well, from a purely calculational point of view, from the basis of our left ideal, i.e. E1=1+e3 and E2=e1-ie2, we get:
L(E1) = L(e4) + L(e3) = L4 + L3,
L(E2) = L(e1) - iL(e2) = L1 - iL2,
where L1, L2, L3 and L4 are the basis of L(A), and they look like:
L(E1) =
{{1, -i, 0, 0},
{i, 1, 0, 0},
{0, 0, 1, 1},
{0, 0, 1, 1}},
L(E2) =
{{0, 0, 1, 1},
{0, 0, -i, -i},
{-1, i, 0, 0},
{1, -i, 0, 0}}.
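These 4x4 matrices can be checked numerically. A sketch, under the same (scalar, vector) model of A; the basis ordering (e1, e2, e3, e4 = 1) with columns holding the images of the basis elements is an assumption inferred from the listing above:

```python
import numpy as np

def mul(x, y):
    """Product in A: (a0, a)(b0, b) = (a0*b0 + a.b, a0*b + b0*a + i a x b)."""
    a0, a = x
    b0, b = y
    return (a0 * b0 + np.dot(a, b), a0 * b + b0 * a + 1j * np.cross(a, b))

I3 = np.eye(3, dtype=complex)
basis = [(0.0, I3[k]) for k in range(3)] + [(1.0, np.zeros(3, dtype=complex))]

def flat(x):
    """Coordinates in the ordering (e1, e2, e3, e4 = 1)."""
    return np.append(x[1], x[0])

def L(x):
    """4x4 matrix of left multiplication by x; column j = image of basis[j]."""
    return np.column_stack([flat(mul(x, b)) for b in basis])

E1 = (1.0, I3[2])                 # E1 = 1 + e3
E2 = (0.0, I3[0] - 1j * I3[1])    # E2 = e1 - i e2
print(L(E1))
print(L(E2))
```

The printed matrices agree with the ones listed above, including the block-diagonal shape of L(E1).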
According to https://sheaves.github.io/Representation-Theory-Irreducibility-Indecomposability/
as L(E1) is block-diagonal, then it should be reducible, but L(E2) is not block-diagonal, so it should be irreducible.
Maybe a hint at what to look for?
Hint:
https://math.stackexchange.com/questions/1714758/irreducible-implies-the-commutant-consists-of-multiples-of-identity
In the finite dimensional case (as is ours) "Banach" can be omitted. Every finite dimensional space is a Banach space.
In addition, following Anna's reasoning using Schur's lemma in a comment to the previous post, Part 29: there is also the representation R(A), every element of which commutes with every element of L(A). So R(E1) and R(E2) commute with L(E1) and L(E2), for E1 and E2 being the basis of our left ideal. That would mean there is another non-trivial representation on our left ideal, with all its elements commuting with this non-trivial L representation, implying that both would still be reducible.
Here we need to be more careful, because our left ideal is not a right ideal. The subspace spanned by E1 and E2 is NOT invariant under R(A). We need to solve the equation pu=p (instead of up=p as before) to find the right ideal! And so we have another exercise!
And probably you will easily guess the answer!
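Both points can be illustrated numerically: left and right multiplications always commute, by associativity (x(by) = (xb)y), yet the span of E1, E2 is not invariant under right multiplication. A sketch, under the same (scalar, vector) model of A and the same choices E1 = 1 + e3, E2 = e1 - i e2 as before:

```python
import numpy as np

def mul(x, y):
    """Product in A: (a0, a)(b0, b) = (a0*b0 + a.b, a0*b + b0*a + i a x b)."""
    a0, a = x
    b0, b = y
    return (a0 * b0 + np.dot(a, b), a0 * b + b0 * a + 1j * np.cross(a, b))

I3 = np.eye(3, dtype=complex)
basis = [(0.0, I3[k]) for k in range(3)] + [(1.0, np.zeros(3, dtype=complex))]

def flat(x):
    return np.append(x[1], x[0])

def L(x):  # left multiplication by x, as a 4x4 matrix
    return np.column_stack([flat(mul(x, b)) for b in basis])

def R(x):  # right multiplication by x, as a 4x4 matrix
    return np.column_stack([flat(mul(b, x)) for b in basis])

E1 = (1.0, I3[2])
E2 = (0.0, I3[0] - 1j * I3[1])

# L(E_i) and R(E_j) commute, by associativity of A:
for X in (E1, E2):
    for Y in (E1, E2):
        assert np.allclose(L(X) @ R(Y), R(Y) @ L(X))

# ... but span{E1, E2} is NOT invariant under right multiplication:
B = np.column_stack([flat(E1), flat(E2)])
v = flat(mul(E1, (0.0, I3[0])))              # E1 e1 = (0, e1 + i e2)
coeffs, *_ = np.linalg.lstsq(B, v, rcond=None)
assert not np.allclose(B @ coeffs, v)        # E1 e1 lies outside span{E1, E2}
```

So the commutant argument cannot be run inside the left ideal itself; the right ideal has to be found separately, which is what the next exercise asks for.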
Before that, regarding the irreducibility or not of L(A) on our left ideal: according to the hint you gave, we should look for projections P in the commutant L(A)'=R(A) of our left ideal and see if there is any P different from 0 or Id. If there is none, then L(A) on our left ideal is irreducible. Is that correct?
If so, what exactly is the projection P?
"We need to solve the equation pu=p (instead of up=p as before)"
You mean: pu=u (instead of up=u...
Regarding the right ideal, would it be the commutant of the left ideal?
Because then, for all u in the left ideal, we would just have su for our right ideal, right?
@Bjab
Yes, my typo.
@Saša
Do not worry about projectors. Think about commutant.
Re Exercise 6.
@Ark, what do you mean by "irreducibility" or "reducibility"? Definitions? Meaning?
Representation reducible - the representation space has a non-trivial invariant subspace.
Representation irreducible - no non-trivial invariant subspaces.
Trivial means {0} or the whole space.
Representation space: space on which the representation acts.
In our case: two-dimensional complex space spanned by E1 and E2.
That is: our left ideal.
Well, I chose another possibility for creating the left ideal, n=e1 and Im(u1)=e3, and got the new basis E1'=1+e1 and E2'=-e2+ie3. Then I looked at the intersection by equating an expression in the basis (E1,E2) with another in the basis (E1',E2'),
a1L(E1) + a2L(E2) = b1L(E1') + b2L(E2'),
and using the linear independence of the L(ei) got that
a1 = a2 = b1 = ib2,
which suggests to me that there is an invariant subspace in the ideal of A, spanned by just a(L(E1)+L(E2)), and therefore the representation L(A) on the left ideal of A would be reducible.
Not sure of my argumentation though, so a check-up is much welcomed. Thanks.
It is not clear to me what is it that you are trying to prove and what exactly you are doing. Perhaps state it as a Proposition and Proof. Then we will analyze it.
I understood wrongly what to do to prove the irreducibility or reducibility of the regular representation L on the left ideal of A, so I was looking for a non-trivial subspace in the left ideal, while our left ideal is already a non-trivial subspace in L(A), invariant under left multiplication.
"our left ideal is already a non-trivial subspace in L(A)"
In L(A)?
Yes, that's how I understood your note at the end of the new post, Part 31. A non-trivial left ideal is a non-trivial invariant subspace for left multiplication in the whole vector space of our consideration; hence linear combinations of L(E1) and L(E2), where E1 and E2 span the left ideal, make a non-trivial invariant subspace in L(A).
Did I understand that wrong?
For the right ideal of A, i.e. solving pu=u, using the same p=(1/2,n/2) where n is a unit vector in V, in a manner analogous to getting the left ideal, with the only difference being the position of the factors in the cross product, we get u = 1 + n + i n×u1, where the complex vector u1 satisfies u1 = i n×u1.
So far so good. Now take Re(u1) = e2, and calculate the matrix representation of R(ei).
Got E1=1+e3 and E2=e2-ie1, and from them the 2×2 matrices:
R(e1) = {{0, -i}, {-i, 0}},
R(e2) = {{0, 1}, {1, 0}},
R(e3) = {{1, 0}, {0, -1}}.
But, as R1 = -i R2, i.e. they are not linearly independent, I must have made a mistake. Will check it again.
Yup, an additional (-) sign for R(e1).
When corrected, we again get the Pauli matrices, with a small difference compared to the left ideal: here, with the right ideal, we have the interchange 1<->2, sigma2 = R(e1) and sigma1 = R(e2), while for the left ideal the correspondence was exact, i.e. L(ei) = sigma_i.
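The same numerical sketch works for the right ideal (assuming, as in the comment above, n = e3 and Re(u1) = e2, so E1 = 1 + e3 and E2 = e2 - i e1, and the same (scalar, vector) model of A). Acting with the e_i from the right should reproduce the interchange sigma2 = R(e1), sigma1 = R(e2), sigma3 = R(e3):

```python
import numpy as np

def mul(x, y):
    """Product in A: (a0, a)(b0, b) = (a0*b0 + a.b, a0*b + b0*a + i a x b)."""
    a0, a = x
    b0, b = y
    return (a0 * b0 + np.dot(a, b), a0 * b + b0 * a + 1j * np.cross(a, b))

def flat(x):
    return np.append(x[1], x[0])

I3 = np.eye(3, dtype=complex)
e  = [(0.0, I3[k]) for k in range(3)]       # e1, e2, e3

E1 = (1.0, I3[2])                 # E1 = 1 + e3
E2 = (0.0, I3[1] - 1j * I3[0])    # E2 = e2 - i e1, satisfies u1 = i n x u1
B  = np.column_stack([flat(E1), flat(E2)])

def R_matrix(x):
    """2x2 matrix a with E_j x = a[0,j] E1 + a[1,j] E2."""
    img = np.column_stack([flat(mul(E1, x)), flat(mul(E2, x))])
    a, *_ = np.linalg.lstsq(B, img, rcond=None)
    return a.round(12)

for k in range(3):
    print(f"R(e{k + 1}):\n{R_matrix(e[k])}")
```

Only the position of x in the product changes compared to the left-ideal computation, which is exactly where the 1<->2 interchange comes from.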
@Saša
You worked very hard. Can you write a whole post, "Solutions to exercises of Part 30", that contains all the information you have, in an easy exposition for other readers?
It can take a couple of days...
OK, can do.
Although I'm not sure if the 'solution' and its explanation for Exercise 6. are the correct ones.
Word format will be OK?
Don't worry, writing it down for other readers usually helps to make sure that things are correct. Format is html.
Or rtf format, as it is automatically converted to html when posting, I think.
By now I see why you have been struggling with math expressions on your blog; html is such an unfriendly language for that. :/
@Ark. Thank you for your P.S. 30-12-24 7:33 This is my reply to Anna's comment concerning "observables".
I still have to understand better the "logic of projections", but these ideas reminded me of the paper by Carlos Rodriguez, "UNREAL PROBABILITIES, Partial Truth with Clifford Numbers". The author presents an elegant marriage of probability theory and Clifford algebras, with a resulting natural substantiation of quantum mechanics. By his definition, the wave function 'psi' establishes a correspondence between propositions and elements of Cl(n).
Here is another ability of the wonderful Clifford algebras - to express probabilities! Indeed, standard logic is based on two elements, "true" or "false", which are only two points (a 0-dim circle). But this is insufficient, since we need all real values R for observables. Rodriguez shows how a Clifford-valued 'psi' can express partial truth and thus extend logic to the entire space R1 (a 1-dim circle, if we close it at infinity).
Thanks, Anna. This paper is still on my list to study and to understand.