In Part 40, we embarked on a journey through the GNS construction, a profound bridge between abstract algebra and the tangible world of Hilbert spaces. Starting with a finite-dimensional *-algebra A and a positive functional (a state) f on it, we constructed a Hilbert space H, a *-representation ρ of A by linear operators acting on H, and a unit cyclic vector Ω in H. This elegant framework satisfied the condition:
f(a) = (Ω, ρ(a)Ω)
for all a∈A.
Yet, in our exploration, we treated these states as distant, almost alien entities—exotic creatures whose inner lives remained a mystery. They performed their roles impeccably, but we never paused to ask: What animates them? What is their essence? Do they hum with the quiet resonance of mathematical truth, or do they roar with the intensity of physical reality? Where do they come from, and what do they yearn to reveal? In this note, we will rectify this oversight. We will cultivate a deeper connection with these states, learning to appreciate their beauty and, perhaps, even growing to cherish them.
Friedrich Engels once invoked the English proverb: The proof of the pudding is in the eating. In our case, the states are the ingredients with which we prepare the pudding—a rich, nourishing dish of mathematical insight. But what is pudding without chocolate? To make our exploration more palatable, more resonant with the human spirit, we will sweeten it with a familiar tool: the algebra of complex matrices. Though our *-algebra may seem abstract, even when rooted in the geometric Clifford algebra of space, its isomorphism to this well-known matrix algebra brings it closer to our hearts. This matrix algebra, wielded with care, will be the chocolate that enriches our pudding—a tool, yes, but one that transforms the unfamiliar into the delightful.
As we proceed, let us remember that mathematics is not merely a cold, mechanical exercise. It is a dance of ideas, a symphony of structures, and a journey of discovery. By developing a deeper connection with these states, we not only illuminate their mathematical significance but also uncover the poetry hidden within their formalism. Let us eat the pudding, savor the chocolate, and celebrate the beauty of the journey.
So, what is a state? First of all, it is a linear functional on A. But our A is not only an algebra. It is also a Hilbert space. In fact, it is even a Hilbert algebra (see Part 26), with its scalar product satisfying:
<ba,c> = <b,ca*>. (0)
So, if f is a state, it is, first of all, a linear functional on a Hilbert space. I assume that the Reader has an elementary knowledge of Hilbert spaces. One of the first things we learn about Hilbert spaces is that every continuous linear functional on a Hilbert space is given by the scalar product with a certain vector (the Riesz representation theorem), in our case say Φf. Here this vector is an element of A: Φf∈A. Thus
f(a) = <Φf,a> for all a∈A. (*)
Well, why call it Φf? Perhaps we can use the same symbol f, now f denoting the element of the algebra representing the functional f? This would be eco-friendly, while the meaning would be clear from the context.
That was my first idea, but that was a bad idea. Here is why: The left-hand side of (*) is linear in f, but on the right-hand side we have the scalar product, which is anti-linear in the first argument, so writing f(a) = <f,a> would be inconsistent, it would make a bad pudding. Yet a simple change fixes that: we call f the element of the algebra that accomplishes the following:
f(a) = <f*,a>. (1)
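For readers who like to check this kind of thing numerically, here is a minimal sketch of the antilinearity point (assuming Python with numpy, and borrowing the matrix scalar product <a,b> = ½Tr(a*b) that is introduced further down; the variable names are just for illustration):

```python
import numpy as np

def sp(a, b):
    # scalar product <a,b> = 1/2 Tr(a* b) on Mat(2,C)
    return 0.5 * np.trace(a.conj().T @ b)

rng = np.random.default_rng(0)
f = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
lam = 2.0 + 3.0j

# the first slot is antilinear: <lam f, a> = conj(lam) <f, a>
print(np.isclose(sp(lam * f, a), np.conj(lam) * sp(f, a)))             # True
# putting the star on f, as in (1), restores linearity in f:
# <(lam f)*, a> = lam <f*, a>
print(np.isclose(sp((lam * f).conj().T, a), lam * sp(f.conj().T, a)))  # True
```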
Now it is consistent, and we can continue. The functional f should be a state, that is, it should be positive:
f(a*a) ≥ 0 for all a∈A.
This implies, as we have seen before, that f has the Hermitian property
f(a*)=f(a)*.
How is this property reflected in the properties of the functional f, represented through (1) as an element of the algebra denoted by the same symbol?
At first I tried to use the Hilbert algebra identity (0), but, unfortunately, this identity is written in a form that is not quite suitable for this purpose. So, it is better to reach for chocolate: we realize A as the algebra Mat(2,C), where the scalar product is given by
<a,b> = ½Tr(a*b).
Then
f(a) = <f*,a> = ½Tr((f*)*a) = ½Tr(fa),
and using the trace property Tr(uv)=Tr(vu):
f(a*) = ½Tr(fa*) = ½Tr(a*f) = <a,f>, (2)
while
f(a)* = <f*,a>* = <a,f*>. (3)
For f to have the Hermitian property the difference of (2) and (3) should be 0, so we get
<a, (f-f*)> = 0 for all a.
This implies f=f*.
Thus the algebra element f representing the functional f must be Hermitian.
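Here is a small numerical sanity check of this step (a sketch, assuming Python with numpy): for a Hermitian f the functional a ↦ ½Tr(fa) satisfies f(a*) = f(a)*, while for a generic non-Hermitian f it does not.

```python
import numpy as np

def functional(f, a):
    # f(a) = <f*, a> = 1/2 Tr(f a)
    return 0.5 * np.trace(f @ a)

rng = np.random.default_rng(1)
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

f_hermitian = m + m.conj().T   # a Hermitian element of Mat(2,C)
f_generic = m                  # a generic, non-Hermitian element

for f in (f_hermitian, f_generic):
    lhs = functional(f, a.conj().T)   # f(a*)
    rhs = np.conj(functional(f, a))   # f(a)*
    print(np.isclose(lhs, rhs))
# prints True for the Hermitian f and (generically) False for the other one
```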
What about the positivity condition? Let's use the chocolate again. The matrix f representing the functional f is a Hermitian matrix, so it has real eigenvalues. Suppose one of its eigenvalues λ is negative. Let p=p*=pp be the orthogonal projection on the corresponding eigenspace. Then fp=λp. So
f(p*p) = <f*,pp> = ½Tr(fp) = ½Tr(λp) = λ/2
if p projects on a 1-dimensional subspace, and it is λ if p projects on a 2-dimensional subspace. Either way this would be negative, contradicting positivity. Therefore the matrix representing f must have only positive (we use "positive" here to mean non-negative) eigenvalues.
Thus a positive matrix f½ exists, such that f = (f½)². Then
f(a) = ½Tr(fa) = ½Tr(f½f½a) = ½Tr(f½af½) = <f½, af½>.
We still have the condition f(1)=1. That means that <f½, f½> = 1. (Why?)
Thus f½ is a unit vector, and thus Tr(f) = 2. (Can you see it?)
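The whole chain can be tested numerically. A minimal sketch (assuming Python with numpy): build a positive f normalized by f(1)=1, take its positive square root through the spectral decomposition, and check the formula above together with the answers to the two questions.

```python
import numpy as np

def sp(a, b):
    # <a,b> = 1/2 Tr(a* b)
    return 0.5 * np.trace(a.conj().T @ b)

rng = np.random.default_rng(2)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
f = m.conj().T @ m            # a positive matrix
f = 2.0 * f / np.trace(f)     # normalize so that f(1) = 1/2 Tr(f) = 1

# positive square root via the spectral decomposition
w, v = np.linalg.eigh(f)
f_half = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

print(np.isclose(0.5 * np.trace(f @ a), sp(f_half, a @ f_half)))  # f(a) = <f^1/2, a f^1/2>
print(np.isclose(sp(f_half, f_half), 1.0))                        # f^1/2 is a unit vector
print(np.isclose(np.trace(f), 2.0))                               # Tr(f) = 2
```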
The formula
f(a) = <f½, af½>
looks almost exactly the same as the formula
f(a) = (Ω,ρ(a)Ω)
from Part 40: GNS construction. It will look even better if instead of "a f½" we write L(a)f½, where L denotes the left regular representation of A on A:
f(a) = <f½, L(a)f½>.
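One more small check that L deserves to be called a *-representation with respect to our scalar product: left multiplication is multiplicative and satisfies <L(a)x, y> = <x, L(a*)y>, i.e. L(a)* = L(a*). A sketch (assuming Python with numpy):

```python
import numpy as np

def sp(a, b):
    # <a,b> = 1/2 Tr(a* b)
    return 0.5 * np.trace(a.conj().T @ b)

rng = np.random.default_rng(3)
rnd = lambda: rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
a, b, x, y = rnd(), rnd(), rnd(), rnd()

# L(ab) = L(a)L(b): (ab)x = a(bx)
print(np.allclose((a @ b) @ x, a @ (b @ x)))            # True
# <L(a)x, y> = <x, L(a*)y>
print(np.isclose(sp(a @ x, y), sp(x, a.conj().T @ y)))  # True
```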
For it to look exactly the same we must ensure that f½ is a cyclic vector. But is it?
We will go into these little details (where the devil is hiding) in the next post.
By the way: in quantum theory mixed states are represented by "density matrices": positive matrices of trace 1. Thus our ½f is the standard density matrix of quantum theory.
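In other words, with rho = ½f we get the textbook formulas Tr(rho) = 1 and f(a) = Tr(rho a). A two-line check (a sketch, assuming Python with numpy):

```python
import numpy as np

rng = np.random.default_rng(4)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
f = m.conj().T @ m
f = 2.0 * f / np.trace(f)    # our state: f(1) = 1/2 Tr(f) = 1
rho = 0.5 * f                # the corresponding density matrix

a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
print(np.isclose(np.trace(rho), 1.0))                        # Tr(rho) = 1
print(np.isclose(np.trace(rho @ a), 0.5 * np.trace(f @ a)))  # Tr(rho a) = f(a)
```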
P.S. 26-01-25 14:38 In this post I tried DeepSeek AI to smooth out the introductory part (it is in a different font). Not too bad, I think.
The Russians Come to North Beach, San Francisco
I was contacted by allegedly agents of Putin’s FSB about three years ago. They said they were from Channel 5 Saint Petersburg, Russia. They took about two hours video of my talking about the topics above (not the Nimitz Tic Tac of course that had not yet been disclosed). I never heard anything back from them until a week before Donald Trump’s 70 year birthday in June 2016 during the campaign. They asked me about what I thought Trump’s attitude was about Putin, Crimea, Ukraine, NATO and Syria. I was surprised because they were not asking me about physics. Michael Savage has mentioned my name on his radio show periodically and Trump was interviewed on his show several times. I suspect that was the connection. Also we see from Chapter 6 I was involved with alleged agents of KGB (Igor Akchurin) when I was working behind the scenes helping to formulate Reagan’s SDI with Lawry Chickering, Cap Weinberg Jr and Marshall Naify. The parting remark of the Russian was a strong hint that Putin himself was following my ideas: “Keep the emails coming Jack. They really like you in Moscow.”
P.S. 28-01-25 7:57 Alex Levichev sent me today his new paper:
P.S. 28-01-25 7:36 Replying to a comment by Saša, here is a copy of one of my old web pages
https://x.com/CLT_Exam/status/1883915789121818823
- A First Course in Noncommutative Rings, T. Y. Lam
- Riemannian Geometry, Manfredo P. do Carmo
- p-adic Numbers: An Introduction, Fernando Q. Gouvêa
- Manifolds, Tensor Analysis, and Applications (Applied Mathematical Sciences) (v. 75), Ralph Abraham, Jerrold E. Marsden, Tudor Ratiu
Hopefully we won't mix/swap sugar for salt or vice versa in this pudding, as they are both of the same color and similar texture, like we did with wedge and cross products in earlier parts/posts of the series when discussing A as 8-dim real vs. 4-dim complex algebra. FWIW.
Can you please remind me how we get in (3) (a,f*) and not (a*,f)?
General Hilbert space scalar product property
(x,y)* = (y,x)
Thank you.
Finally i fixed this point for myself:
(a*b)* = b*a for Cl algebra product
and
<a,b>* = <b,a> for Hilbert space scalar product
In fact Laura has just prepared a wonderful pudding, I am eating it right now, and I think there is salt and sugar in it. It tastes very "Southern".
Bon appetit!
While we are still at coffee: it has happened more than once that Turkish coffee was served salted instead of sweetened. That thing was not drinkable at all. :))
You have to put a little bit of salt in sweet things to enhance the sweetness. And actually, it is butterscotch pudding.
"You have to put a little bit of salt in sweet things"
And vice versa - a little bit of sugar, for example, in borshch.
@Laura, i will highly appreciate a link to a recipe for the pudding if you recommend one.
@Ark, i'm sorry for trying to make a culinary blog out of our serious scientific talk, but just a little bit of it.
I use an old basic blanc mange recipe with variations.
Vanilla variation: 1 liter of milk, 8 rounded soup spoons of sugar, a cup of corn starch (226 grams), a pinch of salt. Mix together in a sauce pan. Put on medium heat and stir constantly until it thickens and boils gently. Cook for 30 seconds more. Remove from heat and add a tea spoon of vanilla. Put in small dishes in the refrigerator. Serve with a spoon of whipped cream.
Chocolate variation: everything the same only add 8 soup spoons of cocoa and an extra spoon of sugar.
Butterscotch variation: put the spoons of sugar in the saucepan with 4 spoons of water. Have the milk and cornstarch mixture ready before starting to caramelize the sugar. Heat the sugar and water stirring regularly (not constantly) until the sugar turns a light golden color. Pour the milk and cornstarch in all at once while stirring rapidly. Continue stirring until the whole mixture thickens and comes to a gentle boil. Cook for 30 seconds more, and remove from heat.
You can also use half cream and half milk for a very rich version.
Note: I actually make this using rice milk since I cannot digest milk protein. You can make it with coconut milk too. The version Ark was eating actually had no cow's milk in it.
Laura, thanks a lot! I will try chocolate version first.
It's a very versatile and basic recipe. You can add butter and egg yolks to make it richer. You can serve the vanilla version with fruit in various forms from fresh to preserves. You can make it sweeter if you like, or less sweet. The chocolate version is my favorite.
"we realize A as the algebra Mat(2,C), where the scalar product is given by
<a,b> = ½Tr(a*b)."
What does * mean?
When "a" is realized as a matrix, the tau involution becomes the Hermitian conjugation of the matrix.
Please elaborate this fragment:
"½Tr(λp) = λ/2
if p projects on 1-dimensional subspace, and it is λ if p projects on 2-dimensional subspace."
Yes, please, i also wanted to ask for the same!
Tr(λp) = λTr(p)
p is Hermitian. For a Hermitian matrix the trace is the sum of its eigenvalues (counting multiplicities). If p projects on a 1-dimensional subspace, its eigenvalues are 0 and 1. If p projects on a 2-dimensional subspace, its eigenvalues are 1 and 1.
In general the trace of an orthogonal projection operator equals the dimension of the space on which it projects.
Is that explanation sufficient?
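A quick numerical illustration of the last statement (a sketch, assuming Python with numpy): the trace of a rank-1 orthogonal projector is 1, and the projector onto the whole 2-dimensional space is the identity, with trace 2.

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.normal(size=(2,)) + 1j * rng.normal(size=(2,))
u = u / np.linalg.norm(u)

p1 = np.outer(u, u.conj())   # projection onto the 1-dim subspace spanned by u
print(np.allclose(p1, p1.conj().T), np.allclose(p1 @ p1, p1))  # p = p* = pp
print(np.isclose(np.trace(p1).real, 1.0))                      # trace = dimension = 1

p2 = np.eye(2)               # projection onto the whole 2-dim space
print(np.isclose(np.trace(p2), 2.0))                           # trace = dimension = 2
```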
"Is that explanation sufficient?"
DeleteProjection in the form p=(1+n was on how-many-dimensional subspace?
*
DeleteProjection in the form p=(1+n)/2 was on how-many-dimensional subspace?
"the trace of an orthogonal projection operator equals to the dimension of the space on which it projects" -
intuitively clear, but i will try to look for a rigorous proof.
So far, found that
if A is a matrix whose columns form a basis for a subspace, the orthogonal projection matrix onto the column space of A is: P = A(A^T A)^(−1) A^T https://www.geeksforgeeks.org/projection-matrix/
@Bjab
Good question. Take p=(1+n)/2. It is an element of the algebra A. It is a projection in the sense p=p*=pp. But as an algebra element it does not yet project on anything. Only when we have a *-representation of the algebra can it really project. We were dealing with two different representations: the left regular, when A acts on itself from the left, which is reducible, and the representation using Pauli matrices (or on the left ideal Ap), which is irreducible. In the first case p projects on a 2-dimensional subspace of a 4-dimensional space, in the second case p projects on a 1-dimensional subspace of a 2-dimensional space.
When we use p in the context of Tr operation on matrices, it is implicit that we are dealing with the second case.
I know it can be somewhat confusing.
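Here is a sketch of the two cases (assuming Python with numpy; writing left multiplication as kron(I, p), with column-stacking vectorization, is just one convenient way to represent the left regular action by a 4x4 matrix):

```python
import numpy as np

sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
p = 0.5 * (np.eye(2) + sigma3)       # p = (1 + n.sigma)/2 with n = (0,0,1)

# irreducible (Pauli matrix) representation: p is a rank-1 projection in C^2
print(np.isclose(np.trace(p), 1.0))  # projects onto a 1-dim subspace of C^2

# left regular representation: A acts on itself, a 4-dim complex space;
# with column-stacking vec(x), the map x -> p x becomes the 4x4 matrix kron(I, p)
Lp = np.kron(np.eye(2), p)
x = np.arange(4, dtype=complex).reshape(2, 2)
print(np.allclose(Lp @ x.flatten(order="F"), (p @ x).flatten(order="F")))  # same action
print(np.allclose(Lp @ Lp, Lp))      # still a projection
print(np.isclose(np.trace(Lp), 2.0)) # but now onto a 2-dim subspace of C^4
```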
"I know it can be somewhat confusing."
Yes, it is confusing.
"When we use p in the context of Tr operation on matrices, it is implicit that we are dealing with the second case."
"and it is λ if p projects on 2-dimensional subspace."
So what is this projection on 2-dimensional subspace? Identity?
@Bjab Yes!!!
@Bjab, couldn't you please explain why "this projection on 2-dimensional subspace is identity"? i did not understand...
If for simplicity n = {0,0,1}, then in explicit form, projector p = {1/2, 0; 0, 0}, right?
Acting by p on an arbitrary element of A we have
{1/2, 0; 0, 0}{a, b; c, d} = 1/2{a, b; 0, 0}
Is it an identity projection?
@Anna:
Delete"If for simplicity n = {0,0,1}, then in explicit form, projector p = {1/2, 0; 0, 0}, right?"
I would vote for p = {1, 0; 0, 0}, and you?
@Bjab, ok, you mean
{1, 0; 0, 0}{a, b; c, d} = {a, b; 0, 0}
but it does not make much difference. In addition, as far as we use representation in the form of Pauli matrices, this is the "second case", and i am lost completely.
@Anna
Try it in an irreducible representation - second column zero.
@Anna:
Delete" you mean
{1, 0; 0, 0}{a, b; c, d} = {a, b; 0, 0}"
I would rather mean:
{a, b; c, d}{1, 0; 0, 0} = {a, 0; c, 0}
Then, it is {1, 0; 0, 0}{a, b; 0, 0} = {a, 0; 0, 0}
Indeed, looks like projection to 1-dim space, but only in such a special case of ideal. And what about projection to 2-dim space? Bjab said that it is identity, how can we show this?
@Anna:
Delete"Then, it is {1, 0; 0, 0}{a, b; 0, 0} = {a, 0; 0, 0}"
{1, 0; 0, 0}{a, b; 0, 0} = {a, b; 0, 0}
"Then, it is {1, 0; 0, 0}{a, b; 0, 0} = {a, 0; 0, 0}"
DeleteSorry. i meant {1, 0; 0, 0}{a, 0; b, 0} = {a, 0; 0, 0}
second column is zero, as Ark recommended
when you take first column nonzero (left ideal), you multiply by p from the right.
when you take first row nonzero (right ideal), you multiply by p from the left.
Ok, let it be left ideal and multiplication by p from the right:
{a, 0; b, 0}{1, 0; 0, 0} = {a, b; 0, 0}
and what of it? Where is the promised projection from 2d to 1d space?
While when we act by p from the left, as i thought it should be when we do not solve an equation but simply project something, we get
{1, 0; 0, 0}{a, 0; b, 0} = {a, 0; 0, 0}
which looks at least like transition from a 2d matrix to 1d complex number.
when I multiply the matrix (a, 0; b, 0) by the matrix (1, 0; 0, 0) from the right, I get (a, 0; b, 0)
would rather write this last matrix as (a,0;b,0) - rows separated by ";"
solve an equation -> solve an equation to find the ideal subspace
We solve the equation up=u to get the left ideal. Of course if u is a solution, then up=u, so p acts from the right, as the identity on the space of solutions. For n=(0,0,1) the space of solutions, in matrix representation, consists of matrices with zero second column.
Do you agree with that?
"...so p acts from the right, as the identity on the space of solutions..."
Yes, i agree with all that, but i cannot see why p is a projector from 2d to 1d space in this case.
You are right.
But now that you have this left ideal (and it is two-dimensional now), if you act on this left ideal with p from the left (which you can do, since it is a left ideal), it projects the two-dimensional space onto a one-dimensional subspace.
It is finally clarified. So i was right when i tried to apply p from the left and obtain:
{1, 0; 0, 0}{a, 0; b, 0} = {a, 0; 0, 0}
but i didn't legitimize this by noting that we act on a left ideal.
Right?
Yes!
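For the record, the whole exchange above can be compressed into a few lines (a sketch, assuming Python with numpy): the left ideal Ap consists of matrices with zero second column, p from the right acts on it as the identity, and p from the left projects this 2-dimensional ideal onto a 1-dimensional subspace.

```python
import numpy as np

p = np.array([[1, 0], [0, 0]], dtype=complex)   # p = (1 + sigma3)/2

rng = np.random.default_rng(6)
x = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

u = x @ p                     # a generic element of the left ideal A p
print(u)                      # second column is zero: {a, 0; b, 0}
print(np.allclose(u @ p, u))  # p from the right is the identity on the ideal

v = p @ u                     # p from the left, acting on the ideal
print(v)                      # only the top-left entry survives: {a, 0; 0, 0}
```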
<f½, f½> = 1. (Why?)
Something like that:
<f½, f½> = |def| = ½Tr((f½)*f½) = | f½ positive and => self-conjugate(?) => (f½)*f½ = f | = ½Tr(f) = ½Tr(f 1) = f(1) = 1
Not sure about (f½)* = f½. If f½ is a positive number, it is ok, but does it hold when f½ is a positive matrix?
There are equivalent definitions of a positive matrix:
1) Hermitian matrix with non-negative eigenvalues
2) Matrix of the form m* m, where m is any matrix
3) Matrix M with the property
u* M u ≥ 0 for all vectors u, where u* is the complex conjugate transpose of u.
In our case we use 1) or 2) to ensure that f½ is Hermitian. In fact that is how f½ was defined: as the positive square root of f. So it has the same eigenvectors as f, and its eigenvalues
are positive square roots of eigenvalues of f.
Something more needs explanation?
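A small numerical companion to the three definitions (a sketch, assuming Python with numpy):

```python
import numpy as np

rng = np.random.default_rng(7)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M = m.conj().T @ m                               # definition 2): M = m* m

# definition 1): Hermitian with non-negative eigenvalues
print(np.allclose(M, M.conj().T))
print(np.all(np.linalg.eigvalsh(M) >= -1e-12))

# definition 3): u* M u >= 0 for every vector u
u = rng.normal(size=(2,)) + 1j * rng.normal(size=(2,))
q = u.conj() @ M @ u
print(bool(q.real >= 0 and abs(q.imag) < 1e-12))
```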
No, thank you very much, the explanation is exhaustive.
In addition, i see why some symbols disappeared from my previous comment - the readers should not use triangle brackets. Restoring the message in the normal form.
(f½, f½)=1. (Why?)
With the (1) and (2) definitions of a positive matrix from the above answer, we have:
(f½, f½) = |def| = ½Tr((f½)*f½) = | f½ Hermitian (f½)*f½ = f | = ½Tr(f) = ½Tr(f 1) = f(1) = 1
Thus our ½f is ->
Thus our f½ is
"P.S. 26-01-25 16:59 Jack Sarfatti, theoretical physicist, has a book (+ co-authors) with a similar title: "Destiny Matrix, 2020"."
Have you checked page 20 where it says:
"Sarfatti 2011 DARPA-NASA 100 yr Starship -> 2020", while developing some sort of gravity field equation on the whiteboard?
Seeing Sarfatti working in cooperation with DARPA, whatever has been published in that book immediately has a bit foul smell.
In addition to strange aura and vibe coming from many of the images in the manuscript, like a bit egotistic and even narcissistic guy...
And then on page 215 it said, "Sarfatti's most recent paper on retrocausality in quantum physics has been published by the American Institute of Physics AIP Conference Proceedings, 1841, 040003 where he also claims to be able to explain our consciousness as a simple universal natural phenomenon that will allow us to make conscious nano-electronic AI machines."
He does not seem like a trustworthy guy, in my book at least.
What's your general comment on him and the content of this manuscript posted on Academia site?
With high positioned friends he can't be completely trustworthy. Like Elon Musk. But good lies must always contain some truths. Once you know that, you use the Sieve of Eratosthenes. In the past I had some nice discussions with him. I added a P.S. with traces of one such exchange.
Ark, your furious discussion with RKiehn2352atxaol.com impresses greatly. Now i know that "Lorentz force does not pop out of Lie derivative" and even have a vague idea why it does not.
So many interesting themes slightly aside from the present Blog-tour. Hope, someday you take these tunes also and arrange them in your special manner: beauty and clarity.
"mathematics is not merely a cold, mechanical exercise. It is a dance of ideas, a symphony of structures, and a journey of discovery.
poetry hidden within their formalism" - thank you again for making it accessible for a wider range of admirers.
@Ark.
In the post I read (in boldface):
"Thus our ½f is the standard density matrix of quantum theory."
½f or rather f½ ?
Here ½f is correct. In quantum theory density matrix, usually denoted rho, gives expectation value through the formula
<A> = Tr(A rho)
In particular it is required that Tr(rho)=1. If we want f(a) to be the expectation value, and we have f(a) = ½Tr(fa), we must take rho = ½f. But this is because we have defined <a,b> = ½Tr(a*b). Probably it would be better to define <a,b> = Tr(a*b), but it is too late for that. My fault.
(a*,b) on the left hand side of =. HTML ate the Dirac-type brackets.
(a*b). Sorry.
"(a*b). Sorry."
I don't get it.
I don't get also the comment at 3:12 PM - please rewrite it not using > but ).
Rewrite:
Here ½f is correct. In quantum theory the density matrix, usually denoted rho, gives the expectation value E(a) of an observable "a" through the formula
E(a) = Tr(A rho)
In particular it is required that Tr(rho)=1. If we want f(a) to be the expectation value, and we have f(a) = ½Tr(fa) , as in the post, we must take rho = ½f . But this is because we have defined the scalar product (a,b) =½Tr(a*b). Probably it would be better to define (a,b) =Tr(a*b), but it is too late for that. My fault.
"Probably it would be better to define (a,b) =Tr(a*b), but it is too late for that. My fault."
Scalar product (a,b) must equal ½Tr(a*b) (to be consistent), so this was not your fault.
What is A in:
E(a) = Tr(A rho)
Should be Tr(a rho). Sorry.
DeleteStarted reading in a bit more serious mode your paper "Theory of Kairons" (2009), and there on its 2nd page it said:
ReplyDelete"The standard, linear and continuous time is associated with the name of the “dancer” time - Chronos, while the god of the discontinuous time, the “jumper”, is called Kairos ^2",
and in Footnote 2:
"2 More on this subject in the forthcoming paper “Some aspects of contemporary Kairicity ” by P. Anges and the present author.".
However, on your list of publications,
https://drive.google.com/file/d/1nL_c4CTL1VJGWFGnkbjfI6oDeXzbCXJn/view
I don't see that paper. Can you please provide a link to it?
Also, at the end of Ch 1, it said:
"This paper will be purely mathematical. A possible physical interpretation of the results as well as a generalization to the case of Spinning Kairons, using Clifford algebraic techniques, will be given in a forthcoming paper."
Was that "forthcoming" paper published, and if so, can you provide a link to it?
Thanks.
The collaboration came to a sudden and unfortunate end. Reasons are described in this long thread.
But I am planning to return to the subject. It is still on my mind.
"The collaboration came to a sudden and unfortunate end. Reasons are described in this long thread."
Missed that completely. Thanks for the link to that thread.
And OK, I'll get from the Kairons paper what I can digest at the moment, and hopefully fill in the gaps in my knowledge base from the references therein. Although you said that the Kairons paper needs some simplifying, even for your own taste, the symbols therein are slowly but steadily becoming more understandable and intelligible. Step by step, and maybe not so far in the future I'll be able to get the gist of it. ;)
@Ark, may i return to your remark in discussion after Part 39 for a moment?
You said that "The set of ALL isotropic vectors usually forms a cone, not a vector subspace".
At the same time, the light cone is a realization (not the only one) of the absolute of Minkowski space. An absolute of a space is its ideal, right? Hence it follows that the light cone is an ideal of Minkowski space V, and its endomorphisms (motions, Lorentz transforms) are spinor representations of some algebra built on V. What is this algebra, not by any chance Cl(3)?
" An absolute of a space is its ideal, right?"
You would have to define what "an absolute" is.
According to what I understand an "absolute" is an invariant of a group of transformations. Light cone is invariant under Lorentz transformations and dilations. I have never seen a statement that an absolute must be an ideal.
On the other hand, by definition, a left ideal of Cl(V) is invariant under left action of invertible elements of the Clifford algebra, which is SL(2,C) extended by dilations. So, we may call it an absolute. It is a different absolute.
By 'absolute' i mean just infinitely distant points of a space:
https://ru.wikipedia.org/wiki/%D0%98%D0%B4%D0%B5%D0%B0%D0%BB%D1%8C%D0%BD%D0%B0%D1%8F_%D1%82%D0%BE%D1%87%D0%BA%D0%B0
(cannot find an English version of the definition).
The distance from an ordinary point to any point of light cone is infinity, therefore, the cone is an absolute.
As regards the concept of ideal, i suppose that multiplication of any element by infinitely large element gives again an infinitely large element, so the place of all infinitely distant elements can be seen as an ideal.
Of course this is a very simplified consideration, but i'm trying to find approach to the Rozenfeld's idea that 'spinor coordinates are flat generators of absolutes'.
'The distance from an ordinary point to any point of light cone is infinity, therefore, the cone is an absolute'
this argumentation is wrong, i am sorry, but the fact that the light cone is an absolute of Minkowski space is nevertheless true.
"By 'absolute' i mean just infinitely distant points of a space:"
This description is within a hyperbolic geometry. We do not have hyperbolic geometry. We have either Euclidean (for space) or Minkowski (for space+time) metric.
At some point in the future I will, perhaps, discuss the conformal compactification. Then there will be "points at infinity". So far the only point at infinity that appeared here was the north pole on the 2-sphere, which is at infinite distance from any point in the plane.
Ark, perhaps, i distorted these ideas, but they are not mine, they stem from Vadim Varlamov. Let me translate it as close to the original as possible:
Delete"(1) The light cone is the realization of the absolute of Minkowski space (but not the only one). Penrose built his two-spinor calculus (the Newman-Penrose formalism) on the light cone".
(2) "According to Rosenfeld, the coordinates of the spinors are flat generators of absolutes".
"but the fact that the light cone is an absolute of Minkowski space is nevertheless true."
For this to be true you would have to precisely define what "absolute" is. First the framework within which it is being defined, then the definition.
"they stem from Vadim Varlamov. "
Well, then Varlamov is expressing his thoughts in a poetical way. This is not mathematics. This is his "informal talk".
Agreed, this is informal talk. For rigorous treatment we have to study that 'aspirin-needed' paper of V.V. about Rozenfeld's geometric concept of spinors.
DeleteHi.
I try to follow your conversations. It's not easy ;=))
In my opinion, and after reading this recent paper (https://link.springer.com/article/10.1007/s00006-024-01368-1) : Self-Dual Maxwell Fields from Clifford Analysis (C. J. Robson)
I'm very happy to understand it (because it's very close to my ideas)... This paper is very close to Kassandrov's paper, where the Clifford-Cauchy-Riemann condition leads to seeing the Maxwell equation and the Dirac equation just as a CCR condition on a multivector function on Cl(3, 1). Cl(3,0) would be better, as Kassandrov said...
But, at the end, Robson says: "It is also worth noting here that Hiley and Callaghan [17,18] have shown that a general multivector in Clifford Algebra can be used to define a quantum wavefunction. This is another angle to explore."
Is it far from your discussions ?? I hope not ;=)
Alain, thank you for the paper of Robson. I have a look at it and want to read it attentively.
It is worth noting that as long ago as in 1935 Yury Rumer showed that, when written in spinorial form, the system of Dirac equations for a particle of zero mass transforms into the system of Maxwell's equations, see
https://ikfia.ysn.ru/wp-content/uploads/2018/01/Rumer1936ru.pdf
pp.72-73 formula (5.21)
In what follows, Rumer warns that despite this deep relation between these two equations, there is also a principal difference owing to the physical difference between the wave fields of the photon and the electron. If you wish, i can translate a piece of the following text explaining the difference in more detail.
@Alain There is a very good little book: D.J. Garling, "Clifford Algebras: An Introduction", CUP 2011. Ch. 9.3 is "Maxwell's equations". Nicely done.
The last comment was from me ;=))
Thank you very much Anna. I am a little surprised to imagine that a man in 1935 had already seen the connection between Dirac and Maxwell's equations. Is the connection also made with the Clifford-Cauchy-Riemann condition in Cl(3, 0)?
Moreover, I'm friends with Olivia Caramello, a very famous Italian mathematician. She wrote a book (a big one ;=) about the toposes of Grothendieck. https://www.oliviacaramello.com/Papers/CaramelloTheUnifyingNotionOfTopos.pdf
I'm in love with Grothendieck ;=)) even if I don't understand his theories ;=)))
I know (epistemologically) that his mathematical theories will be useful for physicists !
In Robson's paper, I can see at the end some words about de Rham cohomology, and so on... This is precisely Olivia's specialty.
In the last lines of Olivia's book you can read : "Another natural subject of study for a possible topos-theoretic interpretation would be that of important dualities in physics such as the AdS/CFT correspondence and mirror symmetry."
Even without having a deep understanding of this very difficult subject, I know (I am certain) that this is the place to look !
And I think Robson is pointing in the same direction ;=))
Hodge self-duality and mirror symmetry are linked. There is a sort of dissymmetry between right/left (odd/even part of Cl(3,0)). An asymmetry which gives the ticking and the arrow of time...
I know that my words are closer to poetry than to mathematics, and this will not please Ark ;=)), but all my intuition pushes me to search ;=)