Physical humbug

A good thing and a bad thing today:

1. The good thing is: I expanded my paper dealing with more advanced questions on this realist interpretation of QM (based on the mass-without-mass models of elementary particles that I have been pursuing). I think I see everything clearly now: Maxwell’s equations only make sense once the concepts of charge densities (expressed in coulomb per unit volume or area: C/m³ or C/m²) and currents (expressed in C/s) start making sense, which is only above the threshold of Planck’s quantum of action and within the quantization limits set by the Planck-Einstein relation. So, yes, we can, finally, confidently write this:

Quantum Mechanics = All of Physics = Maxwell’s equations + Planck-Einstein relation

2. The bad thing: I had an annoying discussion on ResearchGate on the consistency of quantum physics with one of those people who still seem to doubt both special and general relativity theory.

To get my frustration out, I copy the exchange below – it might be informative when you are confronted with weirdos on some scientific forum too! It starts with a rather nonsensical remark on the reality of infinities, and an equally nonsensical question on how we get quantization from classical equations (Maxwell’s equations and the Gauss and Stokes theorems), to which the answer has to be: we do not, of course! For that, you need to combine them with the Planck-Einstein relation!

Start of the conversation: Jean Louis Van Belle, I found Maxwell quite consistent with, for instance, Stokes’ aether model. Can you explain how he ‘threw it out’? It was a firm paradigm until Einstein removed its power to ‘change’ light speed, yet said “space without aether is unthinkable.” (Leiden ’21). He then mostly re-instated it in his ’52 paper correcting 1905 interpretations in bounded ‘spaces in motion within spaces’, completed in the DFM. ‘QM’ then emerges.

My answer: Dear Peter – As you seem to believe zero-dimensional objects can have properties and, therefore, exist, and also seem to believe infinity is real (not just a mathematical idealization), we are finished talking, because – for example – no sensible interpretation of the Planck-Einstein relation is possible in such circumstances. Also, all of physics revolves around conjugate variables, and these combine in products or product sums that have very small but finite values (think of the typical canonical commutation relations, for example): products of infinity and zero are undefined – in mathematics too, by the way! I attach a ‘typically Feynman’ explanation of one of these commutation relations, which covers the topic rather well. I could also refer to Dirac’s definition of the Dirac function (real probability functions do not collapse into an infinite probability density), or his comments on the infinities appearing in the perturbation theory he himself had developed – and from which he then distanced himself precisely because it generated infinities, which could not be ‘real’ according to him. I’ve got the feeling you’re stuck in 19th-century classical physics. Perhaps you missed one or two other points from Einstein as well (apart from the references you give). To relate this discussion to the original question of this thread, I’d say: physicists who mistake mathematical idealizations for reality obviously do not understand quantum mechanics. Cheers – JL

PS: We may, of course, in our private lives believe that God ‘exists’ and that he is infinite and whatever, but that’s personal conviction or opinion: it is not science, nothing empirical that has been verified and can be verified again at any time. Oh – and to answer your specific question on Maxwell’s equations and vector algebra (Gauss and Stokes theorem), they do not incorporate the Planck-Einstein relation. That’s all. Planck-Einstein (quantization of reality) + Maxwell (classical EM) = quantum physics.

Immediate reply: Jean Louis Van Belle, I don’t invoke zero-dimensional objects, infinity or God! Neither the Planck length nor Wolfram’s brilliant 10^−93 is ‘zero’. Fermion pair scale is the smallest ‘Condensed Matter’ but I suggest we must think beyond that to the condensate & ‘vacuum energy’ scales to advance understanding. More 22nd than 19th century! Einstein is easy to ‘cherry-pick’ but his search for SR’s ‘physical’ state bore fruit in 1952!

[This Peter actually did refer to infinities and zeroes in math as being more than mathematical idealizations, but then edited out these specific stupidities.]

My answer: Dear Peter – I really cannot understand why you want to disprove SRT. SRT (or, at least, the absoluteness of lightspeed) comes out of Maxwell’s equations. Einstein just provided a rather heuristic argument to ‘prove’ it. Maxwell’s equations are the more ‘real thing’, so to speak. And GRT then just comes from combining SRT and Mach’s principle. What problem are you trying to solve? I understand that, somehow, QM does not come across as ‘consistent’ to you. I do not have that problem: all the equations look good to me – I just have my own ‘interpretation’ of them, but I do not question their validity. You seem to suspect something is wrong with quantum physics somewhere, but I don’t see exactly where.

Also, can you explain in a few words what you find brilliant about Wolfram’s number? I find the f/m = c²/h = 1.35639248965213E50 number brilliant, because it gives us a frequency per unit mass which is valid for all kinds of mass (electron, proton, or whatever combination of charged and neutral matter you may think of), but that comes straight out of E = mc² and E = h·f, so it is not some new ‘God-given’ number or something ‘very special’: it is just a combination of two fundamental constants of Nature that we already know. I also find the fine-structure constant (and the electric and magnetic constants) ‘brilliant numbers’ but, again, I don’t think they are anything mysterious. So what is Wolfram’s number about? What kind of ratio, combination of functions, or new simplification of existing mainstream explanations does it bring? Is it a new proportionality constant – some elasticity of spacetime, perhaps? A combination of Planck-scale units? Does it connect g and the electric constant? An update of (the inverse of) Eddington’s estimate of the number of protons in the Universe based on the latest measurements of the cosmological constant? Boltzmann’s number and Avogadro’s constant (or, in light of the negative exponent, their inverses) – through the golden ratio or a whole new ‘holographic’ theory? New numbers are usually easy to explain in terms of existing theory – or in terms of what they propose to change to existing theory, no?

Perhaps an easy start would be to give us a physical dimension for Wolfram’s number. My 1.35639248965213E50 number is the (exact) number of oscillations per kg, for example – not oscillations of ‘aether’ or something, but of charge in motion. Except for scaling or coupling constants such as the fine-structure constant, all numbers in physics have a physical dimension, even if it is only a scalar (a number describing x units of something) or a density (x per m³ or m², per J, per kg, per coulomb, per ampere, etcetera – whatever SI unit or combination of SI units you want to choose).
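For what it is worth, the c²/h ratio quoted above is easy to check against the (exact) SI values of c and h; a quick sketch (the electron example at the end uses the CODATA 2018 electron mass):

```python
# Verify f/m = c^2 / h, the "frequency per unit mass" quoted above.
c = 299_792_458        # speed of light, m/s (exact in the SI)
h = 6.626_070_15e-34   # Planck's constant, J·s (exact in the SI)

f_per_kg = c**2 / h    # oscillations per second, per kg
print(f"{f_per_kg:.14e}")  # ~1.35639e+50 Hz/kg

# Example: the electron's Compton frequency from E = m·c^2 = h·f
m_e = 9.109_383_7015e-31   # electron mass, kg (CODATA 2018)
f_e = m_e * c**2 / h
print(f"{f_e:.4e} Hz")     # ~1.24e+20 Hz
```

So the number is indeed nothing more than a ratio of two defined constants, which is the point being made.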

On a very different note, I think that invoking some statement or a late paper of Einstein in an attempt to add ‘authority’ to some kind of disproof of SRT invokes the wrong kind of authority. 🙂 If you were to say Heisenberg or Bohr or Dirac or Feynman or Oppenheimer started doubting SRT near the end of their lives, I’d look up and say: what? But no. Einstein had the intellectual honesty to speak up, and to speak up rather loudly (cf. him persuading the US President to build the bomb).

As for the compatibility between SRT and GRT and quantum mechanics, the relativistically invariant argument of the wavefunction shows no such incompatibility is there (see Annex II and III of The Zitterbewegung hypothesis and the scattering matrix). Cheers – JL

[…]

Personal conclusion: I think I’ll just stay away from ResearchGate discussions for a while. They are not always good for one’s peace of mind. :-/

All of physics

This five-pager has it: all you ever wanted to know about the Universe. Electron mass and proton mass are seen as input to the model. To the most famous failed experiment in all of classical physics – the 1887 Michelson-Morley experiment, which disproved aether theories and established the absoluteness of lightspeed – we should add the Kamioka Nucleon Decay Experiment, which has found no sign whatsoever of proton decay. All the rest is history. 🙂

Post scriptum (26 April): I added another five-pager on fundamental concepts on ResearchGate, which may or may not help to truly understand what might be the case (I am paraphrasing Wittgenstein’s definition of reality here). It is on potentials, and it explains why thinking in terms of neat 1/r or 1/r2 functions is not all that helpful: reality is fuzzier than that. Even a simple electrostatic potential may be not very simple. The fuzzy concept of near and far fields remains useful.

I am actually quite happy with the paper, because it sort of ‘completes’ my thinking on elementary particles in terms of ring currents. It felt like the first time I truly understood the complementarity/uncertainty principle – and the first time I invoked it to make an argument.

The nuclear force and gauge

I just wrapped up a discussion with some mainstream physicists, producing what I think of as a final paper on the nuclear force. I was struggling with the apparent non-conservative nature of the nuclear potential, but now I have the solution. It is just like an electric dipole field: not spherically symmetric. Nice and elegant.

I can’t help copying the last exchange with one of the researchers. He works at SLAC and seems to believe hydrinos might really exist. It is funny, and then it is not. :-/

Me: “Dear X – That is why I am an amateur physicist and don’t care about publication. I do not believe in quarks and gluons. Do not worry: it does not prevent me from being happy. JL”

X: “Dear Jean Louis – The whole physics establishment believes that the neutron is composed of three quarks, gluons and a sea of quark-antiquark pairs. How does that fit into your picture? Best regards, X”

Me: “I see the neutron as a tight system between positive and negative electric charge – combining electromagnetic and nuclear force. The ‘proton + electron’ idea is vague. The idea of an elementary particle is confusing in discussions and must be defined clearly: stable, not-reducible, etcetera. Neutrons decay (outside of the nucleus), so they are reducible. I do not agree with Heisenberg on many fronts (especially not his ‘turnaround’ on the essence of the Uncertainty Principle) so I don’t care about who said what – except Schroedinger, who fell out with both Dirac and Heisenberg, I feel. His reason to not show up at the Nobel Prize occasion in 1933 (where Heisenberg received the prize of the year before, and Dirac/Schroedinger the prize of the year itself) was not only practical, I think – but that’s Hineininterpretierung which doesn’t matter in questions like this. JL”

X: “Dear Jean Louis – I want to make doubly sure. Do I understand you correctly that you are saying that the neutron is really a tight system of proton and electron? If that is so, it is interesting that Heisenberg, inventor of the uncertainty principle, believed the same thing until 1935 (I have it from Pais’ book). Then the idea died, because Pauli’s argument won: the neutron’s spin 1/2 follows Fermi-Dirac statistics, and this decided that the neutron is indeed an elementary particle. This would be a very hard sell if you now, after so many years, agree with Heisenberg. By the way, I say in my Phys. Lett. B paper, which uses a k1/r + k2/r² potential, that the radius of the small hydrogen is about 5.671 Fermi. But this is very sensitive to what potential one is using. Best regards, X.”

Quaternions and the nuclear wave equation

In this blog, we talked a lot about the Zitterbewegung model of an electron, a model that allows us to think of the elementary wavefunction as representing a radius or position vector. We write:

ψ = r = a·e^(±iθ) = a·[cos(±θ) + i·sin(±θ)]

It is just an application of Parson’s ring current or magneton model of an electron. Note that we use boldface to denote vectors, and that we think of the sine and cosine here as vectors too! You should also note that the sine and cosine are the same function: they differ only by a 90-degree phase shift: cosθ = sin(θ + π/2). Alternatively, we can use the imaginary unit (i) as a rotation operator and use the vector notation to write: sinθ = i·cosθ.

In one of our introductory papers (on the language of math), we show how and why this all works like a charm: when we take the derivative with respect to time, we get the (orbital or tangential) velocity (dr/dt = v), and the second-order derivative gives us the (centripetal) acceleration vector (d²r/dt² = a). The plus/minus sign of the argument of the wavefunction gives us the direction of spin, and we may, perhaps, add a plus/minus sign to the wavefunction as a whole to model matter and antimatter, respectively (the latter assertion is very speculative, though, so we will not elaborate on it here).
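The derivative claims are easy to check numerically. A minimal sketch, using toy values for the amplitude and angular frequency (not the electron’s actual parameters) and simple finite differences:

```python
import cmath

# Position on the ring as a complex number: r(t) = a·exp(i·ω·t).
# Toy values for illustration only: a = 1 m, ω = 2 rad/s.
a, w = 1.0, 2.0
r = lambda t: a * cmath.exp(1j * w * t)

# Central-difference derivatives at an arbitrary t.
t, dt = 0.3, 1e-5
v   = (r(t + dt) - r(t - dt)) / (2 * dt)          # velocity dr/dt
acc = (r(t + dt) - 2 * r(t) + r(t - dt)) / dt**2  # acceleration d²r/dt²

print(abs(v), a * w)        # tangential speed: |v| = a·ω
print(abs(acc), a * w**2)   # centripetal acceleration: |a| = a·ω²
```

The magnitudes come out as a·ω and a·ω², as stated, and the directions (tangential and radially inward) can be read off the complex phases.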

One orbital cycle packs Planck’s quantum of (physical) action, which we can write either as the product of the energy (E) and the cycle time (T), or as the product of the momentum (p) of the charge and the distance travelled, which is the circumference λ of the loop in the inertial frame of reference. (We can always add a classical linear velocity component when considering an electron in motion, and we may want to write Planck’s quantum of action as an angular momentum vector (h or ħ) to explain what the Uncertainty Principle is all about – statistical uncertainty, nothing ontological – but let us keep things simple for now.)

h = E·T = p·λ
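Plugging in CODATA values for the electron shows the orders of magnitude these relations imply; note that, by construction, both products come out at exactly h:

```python
# Ring-current bookkeeping for the electron: h = E·T = p·λ (SI units).
c   = 299_792_458          # m/s
h   = 6.626_070_15e-34     # J·s
m_e = 9.109_383_7015e-31   # electron mass, kg (CODATA 2018)

E   = m_e * c**2   # rest energy, ~8.19e-14 J
T   = h / E        # cycle time, ~8.09e-21 s
p   = m_e * c      # momentum of the lightlike charge
lam = h / p        # loop circumference = Compton wavelength, ~2.43e-12 m

print(f"E = {E:.3e} J, T = {T:.3e} s, lambda = {lam:.3e} m")
```

So the loop circumference in this model is the Compton wavelength of the electron, and the cycle time is of the order of 10⁻²⁰ s.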

It is important to distinguish between the electron and the charge, which we think of as pointlike: the electron is charge in motion. Charge is just charge: it explains everything, and its nature is, therefore, quite mysterious: is it really a pointlike thing, or is there some fractal structure? Of these things, we know very little, but the small anomaly in the magnetic moment of the electron suggests its structure might be fractal. Think of the fine-structure constant here, as the factor which distinguishes the classical, Compton and Bohr radii of the electron: we associate the classical electron radius with the radius of the pointlike charge, but perhaps we can drill down further.

We also showed how the physical dimensions work out in Schroedinger’s wave equation. Let us jot it down to appreciate what it might model, and appreciate why complex numbers come in handy:

∂ψ/∂t = i·(ħ/2m)·∇²ψ (Schroedinger’s equation in free space)

This is, of course, Schroedinger’s equation in free space, which means there are no other charges around and we, therefore, have no potential energy terms here. The rather enigmatic concept of the effective mass (which is half the total mass of the electron) is just the relativistic mass of the pointlike charge as it whizzes around at lightspeed, so that is the motion which Schroedinger referred to as its Zitterbewegung (Dirac confused it with some motion of the electron itself, further compounding what we think of as de Broglie’s mistaken interpretation of the matter-wave as a linear oscillation: think of it as an orbital oscillation). The 1/2 factor is there in Schroedinger’s wave equation for electron orbitals, but he replaced the effective mass rather subtly (or not-so-subtly, I should say) by the total mass of the electron because the wave equation models the orbitals of an electron pair (two electrons with opposite spin). So we might say he was lucky: the two mistakes together (not accounting for spin, and adding the effective mass of two electrons to get a mass factor) make things come out alright. 🙂

However, we will not say more about Schroedinger’s equation for the time being (we will come back to it): just note the imaginary unit, which does operate like a rotation operator here. Schroedinger’s wave equation, therefore, must model (planar) orbitals. Of course, the plane of the orbital may itself be rotating, and most probably is, because that is what gives us those wonderful shapes of electron orbitals (subshells). Also note the physical dimension of ħ/m: it is a factor which is expressed in m²/s, but when you combine that with the 1/m² dimension of the ∇² operator, you get the 1/s dimension on both sides of Schroedinger’s equation. [The ∇² operator is just the generalization of d²/dx² to three dimensions: x becomes a vector (x), and we apply the operator to the three spatial coordinates and get another vector, which is why we call ∇² a vector operator. Let us move on, because we cannot explain each and every detail here, of course!]
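Assuming the free-space equation has the standard form ∂ψ/∂t = i·(ħ/2m)·∂²ψ/∂x² (written here in one spatial dimension for simplicity), a quick finite-difference sketch confirms that a plane wave solves it when the dispersion relation ω = ħ·k²/(2m) holds; the units are toy natural units (ħ = m = 1), an assumption made purely for illustration:

```python
import cmath

# Check that psi = exp(i·(k·x − ω·t)) solves ∂ψ/∂t = i·(ħ/2m)·∂²ψ/∂x²
# when ω = ħ·k²/(2m). Toy units: ħ = 1, m = 1, k = 3 (assumptions).
hbar, m, k = 1.0, 1.0, 3.0
w = hbar * k**2 / (2 * m)               # dispersion relation

psi = lambda x, t: cmath.exp(1j * (k * x - w * t))

x, t, d = 0.4, 0.2, 1e-5
lhs = (psi(x, t + d) - psi(x, t - d)) / (2 * d)               # ∂ψ/∂t
lap = (psi(x + d, t) - 2 * psi(x, t) + psi(x - d, t)) / d**2  # ∂²ψ/∂x²
rhs = 1j * hbar / (2 * m) * lap

print(abs(lhs - rhs))   # ≈ 0, up to finite-difference error
```

Note how the factor i in front of the right-hand side is exactly what turns the second spatial derivative (−k²·ψ) into the time derivative (−iω·ψ): the rotation-operator reading of the imaginary unit mentioned above.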

We need to talk forces and fields now. This ring current model assumes an electromagnetic field which keeps the pointlike charge in its orbit. This centripetal force must be equal to the Lorentz force (F), which we can write in terms of the electric and magnetic field vectors E and B (fields are just forces per unit charge, so the two concepts are very intimately related):

F = q·(E + v×B) = q·(E + c×E/c) = q·(E + 1×E) = q·(E + j·E) = (1 + j)·q·E

We use a different imaginary unit here (j instead of i) because the plane in which the magnetic field vector B is going round and round is orthogonal to the plane in which E is going round and round, so let us call these planes the xy– and xz-planes, respectively. Of course, you will ask: why is the B-plane not the yz-plane? We might be mistaken, but the magnetic field vector lags the electric field vector, so it is either of the two, and you can now check for yourself whether what we wrote above is actually correct. Also note that we write 1 as a vector (1) or as a complex number: 1 = 1 + i·0. [As long as we think of these things as vectors – something with a magnitude and a direction – it is OK.]

You may be lost in the math already, so we should visualize this. Unfortunately, that is not easy. You may want to google for animations of circularly polarized electromagnetic waves, but these usually show the electric field vector only, and animations which show both E and B are usually of linearly polarized waves. Let me reproduce the simplest of images: imagine the electric field vector E going round and round. Now imagine the field vector B being orthogonal to it, but also going round and round (because its phase follows the phase of E). So, yes, it must be going around in the xz– or yz-plane (as mentioned above, we let you figure out how the various right-hand rules work together here).

Rotational plane of the electric field vector

You should now appreciate that the E and B vectors – taken together – will also form a plane. This plane is not static: it is not the xy-, yz– or xz-plane, nor is it some static combination of these. No! We cannot describe it with reference to our classical Cartesian axes, because it changes all the time as a result of the rotation of both the E and B vectors. So how can we describe that plane mathematically?

The Irish mathematician William Rowan Hamilton – who is also known for many other mathematical concepts – found a great way to do just that, and we will use his notation. We could say the plane formed by the E and B vectors is the EB plane but, in line with Hamilton’s quaternion algebra, we will refer to it as the k-plane. How is it related to what we referred to as the i– and j-planes, or the xy– and xz-plane as we used to say? At this point, we should introduce Hamilton’s notation: he did write i and j in boldface (we do not like that, but you may want to think of it as just a minor change in notation because we are using these imaginary units in a new mathematical space: the quaternion number space), and he referred to them as basic quaternions in what you should think of as an extension of the complex number system. More specifically, he wrote this on a now rather famous bridge in Dublin:

i² = −1

j² = −1

k² = −1

i·j = k

j·i = −k

The first three rules are the ones you know from complex-number math: two successive rotations by 90 degrees will bring you from 1 to −1. The order of multiplication in the other two rules (i·j = k and j·i = −k) gives us not only the k-plane but also the spin direction. All other rules in regard to quaternions (we can write, for example, i·j·k = −1, and you will find the other products in the Wikipedia article on quaternions) can be derived from these, but we will not go into them here.
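The bridge rules above fully determine the multiplication. A minimal sketch of the Hamilton product, representing a quaternion a + b·i + c·j + d·k as a 4-tuple, makes that concrete:

```python
# Hamilton product for quaternions (a, b, c, d) = a + b·i + c·j + d·k,
# implementing i² = j² = k² = i·j·k = −1.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0); j = (0, 0, 1, 0); k = (0, 0, 0, 1)

print(qmul(i, i))          # (-1, 0, 0, 0) → i² = −1
print(qmul(i, j))          # (0, 0, 0, 1)  → i·j = k
print(qmul(j, i))          # (0, 0, 0, -1) → j·i = −k
print(qmul(qmul(i, j), k)) # (-1, 0, 0, 0) → i·j·k = −1
```

The non-commutativity (i·j ≠ j·i) is exactly the feature we use below to encode the spin direction.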

Now, you will say, we do not really need that k, do we? Just distinguishing between i and j should do, right? The answer is yes when you are dealing with electromagnetic oscillations only, but no when you are trying to model nuclear oscillations! That is, in fact, exactly why we need this quaternion math in quantum physics!

Let us think about this nuclear oscillation. Particle physics experiments – especially high-energy physics experiments – effectively provide evidence for the presence of a nuclear force. To explain the proton radius, one can effectively think of a nuclear oscillation as an orbital oscillation in three rather than just two dimensions. The oscillation is, therefore, driven by two (perpendicular) forces rather than just one, with the frequency of each of the oscillators being equal to ω = E/2ħ = mc²/2ħ.

Each of the two perpendicular oscillations would, therefore, pack one half-unit of ħ only. The ω = E/2ħ formula also incorporates the energy equipartition theorem, according to which each of the two oscillations should pack half of the total energy of the nuclear particle (so that is the proton, in this case). This spherical view of a proton fits nicely with packing models for nucleons and yields the experimentally measured radius of a proton:

r_p = 4ħ/(m_p·c) ≈ 0.84 fm (proton radius formula)
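Assuming the proton radius formula referenced above is r_p = 4·ħ/(m_p·c), i.e. four times the proton’s reduced Compton radius, the numbers do work out to the measured charge radius (about 0.84 fm); a quick sketch:

```python
import math

# Assumed formula (sketch): r_p = 4·ħ/(m_p·c), i.e. four times the
# proton's reduced Compton radius, with the factor 4 discussed below.
h    = 6.626_070_15e-34        # Planck's constant, J·s (exact)
hbar = h / (2 * math.pi)       # reduced Planck constant, J·s
c    = 299_792_458             # speed of light, m/s (exact)
m_p  = 1.672_621_923_69e-27    # proton mass, kg (CODATA 2018)

r_p = 4 * hbar / (m_p * c)
print(f"{r_p * 1e15:.4f} fm")  # ~0.84 fm; measured charge radius ≈ 0.841 fm
```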

Of course, you can immediately see that the factor 4 is the same factor 4 as the one appearing in the formula for the surface area of a sphere (A = 4πr²), as opposed to that for the surface of a disc (A = πr²). And now you should be able to appreciate that we should probably represent a proton by a combination of two wavefunctions. Something like this:

Proton wavefunction

What about a wave equation for nuclear oscillations? Do we need one? We sure do. Perhaps we do not need one to model a neutron as some nuclear dance of a negative and a positive charge. Indeed, think of a combination of a proton and what we will refer to as a deep electron here, just to distinguish it from an electron in Schroedinger’s atomic electron orbitals. But we might need it when we are modeling something more complicated, such as the different energy states of, say, a deuteron nucleus, which combines a proton and a neutron and, therefore, two positive charges and one deep electron.

According to some, the deep electron may also appear in other energy states and may, therefore, give rise to a different kind of hydrogen (referred to as hydrinos). What do I think of those? I think these things do not exist and that, if they do, they cannot be stable. I also think these researchers need to come up with a wave equation for them in order to be credible and, in light of what we wrote about the complications in regard to the various rotational planes, that wave equation will probably have all of Hamilton’s basic quaternions in it. [So, as mentioned above, I am waiting for them to come up with something that makes sense and matches what we can actually observe in Nature: those hydrinos should have a specific spectrum, and we do not see such a spectrum from, say, the Sun, where there is so much going on that, if hydrinos exist, the Sun should produce them, right? So, yes, I am rather skeptical here: I do think we know everything now and physics, as a science, is sort of complete and, therefore, dead as a science: all that is left now is engineering!]

But, yes, quaternion algebra is a very necessary part of our toolkit. It completes our description of everything! 🙂

The physics of the wave equation

The rather high-brow discussions on deep electron orbitals and hydrinos with a separate set of interlocutors inspired me to write a paper at the K-12 level on wave equations. Too bad Schroedinger does not seem to have left any notes on how he got his wave equation (which I believe to be correct in every way – relativistically correct, too – unlike Dirac’s or others’).

The notes must be somewhere in some unexplored archive. If there are Holy Grails to be found in the history of physics, then these notes are surely one of them. There is a book about a mysterious woman, who might have inspired Schrödinger, but I have not read it, yet: it is on my to-read list. I will prioritize it (read: order it right now).

Oh – as for the math and physics of the wave equation, you should also check the Annex to the paper: I think the nuclear oscillation can only be captured by a wave equation when using quaternion math (an extension to complex math).

The Language of Physics

The meaning of life in 15 pages! Or… Well… At least a short description of the Universe… Not sure it helps in sense-making. 🙂

Post scriptum (25 March 2021): Because this post is so extremely short and happy, I want to add a sad anecdote which illustrates what I have come to regard as the sorry state of physics as a science.

A few days ago, an honest researcher put me in cc of an email to a much higher-brow researcher. I won’t reveal names, but the latter – I will call him X – works at a prestigious accelerator lab in the US. The gist of the email was a question on an article of X: “I am still looking at the classical model for the deep orbits. But I have been having trouble trying to determine if the centrifugal and spin-orbit potentials have the same relativistic correction as the Coulomb potential. I have also been having trouble with the Ademko/Vysotski derivation of the V_eff = V·E/mc² – V²/2mc² formula.”

I was greatly astonished to see X answer this: “Hello – What I know is that this term comes from the Bethe-Salpeter equation, which I am including (#1). The authors say in their book that this equation comes from Pauli’s theory of spin. Reading from Bethe-Salpeter’s book [Quantum mechanics of one and two electron atoms]: “If we disregard all but the first three members of this equation, we obtain the ordinary Schroedinger equation. The next three terms are peculiar to the relativistic Schroedinger theory.” They say that they derived this equation from the covariant Dirac equation, which I am also including (#2). They say that the last term in this equation is characteristic for the Dirac theory of spin-1/2 particles. I simplified the whole thing by choosing just the spin term, which is already used for hyperfine splitting of normal hydrogen lines. It is obviously an approximation, but it gave me hope to satisfy the virial theorem. Of course, now I know that using your V_eff potential does that also. That is all I know.” [I added the italics/bold in the quote.]

So I see this answer while browsing through my emails on my mobile phone, and I am disgusted – thinking: Seriously? You get to publish in high-brow journals, but so you do not understand the equations, and you just drop terms and pick the ones that suit you to make your theory fit what you want to find? And so I immediately reply to all, politely but firmly: “All I can say, is that I would not use equations which I do not fully understand. Dirac’s wave equation itself does not make much sense to me. I think Schroedinger’s original wave equation is relativistically correct. The 1/2 factor in it has nothing to do with the non-relativistic kinetic energy, but with the concept of effective mass and the fact that it models electron pairs (two electrons – neglect of spin). Andre Michaud referred to a variant of Schroedinger’s equation including spin factors.”

Now X replies this, also from his iPhone: “For me the argument was simple. I was desperate, trying to satisfy the virial theorem after I realized that the ordinary Coulomb potential will not do it. I decided to try the spin potential, which is in every undergraduate quantum-mechanical book, starting with Feynman or Tipler, to explain the hyperfine hydrogen splitting. They, however, evaluate it at large radius. I said: what happens if I evaluate it at small radius? And to my surprise, I could satisfy the virial theorem. None of this will be recognized as valid until one finds the small hydrogen experimentally. That is my main aim. To use theory only as approximate guidance. After it is found, there will be an explosion of “correct” theories.” A few hours later, he makes things even worse by adding: “I forgot to mention another motivation for the spin potential. I was hoping that a spin flip will create an equivalent to the famous “21 cm line” for normal hydrogen, which can then be used to detect the small hydrogen in astrophysics. Unfortunately, flipping spin makes it unstable in all potential configurations I tried so far.”

I have never come across a more blatant case of making a theory fit whatever you want to prove (apparently, X believes Mills’ hydrinos (hypothetical small hydrogen) are not a fraud), and it saddens me deeply. Of course, I do understand one will want to fiddle and modify equations when working on something, but you don’t do that when these things are going to get published by serious journals. Just goes to show how physicists effectively got lost in math, and how ‘peer reviews’ actually work: they don’t. :-/

A simple explanation of quantum-mechanical operators

I added an Annex to a paper that talks about all of the fancy stuff quantum physicists like to talk about, like scattering matrices and high-energy particle events. The Annex, however, is probably my simplest and shortest summary of the ordinariness of wavefunction math, including a quick overview of what quantum-mechanical operators actually are. It does not make use of state vector algebra or the usual high-brow talk about Hilbert spaces and what have you: you only need to know what a derivative is, and combine that with our realist interpretation of what the wavefunction actually represents.

I think I should do a paper on the language of physics: to show how (i) rotations (i, j, k), (ii) scalars (constants or just numerical values), (iii) vectors (real vectors (e.g. position vectors) and pseudovectors (e.g. angular frequency or momentum)), and (iv) operators (derivatives of the wavefunction with respect to time and the spatial directions) form ‘words’ (e.g. the energy and momentum operators), and how these ‘words’ then combine into meaningful statements (e.g. Schroedinger’s equation).

All of physics can then be summed up in half a page or so. 🙂

PS: You only get collapsing wavefunctions when adding uncertainty to the models (i.e. our own uncertainty about the energy and momentum). The ‘collapse’ of the wavefunction (let us be precise: the collapse of the dissipating wavepacket) thus corresponds to the ‘measurement’ operation. 🙂

PS2: Incidentally, the analysis also gives an even more intuitive explanation of Einstein’s mass-energy equivalence relation, which I summarize in a reply to one of the many ‘numerologist’ physicists on ResearchGate (copied below).

All of physics…

I just wrapped up my writings on physics (quantum physics) with a few annexes on the (complex) math of it, as well as a paper on how to model unstable particles and (high-energy) particle events. And then a friend of mine sent me this image of the insides of a cell. There is more of it where it came from.

Just admit it: it is truly amazing, isn’t it? I suddenly felt a huge sense of wonder – probably because of the gap between the simple logic of quantum physics and this incredibly complex molecular machinery.

I quote: “Seen are Golgi apparatus, mitochondria, endoplasmic reticulum, cell wall, and hundreds of protein structures and membrane-bound organelles. The cell structure is of a Eukaryote cell i.e. a multicellular organism which means it can correspond to the cell structure of humans, dogs, or even fungi and plants.” These images were apparently put together from “X-ray, nuclear magnetic resonance (NMR) and cryoelectron microscopy datasets.”

I think it is one of those moments where it feels great to be human. 🙂

The nature of antimatter (and dark matter too!)

The electromagnetic force has an asymmetry: the magnetic field lags the electric field. The phase shift is 90 degrees. We can use complex notation to write the E and B vectors as functions of each other. Indeed, the Lorentz force on a charge is equal to: F = qE + q(v×B). Hence, if we know the electric field E, then we know the magnetic field B: B is perpendicular to E, and its magnitude is 1/c times the magnitude of E. We may, therefore, write:

B = –iE/c

The minus sign in the B = –iE/c expression is there because we need to combine several conventions here. Of course, there is the classical (physical) right-hand rule for E and B, but we also need to combine the right-hand rule for the coordinate system with the convention that multiplication with the imaginary unit amounts to a counterclockwise rotation by 90 degrees. Hence, the minus sign is necessary for the consistency of the description. It ensures that we can associate the a·e^(iEt/ħ) and a·e^(–iEt/ħ) functions with left- and right-handed spin (angular momentum), respectively.

Now, we can easily imagine an antiforce: an electromagnetic antiforce would have a magnetic field which precedes the electric field by 90 degrees, and we can do the same for the nuclear force (EM and nuclear oscillations are 2D and 3D oscillations, respectively). It is just an application of Occam’s Razor principle: the mathematical possibilities in the description (notations and equations) must correspond to physical realities, and vice versa (one-to-one). Hence, to describe antimatter, all we have to do is to put a minus sign in front of the wavefunction. [Of course, we should also take the opposite of the charge(s) of its matter counterpart, and please note we have a possible plural here (charges) because we think of neutral particles (e.g. neutrons, or neutral mesons) as consisting of opposite charges.] This is just the principle which we already applied when working out the equation for the neutral antikaon (see Annex IV and V of the above-referenced paper):

Don’t worry if you do not understand too much of the equations: we just put them there to impress the professionals. 🙂 The point is this: matter and antimatter are each other’s opposite, literally: the wavefunctions a·e^(iEt/ħ) and –a·e^(iEt/ħ) add up to zero, and they correspond to opposite forces too! Of course, we also have light-particles, so we have antiphotons and antineutrinos too.

We think this explains the rather enormous amount of so-called dark matter and dark energy in the Universe (the Wikipedia article on dark matter says it accounts for about 85% of the total mass/energy of the Universe, while the article on the observable Universe puts it at about 95%!). We did not say much about this in our YouTube talk about the Universe, but we think we understand things now. Dark matter is called dark because it does not appear to interact with the electromagnetic field: it does not seem to absorb, reflect or emit electromagnetic radiation, and is, therefore, difficult to detect. That should not be a surprise: antiphotons would not be absorbed or emitted by ordinary matter. Only anti-atoms (i.e. think of an antihydrogen atom as an antiproton and a positron here) would do so.

So did we explain the mystery? We think so. 🙂

We will conclude with a final remark/question. The opposite spacetime signature of antimatter is, obviously, equivalent to a swap of the real and imaginary axes. This begs the question: can we, perhaps, dispense with the concept of charge altogether? Is geometry enough to understand everything? We are not quite sure how to answer this question but we do not think so: a positron is a positron, and an electron is an electron: the sign of the charge (positive and negative, respectively) is what distinguishes them! We also think charge is conserved, at the level of the charges themselves (see our paper on matter/antimatter pair production and annihilation).

We, therefore, think of charge as the essence of the Universe. But, yes, everything else is sheer geometry! 🙂

The End of Science?

There are two branches of physics. The nicer branch studies equilibrium states: simple laws, stable particles (electrons and protons, basically), the expanding (oscillating?) Universe, etcetera. This branch includes the study of dynamical systems which we can only describe in terms of probabilities or approximations: think of kinetic gas theory (thermodynamics) or, much simpler, hydrodynamics (the flow of water; Feynman, Vol. II, Chapters 40 and 41), about which Feynman writes this:

“The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.” (Feynman, I-3-7)

Still, we believe first principles do apply to the flow of water through a pipe. The second branch of physics, in contrast, studies non-stable particles: transients (charged kaons and pions, for example) or resonances (very short-lived intermediate energy states). The physicists who study these must be commended, but they resemble econometricians modeling input-output relations: if they are lucky, they will get some kind of mathematical description of what goes in and what comes out, but the math does not tell them how stuff actually happens. It leads one to think about the difference between a theory, a calculation and an explanation. Simplifying somewhat, we can represent such input-output relations by thinking of a process that operates on some state |ψ⟩ to produce some other state |ϕ⟩, which we write like this:

⟨ϕ|A|ψ⟩

A is referred to as a Hermitian matrix if the process is reversible. Reversibility looks like time reversal, which can be represented by taking the complex conjugate ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩: we put a minus sign in front of the imaginary unit, so we have –i instead of i in the wavefunctions (or i instead of –i with respect to the usual convention for denoting the direction of rotation). Processes may not be reversible, in which case we talk about symmetry-breaking: CPT-symmetry is always respected, so if T-symmetry (time) is broken, CP-symmetry is broken as well. There is nothing magical about that.
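The conjugate-transpose identity ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩ holds for any complex matrix A, which is easy to verify numerically. A minimal sketch with numpy (the states and the matrix are random, with no special structure assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex 'ket' states and a random complex matrix
phi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# <phi|A|psi> : the bra is the conjugate transpose of the ket,
# which np.vdot handles (it conjugates its first argument)
amp = np.vdot(phi, A @ psi)               # <phi|A|psi>
amp_dag = np.vdot(psi, A.conj().T @ phi)  # <psi|A-dagger|phi>

# The complex conjugate of the first equals the second
print(np.isclose(amp.conjugate(), amp_dag))
```

The identity holds whether or not A is Hermitian; what A = A† adds is that the amplitude ⟨ψ|A|ψ⟩ is always real.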

Physicists found the description of these input-output relations can be simplified greatly by introducing quarks (see Annex II of our paper on ontology and physics). Quarks have partial charge and, more generally, mix physical dimensions (mass/energy, spin or (angular) momentum). They create some order – think of it as some kind of taxonomy – in the vast zoo of (unstable) particles, which is great. However, we do not think there was a need to give them some kind of ontological status: unlike plants or insects, partial charges do not exist.

We also think the association between forces and (virtual) particles is misguided. Of course, one might say forces are mediated by particles (matter-particles or light-particles), because particles effectively pack energy and angular momentum, and force and energy are, therefore, transferred through particle reactions, elastically or non-elastically. [Light-particles (photons and neutrinos) differ from matter-particles (electrons, protons) in that they carry no charge, but they do carry electromagnetic and/or nuclear energy.] However, we think it is important to clearly separate the notions of fields and particles: they are governed by the same laws (conservation of charge, energy, (linear and angular) momentum and – last but not least – (physical) action), but their nature is very different.

W.E. Lamb (1995), nearing the end of his very distinguished scientific career, wrote about “a comedy of errors and historical accidents”, but we think the business is rather serious: we have reached the End of Science. We have solved Feynman’s U = 0 equation. All that is left is engineering: solving practical problems and inventing new stuff. That should be exciting enough. 🙂

Post scriptum: I added an Annex (III) to my paper on ontology and physics, with what we think of as a complete description of the Universe. It is abstruse but fun (we hope!): we basically add a description of events to Feynman’s U = 0 (un)worldliness formula. 🙂