The true mystery of quantum physics

In many of our papers, we presented the orbital motion of an electron around a nucleus or inside of a more complicated molecular structure[1], as well as the motion of the pointlike charge inside of an electron itself, as fundamental oscillations. You will say: what is fundamental and, conversely, what is not? These oscillations are fundamental in the sense that these motions are (1) perpetual or stable and (2) imply a quantization of space resulting from the Planck-Einstein relation.

Needless to say, this quantization of space looks very different depending on the situation: the order of magnitude of the radius of the orbital motion around a nucleus is about 150 times the electron’s Compton radius[2] so, yes, that is very different. However, the basic idea is always the same: a pointlike charge going round and round in a rather regular fashion (otherwise our idea of a cycle time (T = 1/f) and an orbital would not make any sense whatsoever), and that oscillation then packs a certain amount of energy as well as Planck’s quantum of action (h). In fact, that’s just what the Planck-Einstein relation embodies: E = h·f. Frequencies and, therefore, radii and velocities are very different: we think of the pointlike charge inside of an electron as whizzing around at lightspeed, while the order of magnitude of the velocity of an electron in an atomic or molecular orbital is given by the fine-structure constant: v = α·c/n (n is the principal quantum number, or the shell in the gross structure of an atom). However, the underlying equations of motion – as Dirac referred to them – are not fundamentally different.
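These orders of magnitude are easy to check numerically. The short sketch below (Python, with CODATA values typed in by hand, and variable names that are mine rather than anything from the papers referred to above) computes the electron’s Compton radius, the Bohr radius, their ratio (which should come out at 1/α ≈ 137), the orbital velocity v = α·c for n = 1, and the frequency the Planck-Einstein relation associates with the electron’s rest energy.

```python
# CODATA values (SI units), typed in by hand
hbar  = 1.054571817e-34   # reduced Planck constant (J*s)
h     = 6.62607015e-34    # Planck constant (J*s)
c     = 2.99792458e8      # speed of light (m/s)
m_e   = 9.1093837015e-31  # electron rest mass (kg)
alpha = 7.2973525693e-3   # fine-structure constant (dimensionless)

a_compton = hbar / (m_e * c)   # Compton radius of the electron: ~3.86e-13 m
a_bohr    = a_compton / alpha  # Bohr radius: ~5.29e-11 m

print(a_bohr / a_compton)      # ~137: the 1/alpha ratio mentioned in footnote [2]

v_orbital = alpha * c          # orbital velocity for n = 1: ~2.19e6 m/s (about c/137)
f_rest    = m_e * c**2 / h     # Planck-Einstein frequency of the electron: ~1.24e20 Hz

print(v_orbital, f_rest)
```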

We can look at these oscillations in two very different ways. Most Zitterbewegung theorists (or realist thinkers, I might say) think of this motion as a self-perpetuating current in an electromagnetic field. David Hestenes is probably the best-known theorist in this class. However, we feel such a view does not satisfactorily answer the quintessential question: what keeps the charge in its orbit? We, therefore, preferred to stick with an alternative model, which we loosely refer to as the oscillator model.

However, truth be told, we are aware this model comes with its own interpretational issues. Indeed, our interpretation of this oscillator model oscillated between the metaphor of a classical (non-relativistic) two-dimensional oscillator (think of a Ducati V2 engine, with the two pistons working in tandem at a 90-degree angle) and the mathematically correct analysis of a (one-dimensional) relativistic oscillator, which we may sum up in the following relativistically correct energy conservation law:

dE/dt = d[kx²/2 + mc²]/dt = 0

More recently, we noted the number of dimensions (think of the number of pistons of an engine) should actually not matter at all: an old-fashioned radial airplane engine has 3, 5, 7, or more cylinders (the odd number has to do with the firing mechanism for four-stroke engines), but the interplay between those pistons can be analyzed just as well as the ‘sloshing back and forth’ of kinetic and potential energy in a dynamic system (see our paper on the meaning of uncertainty and the geometry of the wavefunction). Hence, it seems any number of springs or pistons working together would do the trick: somehow, linear motion becomes circular motion, and vice versa. So what number of dimensions should we use for our metaphor, really?

We now think the ‘one-dimensional’ relativistic oscillator is the correct mathematical analysis, but we should interpret it more carefully. Look at the dE/dt = d[kx²/2 + mc²]/dt = d(PE + KE)/dt = 0 equation once more.
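For completeness, here is a one-line check that the equation above is indeed relativistically correct. It is just a sketch using standard relativistic mechanics: the work-energy theorem d(γm₀c²)/dt = F·v (with F = dp/dt) and a linear restoring force F = −kx.

```latex
\frac{dE}{dt}
= \frac{d}{dt}\left[\frac{kx^2}{2} + \gamma m_0 c^2\right]
= kx\,\frac{dx}{dt} + F\,v
= kx\,v + (-kx)\,v
= 0
```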

For the potential energy, one gets the same kx²/2 formula one gets for the non-relativistic oscillator. That is no surprise: potential energy depends on position only, not on velocity, and there is nothing relative about position. However, the (½)m₀v² term that we would get when using the non-relativistic formulation of Newton’s law is now replaced by the mc² = γm₀c² term. Both energies vary – with position and with velocity, respectively – but the equation above tells us their sum is some constant. Equating x to 0 (when the velocity v = c) gives us the total energy of the system: E = mc². Just as it should be. 🙂 So how can we now reconcile these two models? One is two-dimensional but non-relativistic, and the other is relativistically correct but one-dimensional only. We always get this weird 1/2 factor! And we cannot think it away, so what is it, really?

We still don’t have a definite answer, but we think we may be closer to the conceptual locus where these two models might meet: the key is to interpret the x and v in the equation for the relativistic oscillator as (1) the distance traveled along an orbital and (2) the tangential velocity of the pointlike charge along that orbital, respectively.

Huh? Yes. Read everything slowly and you might see the point. [If not, don’t worry about it too much. This is really a minor (but important) point in my so-called realist interpretation of quantum mechanics.]

If you get the point, you’ll immediately cry foul and say such an interpretation of x as a distance measured along some orbital (as opposed to the linear concept we are used to) and, consequently, thinking of v as some kind of tangential velocity along such an orbital, looks pretty random. However, keep thinking about it, and you will have to admit it is a rather logical way out of the paradox. The formula for the relativistic oscillator assumes a pointlike charge with zero rest mass oscillating between v = 0 and v = c. However, something with zero rest mass will always be associated with some velocity: that velocity cannot be zero! Think of a photon here: how would you slow it down? And you may think we could, perhaps, slow down a pointlike electric charge with zero rest mass in some electromagnetic field but, no! The slightest force on it would give it infinite acceleration according to Newton’s force law. [Admittedly, we would need to distinguish here between its relativistic expression (F = dp/dt) and its non-relativistic expression (F = m₀·a) when further dissecting this statement, but you get the idea. Also note that we are discussing our electron here, in which we do have a zero-rest-mass charge. In an atomic or molecular orbital, we are talking about an electron with a non-zero rest mass: just the mass of the electron whizzing around at a (significant) fraction (α) of lightspeed.]

Hence, it is actually quite rational to argue that the relativistic oscillator cannot be linear: the velocity must be some tangential velocity, always, and – for a pointlike charge with zero rest mass – it must equal lightspeed, always. So, yes, we think this line of reasoning might well be the conceptual locus where the one-dimensional relativistic oscillator (E = m·a²·ω²) and the two-dimensional non-relativistic oscillator (E = 2·m·a²·ω²/2 = m·a²·ω²) could meet. Of course, we welcome the views of any reader here! In fact, if there is a true mystery in quantum physics (we do not think so, but we know people – academics included – like mysterious things), then it is here!
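A quick numerical sanity check on those two energy formulas may help here: plugging the electron’s Compton radius a = ħ/(m·c) and the Zitterbewegung frequency ω = c/a into them gives back E = mc², for the one-dimensional as well as for the two-dimensional expression. The sketch below (Python, CODATA values typed in by hand) just verifies the arithmetic; it proves nothing about the interpretation itself, of course.

```python
hbar = 1.054571817e-34   # reduced Planck constant (J*s)
c    = 2.99792458e8      # speed of light (m/s)
m_e  = 9.1093837015e-31  # electron rest mass (kg)

a     = hbar / (m_e * c)   # Compton radius: ~3.86e-13 m
omega = c / a              # Zitterbewegung (angular) frequency: ~7.76e20 rad/s

E_1D = m_e * a**2 * omega**2              # one-dimensional relativistic oscillator: m*a^2*omega^2
E_2D = 2 * (0.5 * m_e * a**2 * omega**2)  # two half-energies of the two-dimensional oscillator

print(E_1D, E_2D, m_e * c**2)  # all three print the same ~8.19e-14 J (about 0.511 MeV)
```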

Post scriptum: This is, perhaps, a good place to answer a question I sometimes get: what is so natural about relativity and a constant speed of light? It is not so easy, perhaps, to show why and how Lorentz’ transformation formulas make sense but, in contrast, it is fairly easy to think of the absolute speed of light like this: infinite speeds do not make sense, physically as well as mathematically. From a physics point of view, the issue is this: something that moves about at an infinite speed is everywhere and, therefore, nowhere. So it doesn’t make sense. Mathematically speaking, you should not think of v reaching infinity but of the limit of a ratio in which the distance interval goes to infinity while the time interval goes to zero. So, in the limit, we get a division of an infinite quantity by 0. That’s not infinity: it is totally undefined! Indeed, mathematicians can easily deal with infinity and zero, but divisions like zero divided by zero, or infinity divided by zero, are meaningless. [Of course, we may have different mathematical functions in the numerator and denominator whose limits yield those values. There is then a reasonable chance we will be able to factor stuff out so as to get something else. We refer to such situations as indeterminate forms, but these are not what we refer to here. The informed reader will, perhaps, also note the division of infinity by zero does not figure in the list of indeterminate forms, but any division by zero is generally considered to be undefined.]


[1] It may be an extra electron such as, for example, the electron which jumps from place to place in a semiconductor (see our quantum-mechanical analysis of electric currents). Also, as Dirac first noted, the analysis is also valid for electron holes, in which case our atom or molecule will be positively ionized instead of being neutral or negatively charged.

[2] We say 150 because that is close enough to the 1/α ≈ 137 factor that relates the Bohr radius to the Compton radius of an electron. The reader may not be familiar with the idea of a Compton radius (as opposed to the Compton wavelength), but we refer him or her to our Zitterbewegung (ring current) model of an electron.

Complex beauty

This is an image from the web: it was taken by Gerald Brown in 2007—a sunset at Knysna, South Africa. I love the colors and magic.

[Image: Knysna sunset, South Africa (Gerald Brown, 2007), from Wikipedia]

It is used in a Wikipedia article on Mie scattering, which summarizes the physics behind it as follows:

“The change of sky colour at sunset (red nearest the sun, blue furthest away) is caused by Rayleigh scattering by atmospheric gas particles, which are much smaller than the wavelengths of visible light. The grey/white colour of the clouds is caused by Mie scattering by water droplets, which are of a comparable size to the wavelengths of visible light.”

I find it amazing how such a simple explanation can elucidate such fascinating complexity and beauty. Stuff like this triggered my interest in physics as a child. I am 50+ now. My explanations are more precise now: I understand what Rayleigh and/or Mie scattering of light actually is.

The best thing about that is that it does not reduce the wonder. On the contrary. It is like the intricate colors and patterns of the human eye. Knowing that it is just “the pigmentation of the eye’s iris and the frequency-dependence of the scattering of light by the turbid medium in the stroma of the iris” (I am quoting from Wikipedia once more) does not make our eyes any less beautiful, does it?

I actually do not understand why mainstream quantum physicists try to mystify things. I find reality mysterious enough already.

Where is the Paul Krugman of physics?

We were so busy deconstructing myths in the QED sector of physics that we hardly had time for the QCD sector—high-energy physics. In any case, we do not think quantum field theory or the quark hypothesis have much explanatory power, so it is not like I feel I missed anything. What Paul Dirac wrote about the sorry state of physics back in 1958 still rings very true today:

“Quantum mechanics may be defined as the application of equations of motion to particles. […] The domain of applicability of the theory is mainly the treatment of electrons and other charged particles interacting with the electromagnetic field—a domain which includes most of low-energy physics and chemistry. Now there are other kinds of interactions, which are revealed in high-energy physics and are important for the description of atomic nuclei. These interactions are not at present sufficiently well understood to be incorporated into a system of equations of motion. Theories of them have been set up and much developed and useful results obtained from them. But in the absence of equations of motion these theories cannot be presented as a logical development of the principles set up in this book. We are effectively in the pre-Bohr era with regard to these other interactions. It is to be hoped that with increasing knowledge a way will eventually be found for adapting the high-energy theories into a scheme based on equations of motion, and so unifying them with those of low-energy physics.” (Paul A.M. Dirac, The Principles of Quantum Mechanics, 4th edition (1958), p. 312)

Dirac did not find it necessary to change these words in his lifetime. Moreover, as he got older – he left Earth in 1984 – he became increasingly vocal about the New Nonsense which came out of various Departments of Theoretical Physics after WW II. His skepticism was grounded both in logic and in what has become increasingly rare among academics: a deep understanding of the physicality of whatever it is that physicists are studying. He must have felt pretty lonely, for example, when stating this on the occasion of one of his retirement talks:

“Most physicists are very satisfied with the situation. They say: “Quantum electrodynamics is a good theory, and we do not have to worry about it anymore.” I must say that I am very dissatisfied with the situation because this so-called ‘good theory’ [Dirac refers to perturbation and renormalization theory here] involves neglecting infinities. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small—not neglecting it just because it is infinitely great and you do not want it!”

He was over 70 then, but still very active: the Wikipedia article on him notes that, at that age, he was still walking about a mile each day, and swimming in nearby lakes! However, the same article also notes that “his refusal to accept renormalization resulted in his work on the subject moving increasingly out of the mainstream”, so nobody cares about his later writings. Too bad. I think Dirac died rather unhappily because of that, but at least he did not go crazy or, worse, commit suicide—like Paul Ehrenfest.

What Dirac refers to as the ‘pre-Bohr era’ in physics – so that is now – resembles the alchemical period in chemistry, which is usually defined as the era that preceded any fundamental understanding of chemical processes. Indeed, alchemy’s Elixir Vitae, the Grand Magisterium, and the Red Tincture have been replaced by their modern-day equivalents: quarks and gluons, W/Z bosons and various other condensates, including the Higgs field. All utter nonsense! We have found only one fundamental ‘condensate’ in Nature so far, and that is the electric charge. Indeed, the elementary charge presents itself in very different shapes and densities depending on whether we look at it as being the essence of an electron, a muon, a proton or as a constituent of a neutron. This remarkable malleability of the elementary charge, both as a physical quantity and as a logical concept, is the one and only true mystery in quantum physics for me.

Can we analyze it any further going forward? Maybe. Maybe not. The dynamics of pair creation and annihilation for electrons and positrons suggest our interpretation of antimatter as having an opposite spacetime signature makes sense, but proton-antiproton annihilation and – perhaps the most interesting process in high-energy physics – antiproton-neutron annihilation show the idea of a strong force is and remains very relevant. Mainstream physics just went into a blind alley with the quark hypothesis to try to understand it. We should reverse gear and get out of that. Antiproton-neutron annihilation proves a neutron consists of a proton and an electron because, while we do observe proton-antineutron annihilation, we do not observe proton-neutron annihilation—and then there are a zillion other arguments in favor of explaining a neutron as a composite particle consisting of a proton and an electron: think of their mass (see the quick check below), magnetic moment, the way a neutron disintegrates outside of a nucleus, proton-neutron interactions, etcetera. We have analyzed this elsewhere, so we cannot dwell on it here.
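For the mass argument, at least, the numbers are easy to check: the neutron’s rest energy exceeds the combined rest energies of a proton and an electron by about 0.78 MeV, which is, in fact, the energy that is released when a free neutron decays into a proton, an electron and an antineutrino. A minimal check (Python, with the CODATA rest energies typed in by hand):

```python
# Rest energies in MeV (CODATA values)
E_neutron  = 939.56542
E_proton   = 938.27209
E_electron = 0.51100

# How much heavier is the neutron than a proton plus an electron?
delta = E_neutron - (E_proton + E_electron)
print(delta)  # ~0.78 MeV: the energy released in free-neutron (beta) decay
```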

So where are we? I cannot answer that question on your behalf. I can only say where I might be. I spent about ten years on what I think of as a sensible theory of quantum electrodynamics, and I am happy about that. So that’s done, and now I should move on to something new. However, I am not so motivated to go beyond electrodynamics and dig into the math one needs to analyze a force whose structure is totally unknown. Why not? Because it is like moving from linear to non-linear models in math and statistics, or like moving from analyzing the dynamics of equilibrium states to non-equilibrium processes. If you have ever done one of the two (I did, as an econometrician, a long time ago), you will know that is a huge effort. But, of course, it is never about the effort: it is about the potential rewards. The investment might be worthwhile if the reward were great.

Unfortunately, that is not the case: what do we gain by being able to explain what might or might not be going on at the femtometer scale? The ring current model explains the Compton radius, mass, spin and magnetic moment of electrons, protons and neutrons – and everything above that scale (call it the picometer scale) – perfectly well. Do we need to go beyond? We are, effectively, entering what Gerard ‘t Hooft referred to as the Great Desert. Indeed, I have always said we should not be discouraged by physicists who say we cannot possibly imagine the smallest of the smallest scales and should, therefore, meekly accept concepts like hidden or rolled-up dimensions without protest. However, I do have to admit that trying to imagine how one can possibly pack something like 400,000 TeV into some femtometer-scale space may effectively be beyond my intellectual capabilities.

The proposition is outright unattractive because the complexity here is inversely proportional to the number of clues that we have. So I do not think we will get out of this alchemical period of high-energy physics any time soon. Mankind has constructed colliders whose power we can double or triple over the next decades, perhaps, but those experiments still amount to studying the mechanics of a Swiss precision watch by repeatedly smashing it and studying the fragments. I think there are serious limitations to what we can learn from that. Oliver Consa is right in saying that more ‘US$ 600m budgets to continue the game’ will not solve the problem: “Maybe it is time to consider alternative proposals. Winter is coming.”

I am not so skeptical. I think summer is coming: after 100 years, this Bohr-Heisenberg revolution (or Diktatur, we should say, perhaps) is finally running out of steam. Why? Not because it failed to convince academics: that, it did all too well—some Nobel Prizes have obviously been awarded a bit prematurely. It is running out of steam because it failed to convince the masses: they are smarter and think for themselves now. It is e-democracy now, and the e does not stand for elite. Summer may not come with a lot of answers—but summers usually do come with a lot of fun. 🙂

Economics was once described as the dismal science because it defended the wrong things. I am an economist myself, and I credit Paul Krugman with turning that perception around, almost single-handedly. Paul Krugman is extremely smart. More importantly, he had the guts. Who is going to be the Paul Krugman of physics? If we do not find him or her soon, the science of physics may evolve from a rotten state – hopelessly Lost in Math, as Hossenfelder writes – to something that is far worse: the obsolete state. The good thing about that is that the academics are leaving the space wide open for people like me: spacetime rebels and philosophers. 🙂

Post scriptum: In case you wonder where the 400,000 TeV and femtometer references come from, I am effectively referring to a delightful popular book which was written by Gerard ‘t Hooft—a few years before he and his PhD thesis advisor (Martinus Veltman) got the 1999 Nobel Prize in Physics “for elucidating the quantum structure of electroweak interactions in physics”: In Search of the Ultimate Building Blocks (1996). Based on the quantum field theories he contributed so much to, he finds things start to go crazy when trying to imagine frequencies of 10³² Hz or higher.

We think things get crazy much sooner. The Planck-Einstein relation tells us that a frequency of 10³² Hz corresponds to an energy of E = h·f ≈ (4×10⁻¹⁵ eV·s)·(10³² Hz) = 4×10¹⁷ eV = 400,000 tera-electronvolt (1 TeV = 10¹² eV). A photon with such energy would actually be much smaller than a femtometer (10⁻¹⁵ m): λ = c/f ≈ 3×10⁻²⁴ m. This, of course, assumes a one-cycle photon model—but such a model is quite straightforward and logical. In-between the femtometer scale (10⁻¹⁵ m) and that 10⁻²⁴ m scale, we are missing a factor of about 1,000,000,000, so the question is actually this: how can one pack 400,000 TeV (think of the energy of 400,000,000 protons here) into something that is 1,000,000,000 times smaller than a proton?
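The arithmetic above is easy to redo. The sketch below (Python) re-derives the 400,000 TeV figure and the ~10⁻²⁴ m wavelength from f = 10³² Hz, under the one-cycle photon assumption (λ = c/f) mentioned above, and also counts how many proton rest energies that amount of energy represents.

```python
h_eV = 4.135667696e-15   # Planck constant in eV*s
c    = 2.99792458e8      # speed of light (m/s)

f = 1e32                 # Hz: the frequency 't Hooft mentions

E_eV  = h_eV * f         # ~4.1e17 eV
E_TeV = E_eV / 1e12      # ~414,000 TeV: the ~400,000 TeV quoted above

lam = c / f              # ~3e-24 m: wavelength of a one-cycle photon at that frequency

n_protons = E_eV / 0.938e9   # number of proton rest energies (~0.938 GeV each): ~4.4e8

print(E_TeV, lam, n_protons)
```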

Of course, you might say that we should probably not use a photon-like model of a particle here: why would we assume all of the energy should be in some (linear) electromagnetic oscillation? If there is another force – a force other than the electromagnetic one – then it will have a very different structure, right? Right. And so that should solve the problem, right? No. Not right. That makes things even worse: some non-linear multi-dimensional force or potential allows one to pack even more energy into much smaller spaces! So then things start looking crazy at much larger scales than that 10⁻²⁴ m scale. When calculating the (strong) force inside of a proton using a two-dimensional oscillator model, for example, you get a number that is equal to 850,000 newton (N), more or less. For an electron, we get a much more modest value (0.115 N, to be precise), but it is still enormous at these small scales.
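We cannot reproduce the oscillator calculations themselves in a blog post, but a crude dimensional estimate already lands in the same ballpark. The sketch below (Python) simply divides the rest energy by the relevant radius, F ~ E/a; that F ~ E/a shorthand is ours and is not the exact two-dimensional oscillator formula behind the 850,000 N and 0.115 N figures, so it should only be expected to match them in order of magnitude.

```python
c    = 2.99792458e8        # speed of light (m/s)
hbar = 1.054571817e-34     # reduced Planck constant (J*s)
m_e  = 9.1093837015e-31    # electron rest mass (kg)
m_p  = 1.67262192e-27      # proton rest mass (kg)

# Electron: rest energy divided by its Compton radius
a_e = hbar / (m_e * c)         # ~3.86e-13 m
F_e = (m_e * c**2) / a_e       # ~0.21 N: same order of magnitude as the 0.115 N quoted above

# Proton: rest energy divided by its charge radius (~0.84 fm)
a_p = 0.84e-15                 # m
F_p = (m_p * c**2) / a_p       # ~1.8e5 N: same order of magnitude as the 850,000 N quoted above

print(F_e, F_p)
```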

We should wrap up here—this is just a blog post, and so we cannot go on for too long. Just note it is already quite something to explain how a proton, whose radius is about 0.83–0.84 fm, can pack its energy—which is less than one GeV. We can do it, but calculations for smaller but more massive stuff (particles that would be smaller than protons, in other words) quickly give very weird results: mass or energy densities quickly approach those of black holes, in fact! Now, your imagination may be more flexible than mine, but I am not ready yet to think we all consist of black holes separated by nothingness.

Now you will ask: “So what is your theory then? And what is its comparative advantage over other theories?” It is rather simple: I think we consist of protons, neutrons and electrons separated by electromagnetic fields, and the main advantage of my theory is that I can explain what protons, neutrons and electrons actually are. One cannot say the same of quarks, gluons, W/Z bosons, Higgs particles and other New Nonsense concepts. Someone with more credentials than me wrote this a while ago:

“The state of the classical electromagnetic theory reminds one of a house under construction that was abandoned by its workmen upon receiving news of an approaching plague. The plague was in this case, of course, quantum theory.” (Doris Teplitz, Electromagnetism: Paths to Research, 1982)

I’d rather help finish the house than entertain New Nonsense.