Explaining the proton mass and radius

Our alternative realist interpretation of quantum physics is fairly complete, but one thing has kept puzzling us: the mass density of the proton. Why is it so massive as compared to an electron? We simplified things by adding a factor in the Planck-Einstein relation. To be precise, we wrote it as E = 4·h·f. This allowed us to derive the proton radius from the ring current model:

proton radius

This felt a bit artificial. Writing the Planck-Einstein relation using an integer multiple of h or ħ (E = n·h·f = n·ħ·ω) is not uncommon. You should have encountered this relation when studying the black-body problem, for example, and it is also commonly used in the context of Bohr orbitals of electrons. But why would n be equal to 4 here? Why not 2, or 3, or 5, or some other integer? We do not know: all we know is that the proton is very different. A proton is, effectively, not the antimatter counterpart of an electron—a positron. While the proton is much smaller – 459 times smaller, to be precise – its mass is 1,836 times that of the electron. Note that we have the same 1/4 factor here because the mass and the Compton radius are inversely proportional:

ratii
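For those who want to check the numbers, here is a quick back-of-the-envelope calculation. The constants are standard CODATA values; the script merely illustrates the formulas above and is not part of the derivation itself:

```python
import math

# Standard CODATA values (not from the post itself)
hbar = 1.054571817e-34    # reduced Planck constant (J·s)
c    = 2.99792458e8       # speed of light (m/s)
m_e  = 9.1093837015e-31   # electron mass (kg)
m_p  = 1.67262192369e-27  # proton mass (kg)

# Electron Compton radius in the ring current model: a = ħ/(m·c)
a_e = hbar / (m_e * c)

# Proton radius with the E = 4·ħ·ω form factor: r_p = 4·ħ/(m_p·c)
r_p = 4 * hbar / (m_p * c)

print(a_e)        # ≈ 3.86e-13 m
print(r_p)        # ≈ 8.41e-16 m — close to the measured 0.84 fm
print(a_e / r_p)  # ≈ 459
print(m_p / m_e)  # ≈ 1836 = 4 × 459
```

The factor-of-4 bookkeeping is visible in the last two lines: the radius ratio (459) times the form factor (4) gives the mass ratio (1,836).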

This doesn’t look all that bad, but it feels artificial. In addition, our reasoning involved an unexplained difference – a mysterious but exact SQRT(2) factor, to be precise – between the theoretical and the experimentally measured magnetic moment of a proton. In short, we assumed some form factor must explain both the extraordinary mass density as well as this SQRT(2) factor, but we were not quite able to pin it down exactly. A remark on a video on our YouTube channel inspired us to think some more – thank you for that, Andy! – and we think we may have the answer now.

We now think the mass – or energy – of a proton combines two oscillations: one is the Zitterbewegung oscillation of the pointlike charge (which is a circular oscillation in a plane) while the other is the oscillation of the plane itself. The illustration below is a bit horrendous (I am not so good at drawings) but might help you to get the point. The plane of the Zitterbewegung (the plane of the proton ring current, in other words) may itself oscillate between +90 and −90 degrees. If so, the effective magnetic moment will differ from the theoretical magnetic moment we calculated—and it will differ by exactly that SQRT(2) factor.

Proton oscillation
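A quick numerical sanity check (not a derivation!): if we assume the tilt angle θ of the plane is uniformly distributed between −90° and +90°, the component of the magnetic moment along the reference axis goes as cos(θ), and its root-mean-square value works out to exactly 1/√2:

```python
import math

# Midpoint-rule average of cos²(θ) over θ ∈ [−π/2, +π/2],
# assuming a uniform distribution of the tilt angle.
N = 100_000
total = 0.0
for k in range(N):
    theta = -math.pi / 2 + math.pi * (k + 0.5) / N
    total += math.cos(theta) ** 2
rms = math.sqrt(total / N)

print(rms)               # ≈ 0.7071067...
print(1 / math.sqrt(2))  # 0.7071067... — the SQRT(2) factor
```

The uniform-distribution assumption is mine, for illustration only; the paper works out the kinematics properly.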

Hence, we should rewrite our paper, but the logic remains the same: we just have a much better explanation now of why we should apply the energy equipartition theorem.

Mystery solved! 🙂

Post scriptum (9 August 2020): The solution is not as simple as you may imagine. When adding the idea of some other motion to the ring current, we must remember that the speed of light – the presumed tangential speed of our pointlike charge – cannot change. Hence, the radius must become smaller. We also need to think about distinguishing two different frequencies, and things quickly become quite complicated.

Feynman’s religion

Perhaps I should have titled this post differently: the physicist’s worldview. We may, effectively, assume that Richard Feynman’s Lectures on Physics represent mainstream sentiment, and he does get into philosophy—more or less liberally depending on the topic. Hence, yes, Feynman’s worldview is pretty much that of most physicists, I would think. So what is it? One of his more succinct statements is this:

“Often, people in some unjustified fear of physics say you cannot write an equation for life. Well, perhaps we can. As a matter of fact, we very possibly already have an equation to a sufficient approximation when we write the equation of quantum mechanics.” (Feynman’s Lectures, p. II-41-11)

He then jots down the equation that Schrödinger has on his grave (shown below). It is a differential equation: it relates the wavefunction (ψ) to its time derivative through the Hamiltonian coefficients that describe how physical states change with time (Hij), the imaginary unit (i), and Planck’s quantum of action (ħ).

hl_alpb_3453_ptplr
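For reference, in the matrix notation Feynman himself uses (with the Ci denoting the amplitudes to be in base state i), the equation reads:

```latex
i\hbar \frac{dC_i}{dt} = \sum_j H_{ij}\, C_j
```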

Feynman, and all modern academic physicists in his wake, claim this equation cannot be understood. I don’t agree: the explanation is not easy, and requires quite some prerequisites, but it is no more difficult than, say, trying to understand Maxwell’s equations, or the Planck-Einstein relation (E = ħ·ω = h·f).

In fact, a good understanding of both allows you to not only understand Schrödinger’s equation but all of quantum physics. The basics are this: the presence of the imaginary unit tells us the wavefunction is cyclical, and that it is an oscillation in two dimensions. The presence of Planck’s quantum of action in this equation tells us that such oscillation comes in units of ħ. Schrödinger’s wave equation as a whole is, therefore, nothing but a succinct representation of the energy conservation principle. Hence, we can understand it.

At the same time, we cannot, of course. We can only grasp it to some extent. Indeed, Feynman concludes his philosophical remarks as follows:

“The next great era of awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today we cannot. Today we cannot see that the water flow equations contain such things as the barber pole structure of turbulence that one sees between rotating cylinders. We cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.” (Feynman’s Lectures, p. II-41-12)

I think that puts the matter to rest—for the time being, at least. 🙂

Complex beauty

This is an image from the web: it was taken by Gerald Brown in 2007—a sunset at Knysna, South Africa. I love the colors and magic.

800px-Knysnasunset

It is used in a Wikipedia article on Mie scattering, which summarizes the physics behind it as follows:

“The change of sky colour at sunset (red nearest the sun, blue furthest away) is caused by Rayleigh scattering by atmospheric gas particles, which are much smaller than the wavelengths of visible light. The grey/white colour of the clouds is caused by Mie scattering by water droplets, which are of a comparable size to the wavelengths of visible light.”
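The Rayleigh part of that explanation boils down to a simple power law: the scattered intensity goes as 1/λ⁴. A two-line illustration (the wavelengths are just typical values for blue and red light):

```python
# Rayleigh scattering intensity scales as 1/λ⁴, so shorter (blue)
# wavelengths are scattered much more strongly than longer (red) ones.
lambda_blue = 450e-9  # m — typical blue light
lambda_red  = 700e-9  # m — typical red light

ratio = (lambda_red / lambda_blue) ** 4
print(ratio)  # ≈ 5.86: blue light is scattered almost six times more than red
```

That factor of ~6 is why the sky is blue overhead, and why the light that reaches you directly from a setting sun – stripped of its blue – looks red.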

I find it amazing how such a simple explanation can elucidate such fascinating complexity and beauty. Stuff like this triggered my interest in physics as a child. I am 50+ now, and my explanations have become more precise: I now understand what Rayleigh and Mie scattering of light actually are.

The best thing about that is that it does not reduce the wonder. On the contrary. It is like the intricate colors and pattern of the human eye. Knowing that it is just “the pigmentation of the eye’s iris and the frequency-dependence of the scattering of light by the turbid medium in the stroma of the iris” (I am quoting from Wikipedia once more) does not make our eyes any less beautiful, does it?

I actually do not understand why mainstream quantum physicists try to mystify things. I find reality mysterious enough already.

The nature of the matter-wave

Yesterday, I was supposed to talk for about 30 minutes to some students who are looking at classical electron models as part of an attempt to model what might be happening to an electron moving through a magnetic field. Of course, I only had time to discuss the ring current model, and even then it inadvertently turned into a two-hour presentation. Fortunately, they were polite and no one dropped out—although it was an online Google Meet. In fact, they reacted quite enthusiastically, and so we all enjoyed it a lot. So much so that I adjusted the presentation a bit the next morning (which, unfortunately, added even more time to it) so as to add it to my YouTube channel. So this is the link to it, and I hope you enjoy it. If so, please like it—and share it! 🙂

Oh! I forgot to mention: in case you wonder why this video is different from the others, see my Tweet on Sean Carroll’s latest series of videos below. That should explain it.

Sean Carroll

Post scriptum: Of course, I got the usual question from one of the students: if an electron is a ring current, then why doesn’t it radiate its energy away? The easy answer is: an electron is an electron and so it doesn’t—for the same reason that an electron in an atomic orbital or a Cooper pair in a superconducting loop of current does not radiate energy away. The more difficult answer is a bit mysterious: it has got to do with flux quantization and, most importantly, with the Planck-Einstein relation. I will not be too long here (I cannot because this is just a footnote to a blog post) but the following elements should be noted:

1. The Planck-Einstein law embodies a (stable) wavicle: a wavicle respects the Planck-Einstein relation (E = h·f) as well as Einstein’s mass-energy equivalence relation (E = mc²). A wavicle will, therefore, carry energy, but it will also pack one or more units of Planck’s quantum of action. Both the energy and this finite amount of physical action (Wirkung in German) will be conserved—cycle after cycle.

2. Hence, equilibrium states should be thought of as electromagnetic oscillations without friction. Indeed, it is the frictional element that explains the radiation of, say, an electron going up and down in an antenna and radiating some electromagnetic signal out. To add to this rather intuitive explanation, I should also remind you that it is the accelerations and decelerations of the electric charge in an antenna that generate the radio wave—not the motion as such. So one should, perhaps, think of a charge going round and round as moving in a straight line—along some geodesic in its own space. That is the metaphor, at least.

3. Technically, one needs to think in terms of quantized fluxes and Poynting vectors and energy transfers from kinetic to potential (and back) and from ‘electric’ to ‘magnetic’ (and back). In short, the electron really is an electromagnetic perpetuum mobile! I know that sounds mystical (too), but then I never promised I would take all of the mystery away from quantum physics! 🙂 If there were no mystery left, I would not be interested in physics.

Amplitudes and probabilities

The most common question you ask yourself when studying quantum physics is this: what are those amplitudes, and why do we have to square them to get probabilities?

It is a question which cannot easily be answered because it depends on what we are modeling: a two-state system, electron orbitals, something else? And if it is a two-state system, is it a laser, some oscillation between two polarization states, or what? These are all very different physical systems, and what an amplitude actually is in that context will, therefore, also be quite different. Hence, professors usually just avoid the question and brutally plow through all of it. And then, by the time they are done, we are so familiar with it that we sort of forget about the question. [I could say the same about the concept of spin: no one bothers to really define it, because it means different things in different situations.]

In fact, I myself sort of forgot about the question recently, and I had to remind myself of the (short) answer: probabilities are proportional to energy or mass densities (think of a charge spending more time here than there, or vice versa), and the energy of a wave or an oscillation – any oscillation, really – is proportional to the square of its amplitude.

Is that it? Yes. We just have to add an extra remark here: we will take the square of the absolute value of the amplitude, but that has got to do with the fact that we are talking oscillations in two dimensions here: think of an electromagnetic oscillation—combining an oscillating electric and magnetic field vector.
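To make that ‘absolute square’ concrete, here is a two-line illustration (the numbers are arbitrary): a quantum-mechanical amplitude is a complex exponential a = r·e^(iθ), and taking |a|² = a·a* kills the phase and leaves only the squared magnitude—an energy-like quantity.

```python
import cmath

# An amplitude as a complex exponential: a = r·e^(iθ)
r, theta = 0.6, 1.2345
a = r * cmath.exp(1j * theta)

# The absolute square |a|² = a·a* is real and equal to r²,
# whatever the phase θ happens to be.
prob = (a * a.conjugate()).real
print(prob)         # ≈ 0.36 (= r²), regardless of the phase θ
print(abs(a) ** 2)  # same thing
```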

Let us throw in the equations here: the illustration below shows the structural similarity (in terms of propagation mechanism) between (1) how an electromagnetic wave propagates in space (Maxwell’s equations without charges) and (2) how amplitude waves propagate (Schrödinger’s equation without the term for the potential from the positive nucleus). It is the same mechanism. That is what led me to the bold hypothesis, in one of my very first papers on the topic, that they must be describing the same thing—and that both must model the energy conservation principle, as well as the conservation of linear and angular (field) momentum!

Energy waves

Is that it? Is that all there is? Yep! We have written a lot of papers, but all they do is further detail this principle: probability amplitudes, or quantum-mechanical probability amplitudes in general, model an energy propagation mechanism.

Of course, the next question is: why can we just add these amplitudes, and then square them? Is there no interference term? We explored this question in our most recent paper, and the short answer is: no. There is no interference term. [This will probably sound weird or even counter-intuitive – it is not what you were taught, is it? – but we qualify this remark in the post scriptum to this post.]

Frankly, we would reverse the question: why can we calculate amplitudes by taking the square root of the probabilities? Why does it all work out? Why is it that the amplitude math mirrors the probability math? Why can we relate them through these squares or square roots when going from one representation to another? The answer to this question is buried in the math too, but is based on simple arithmetic. Note, for example, that, when insisting base states or state vectors should be orthogonal, we actually demand that their squared sum is equal to the sum of their squares:

(a + b)² = a² + b² ⇔ a² + b² + 2a·b = a² + b² ⇔ a·b = 0

This is a logical or arithmetic condition which represents a physical condition: two physical states must be discrete states. They do not overlap: it is either this or that. We can then add or multiply these physical states – mix them, so to speak – to produce logical states, which express the uncertainty in our mind (not in Nature!). We can only do that because these base states are, effectively, independent. That is also why we can use them to construct another set of (logical) base vectors, which will be (linearly) independent too! This may all sound like gobbledygook, of course. In any case, the basic message is this: behind all of the hocus-pocus, there is physics—and that is why I am writing all of this. I write for people like me and you: people who want to truly understand what it is all about.
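A toy arithmetic check of that orthogonality condition, using real-valued ‘vectors’ to keep things as simple as possible (amplitudes are complex-valued, of course, so this is an illustration only):

```python
# Two orthogonal base states as real-valued unit vectors
a = (1.0, 0.0)
b = (0.0, 1.0)

dot   = sum(x * y for x, y in zip(a, b))           # a·b
s     = [x + y for x, y in zip(a, b)]              # a + b
left  = sum(x * x for x in s)                      # |a + b|²
right = sum(x * x for x in a) + sum(x * x for x in b)  # |a|² + |b|²

print(dot)          # 0.0 — the base states are orthogonal
print(left, right)  # 2.0 2.0 — the cross term 2a·b vanishes
```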

My son had to study quantum physics to get a degree in engineering—but he hated it: reproducing amplitude math without truly understanding what it means is no fun. He had no time for any of my detailed analysis and interpretation, but I think I was able to help him by telling him it all meant something real. It helped him to get through it all, and he got great marks in the end! And, yes, of course, I told him not to bother the professor with all of my weird theories! 🙂

I hope this message encourages you in very much the same way. I sometimes think I should try to write some kind of Survival Guide to Quantum Physics, but then my other – more technical – blog is a bit of that, so that should do! 🙂

PS: As for there being no interference term: when adding the amplitudes (without the 2a·b term), we do have interference between the a and b terms, of course. We use boldface letters here, but these ‘vectors’ are quantum-mechanical amplitudes, so they are complex-valued exponentials. However, we wanted to keep the symbolism extremely simple in this post, and so that is what we did. Those wave equations look formidable enough already, don’t they? 🙂

Do we only see what we want to see?

I had a short but interesting exchange with a student in physics—one of the very few who actually reads (some of) the stuff on this and my other blog (the latter is more technical than this one).

It was an exchange on the double-slit experiment with electrons—one of these experiments which is supposed to prove that classical concepts and electromagnetic theory fail when analyzing the smallest of small things and that only an analysis in terms of those weird probability amplitudes can explain what might or might not be going on.

Plain rubbish, of course. I asked him to look carefully at the pattern of blobs when only one of the slits is open, which are shown in the top and bottom illustrations below, respectively (the inset (top-left) shows how the mask moves over the slits—covering both slits, one of the two, or none).

Interference 1

Of course, you see interference when both slits are open (all of the stuff in the middle above). However, I find it much more interesting that there is interference too (or diffraction—my preferred term for an interference pattern when there is only one slit or hole) even if only one of the slits is open. In fact, the interference pattern when two slits are open is just the superposition of the diffraction patterns of the two slits. Hence, an analysis in terms of probability amplitudes associated with this or that path—the usual thing: add the amplitudes and then take the absolute square to get the probabilities—is pretty nonsensical. The tough question physicists need to answer is not how interference can be explained, but this: how do we explain the diffraction pattern when electrons go through one slit only?
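For reference, the standard Fraunhofer single-slit formula gives an intensity proportional to sinc²(π·a·sinθ/λ), with minima at sinθ = m·λ/a. The numbers below are illustrative assumptions only (roughly the order of magnitude of slit widths and de Broglie wavelengths in such electron experiments), not the actual parameters of the experiment discussed here:

```python
import math

wavelength = 50e-12  # m — assumed de Broglie wavelength, for illustration
slit_width = 62e-9   # m — assumed slit width, for illustration

def intensity(sin_theta):
    """Fraunhofer single-slit intensity (normalized to 1 at the center)."""
    x = math.pi * slit_width * sin_theta / wavelength
    return 1.0 if x == 0 else (math.sin(x) / x) ** 2

print(intensity(0.0))                             # 1.0 — central maximum
print(intensity(wavelength / slit_width))         # ≈ 0 — first minimum
print(intensity(1.5 * wavelength / slit_width))   # ≈ 0.045 — first side lobe
```

Those faint side lobes are exactly the brighter and darker spots you should look for in the one-slit-open blob patterns.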

[…]

I realize you may be as brainwashed as the bright young student who contacted me: he did not see it at first! You should, therefore, probably have another look at the illustrations above too: there are brighter and darker spots when one slit is open too, especially on the sides—a bit further away from the center.

[Just do it before you read on: look, once more, at the illustration above before you look at the next.]

[…]

The diffraction pattern (when only one slit is open) resembles that of light going through a circular aperture (think of light going through a simple pinhole), which is shown below: it is known as the Airy disk or the Airy pattern. [I should, of course, mention the source of my illustrations: the one above (on electron interference) comes from the article on the 2012 Nebraska-Lincoln experiment, while the ones below come from the articles on the Airy disk and the (angular) resolution of a microscope in Wikipedia respectively. I no longer refer to Feynman’s Lectures or related material because of an attack by the dark force.]

Beugungsscheibchen

When we combine two pinholes and move them closer to or further from each other, we get what is shown below: a superposition of the two diffraction patterns. The patterns in that double-slit experiment with electrons look like what you would get using slits instead of pinholes.

airy_disk_spacing_near_rayleigh_criterion

It obviously led to a bit of an Aha-Erlebnis for the student who bothered to write and ask. I told him a mathematical analysis using classical wave equations would not be easy, but that it should be possible. Unfortunately, mainstream physicists – academic teachers and professors, in particular – seem to prefer the nonsensical but easier analysis in terms of probability amplitudes. I guess they only see what they want to see. :-/

Note: For those who would want to dig a bit further, I could refer them to a September 20, 2014 post, as well as a successor post, on the diffraction and interference of EM waves (plain ‘light’, in other words). The dark force did some damage to both, but they are still very readable. In fact, the removal of one or two illustrations and formulas will force you to think for yourself, so it is all good. 🙂

Uncertainty, quantum math, and A(Y)MS

This morning, one of my readers wrote me to say I should refrain from criticizing mainstream theory or – if I do – criticize it in friendlier or more constructive terms. He is right, of course: my blog on Feynman’s Lectures proves I suffer from Angry Young Man Syndrome (AYMS), which does not befit a 50-year-old. It is also true that I will probably not be able to convince those whom I have not convinced yet.

What to do? I should probably find easier metaphors and bridge apparent contradictions—and write friendlier posts and articles, of course! 🙂

In my last paper, for example, I make a rather harsh distinction between discrete physical states and continuous logical states in mainstream theory. We may illustrate this using Schrödinger’s thought experiment with the cat: we know the cat is either dead or alive—depending on whether or not the poison was released. However, as long as we do not check, we may describe it by some logical state that mixes the ideas of a dead and a live cat. This logical state is defined in probabilistic terms: as time goes by, the likelihood of the cat being dead increases. The actual physical state does not have such ambiguity: the cat is either dead or alive.

The point that I want to make here is that the uncertainty is not physical. It is in our mind only: we have no knowledge of the physical state because we cannot (or do not want to) measure it, or because measurement would interfere with (or possibly even destroy) the system: we are usually probing the smallest of stuff with the smallest of stuff in these experiments—which is why Heisenberg himself originally referred to uncertainty as Ungenauigkeit (imprecision) rather than Unbestimmtheit (indeterminacy).

So, yes, as long as we do not look inside of the box – by opening it or, preferably, through some window on the side (the cat could scratch you or jump out when you open the box) – we may think of Schrödinger’s cat-in-the-box experiment as a simple quantum-mechanical two-state system. However, it is a rather special one: the poison is likely to be released only after some time (its release depends on a probabilistic process itself), and we should, therefore, model this time as a random variable which will be distributed – usually more or less normally – around some mean. The (cumulative) probability distribution function for the cat being dead will, therefore, resemble something like the curves below, whose shapes depend not only on the mean but also on the standard deviation from the mean.

1920px-Normal_Distribution_CDF
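Such a curve is easy to generate yourself. A minimal sketch, with a completely made-up mean and standard deviation for the release of the poison:

```python
import math

def normal_cdf(t, mean, std):
    """Cumulative normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((t - mean) / (std * math.sqrt(2))))

# Hypothetical numbers: poison released after 60 minutes on average,
# with a standard deviation of 10 minutes.
mean, std = 60.0, 10.0
for t in (40, 50, 60, 70, 80):
    print(t, round(normal_cdf(t, mean, std), 3))
# 40 → 0.023, 50 → 0.159, 60 → 0.5, 70 → 0.841, 80 → 0.977
```

As time goes by, the probability of the cat being dead rises from (almost) 0 to (almost) 1—which is all the ‘mixed state’ describes.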

Schrödinger’s cat-in-the-box experiment involves a transition from an alive to a dead state: the transition is certain and irreversible. Most real-life quantum-mechanical two-state systems look very different: they do not involve some dead-or-alive situation but two very different states—position states, or energy states, for example—and the probability of the system being in this or that physical state will, therefore, slosh back and forth between the two, as illustrated below.

Probabilities desmos

I took this illustration from the above-mentioned paper, which deals with amplitude math, so I should refer you there for an explanation of the rather particular cycle time (π) and measurement units (ħ/A). The important thing here – in the context of this blog post, that is – is not the nitty-gritty but the basic idea of a quantum-mechanical two-state system. That basic idea is needed because the point I want to make is this: thinking that some system can be in only two (discrete) physical states may often be a bit of an idealization too. The system – or whatever it is that we are trying to describe – might be in-between two states while oscillating between them, for example. Or we may, perhaps, not be able to define the position of whatever it is that we are tracking—say, an atom or a nucleus in a molecule—because the idea of an atom or a nucleus might itself be quite fuzzy.
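A minimal sketch of what such sloshing looks like, assuming the two probabilities go as cos² and sin² of A·t/ħ (as they do in the paper referred to above), and working in natural units where ħ/A = 1:

```python
import math

# Two-state probabilities sloshing back and forth, always summing to 1.
def P1(t): return math.cos(t) ** 2
def P2(t): return math.sin(t) ** 2

for t in (0.0, math.pi / 4, math.pi / 2, math.pi):
    print(round(P1(t), 3), round(P2(t), 3), round(P1(t) + P2(t), 3))
# The cycle time is π (in units of ħ/A): P1 returns to 1 at t = π.
```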

To explain what fuzziness might be in the context of physics, I often use the metaphor below: the propeller of the little plane is always somewhere, obviously—but the question is: where exactly? When the frequency of going from one place to another becomes quite high, the concept of an exact position becomes quite fuzzy. The metaphor of a rapidly rotating propeller may also illustrate the fuzziness of the concept of mass or even energy: if we think of the propeller being pretty much everywhere, then it is also more useful to think in terms of some dynamically defined mass or energy density concept in the space it is, somehow, filling.

propeller

This, then, should take away some of the perceived harshness of my analyses: I should not say the mainstream interpretation of quantum physics is all wrong, and that states are either physical or logical: our models may inevitably have to mix a bit of the two! So, yes, I should be much more polite and say the mainstream interpretation prefers to leave things vague or unsaid, and that physicists should, therefore, be more precise and avoid hyping up stuff that can easily be explained in terms of common-sense physical interpretations.

Having said that, I think that only sounds slightly less polite, and I also continue to think some Nobel Prize awards did exactly that: they rewarded the invention of hyped-up concepts rather than true science, and so now we are stuck with these things. To be precise, I think the award of the 1933 Nobel Prize to Werner Heisenberg is a very significant example of this, and it was followed by others. I am not shy or ashamed when writing this because I know I am in rather good company thinking that. Unfortunately, not enough people dare to say what they really think, and that is that the Emperor may have no clothes.

That is sad, because there are a lot of enthusiastic and rather smart people who try to understand physics but become disillusioned when they enroll in online or real physics courses: when asking too many questions, they are effectively told to just shut up and calculate. I also think John Baez’ Crackpot Index is, all too often, abused to defend mainstream mediocrity and Ivory Tower theorizing. At the same time, I promise my friendly critic I will think some more about my Angry 50-Year-Old Syndrome.

Perhaps I should take a break from quantum mechanics and study, say, chaos theory, or fluid dynamics—something else, some new math. I should probably also train to go up Mont Blanc again this year: I gained a fair amount of physical weight while doing all this mental exercise over the past few years, and I do intend to climb again—50-year-old or not. Let’s just call it AMS. 🙂 And, yes, I should also focus on my day job, of course! 🙂

However, I probably won’t get rid of the quantum physics virus any time soon. In fact, I just started exploring the QCD sector, and I am documenting this new journey in a new blog: Reading Einstein. Go have a look. 🙂

Post scriptum: The probability distribution for the cat’s death sentence is, technically speaking, a Poisson distribution (the name is easy to remember because it does not differ too much from the poison that is used). However, because we are modeling probabilities here, its parameter λ should be thought of as being very large. The distribution, therefore, approaches a normal distribution. Quantum-mechanical amplitude math implicitly assumes we can use normal distributions to model state transitions (see my paper on Feynman’s Time Machine).
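You can easily verify that convergence yourself. For a large λ, the Poisson distribution approaches a normal distribution with mean λ and standard deviation √λ (λ = 100 below is just a convenient example):

```python
import math

lam = 100.0

def poisson_pmf(k, lam):
    # exp(k·ln λ − λ − ln k!) avoids overflow for large k
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Compare the Poisson probabilities to the normal approximation
for k in (80, 100, 120):
    print(k, round(poisson_pmf(k, lam), 4), round(normal_pdf(k, lam, math.sqrt(lam)), 4))
```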

The metaphysics of physics

So if we take all of the scaffolding away, what concepts are we left with? What is substance and what is form? Forget about quarks and bosons: the concepts of a charge and fields are core. A force acts on a charge, and matter-particles carry charge. The charge comes in units: the elementary charge. Anti-matter carries an opposite charge—opposite as defined with respect to the matter-particle of which the antimatter-particle is the counterpart.

Pair creation-annihilation is mysterious but not incomprehensible. It happens when the two particles have the same structure and opposite spacetime signature: ++++ versus +–––. Electrons and positrons annihilate each other, and protons and antiprotons—but not electrons and protons. A neutron disintegrates into a proton and an electron outside of the nucleus. That is why a proton and an antineutron – or a neutron and an antiproton – will also vanish in a flash of energy: a neutron is a composite particle—it consists of an electron and a proton. The exact pattern of the dance between the electron and the proton inside of a neutron has not been modeled yet—as opposed to the dance between electrons and the positively charged nucleus in an atom (Rutherford’s contribution to the 1921 Solvay Conference comes to mind here).

The elementary charge itself is mysterious: the charge (and mass) density of an electron is very different from that of a proton. Both elementary particles can be modeled as ring currents, however. The Planck-Einstein relation applies to both but with a different form factor.

Mass is mass without mass: a measure of the inertia of the Zitterbewegung motion of the charge. Einstein’s mass-energy equivalence relation models an oscillation in two dimensions. Euler’s wavefunction represents the same. A proton is very different from an electron: massive and small. The force that keeps the charge inside of a proton together suggests that force may be different in nature: this is the idea of a strong force—strong as compared to the electromagnetic force.

This strong force is not Yukawa’s force, however: inter-nucleon forces – what keeps protons and neutrons together inside of a nucleus – can be explained by the coupling of the magnetic moments of the ring currents.

The Zitterbewegung of the charge explains the magnetic moment of matter-particles. The anomaly in the magnetic moment tells us more about the structure of the charge inside of matter-particles: the charge is pointlike but not dimensionless.

What else can we say? Charge is conserved—always—but we must allow for pair creation-annihilation. Energy and momentum are conserved too. Physical systems are either stable or unstable. Atoms, for example, are stable. They too can be modeled as an oscillation respecting the Planck-Einstein relation. Planck’s quantum of action is like a sum of money: you can spend it very quickly, or you can spend it over a longer period of time. Same bang for the buck, but you can bang it fast or slow. 🙂 That is what the E·T = h expression of the Planck-Einstein relation tells us: the same amount of physical action (6.626×10⁻³⁴ N·m·s) can be associated with very different energies and, therefore, very different cycle times.
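The fast-or-slow spending is easy to make concrete. Taking the rest energies of the electron and the proton (standard values), E·T = h fixes the cycle time T = h/E for each:

```python
# E·T = h: the same quantum of action, spent fast (high energy, short
# cycle) or slow (low energy, long cycle).
h = 6.62607015e-34  # J·s

E_electron = 8.187105e-14  # J — electron rest energy (≈ 0.511 MeV)
E_proton   = 1.503278e-10  # J — proton rest energy (≈ 938 MeV)

for E in (E_electron, E_proton):
    T = h / E   # cycle time
    f = 1 / T   # frequency: f = E/h, the Planck-Einstein relation again
    print(T, f)
# The proton, being ~1,836 times more energetic, cycles ~1,836 times faster.
```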

Photons do not carry charge, but they carry energy. They carry energy as electromagnetic fields traveling in space. This begs the question: what is oscillating, exactly? We do not know: the concept of an aether has no use beyond the idea of mediating this oscillation—which is why we can forget about it.

Particle interactions involving the strong force also involve the emission and/or absorption of neutrinos. Neutrinos may, therefore, be thought of as the photons of the strong force: they ensure energy and momentum is being conserved for strong interactions too.

There is no weak force: unstable particles disintegrate because they are in a disequilibrium state. The Planck-Einstein relation does not apply and, hence, the unstable state is a transient oscillation only—or a very short-lived resonance.

The idea of virtual particles going back and forth between non-virtual particles to mediate the electromagnetic or strong force is like the 19th-century aether idea or, worse, a remnant of medieval scholastic thinking: all forces must be contact forces, right? No. Of course not. No force is a contact force: forces work through fields. That’s mysterious too, but it is much simpler than accounting for messenger particles: they must have momentum and energy too, right? It becomes hugely complicated. Just forget about it! No gluons, no W/Z bosons, and no Higgs particle either.

[…]

Is that it? Yes. That’s it! I could write some more, but then I would exceed the self-allotted one page for my summary of all of physics. 🙂 Is all of this important? Maybe. If you are reading this, then it is probably important to you. You want to know, right? The most remarkable thing of all of it is the order that emerges from it. Two electrons in the same atomic orbital will align their magnetic moments so as to lower their joint energy; single electrons are valence electrons, which explains why atoms will share them in molecules. Molecules themselves come together in larger and stabler structures. Now it is Darwin’s “survival of the fittest” that comes into play: weaker structures do not survive their environment. At some point in time – very long or not so long ago (it depends on your time scale) – some macro-structures became organisms that interacted, in an extremely primitive but real way, with their environment to reproduce themselves. Darwin’s principle tells us that being strong and stable is good, but being able to multiply is even better. So these structures started consuming other structures to multiply. Complexity increased: order and entropy go hand in hand. And so here we are: thinking about where we came from – some raw and uncomplicated state – just before we go back to it. Soon enough. Too soon. As an individual, at least. :-/

That is the true Mystery: your mind—our Mind. Our understanding of things using a very limited number of concepts, equations and elementary constants: the electric charge, Planck’s quantum of action, and the speed of light. Nothing more, nothing less. At larger scales, it is systems interacting with each other and their environment. That’s it. You should think about it. Don’t get lost in math. And surely don’t get lost in amplitude math. It’s not worth it. 🙂

[…] One more thing, perhaps: please do enjoy thinking about this Mystery! A friend of mine once remarked this: “It is good you are studying physics only as a pastime. Professional physicists are often troubled people.” I found the observation strange and sad, but mostly true (think of what Paul Ehrenfest did, for example). There are exceptions, though (H.A. Lorentz and Richard Feynman were happier characters). In any case, if you study physics but you are troubled by it, then study something else: chemistry, biology, or evolutionary psychology, perhaps. Don’t worry about physics: unlike what some are trying to tell you, it all makes sense. 🙂

Where is the Paul Krugman of physics?

We were so busy deconstructing myths in the QED sector of physics that we hardly had time for the QCD sector—high-energy physics. In any case, we do not think quantum field theory or the quark hypothesis have much explanatory power, so it is not like I feel I missed anything. What Paul Dirac wrote about the sorry state of physics back in 1958 still rings very true today:

“Quantum mechanics may be defined as the application of equations of motion to particles. […] The domain of applicability of the theory is mainly the treatment of electrons and other charged particles interacting with the electromagnetic field—a domain which includes most of low-energy physics and chemistry. Now there are other kinds of interactions, which are revealed in high-energy physics and are important for the description of atomic nuclei. These interactions are not at present sufficiently well understood to be incorporated into a system of equations of motion. Theories of them have been set up and much developed and useful results obtained from them. But in the absence of equations of motion these theories cannot be presented as a logical development of the principles set up in this book. We are effectively in the pre-Bohr era with regard to these other interactions. It is to be hoped that with increasing knowledge a way will eventually be found for adapting the high-energy theories into a scheme based on equations of motion, and so unifying them with those of low-energy physics.” (Paul A.M. Dirac, The Principles of Quantum Mechanics, 4th edition (1958), p. 312)

Dirac did not find it necessary to change these words in his lifetime. Moreover, as he got older – he left Earth in 1984 – he became increasingly vocal about the New Nonsense which came out of various Departments of Theoretical Physics after WW II. His skepticism was grounded in logic as well as in what has become increasingly rare among academics: a deep understanding of the physicality of whatever it is that physicists are studying. He must have felt pretty lonely, for example, when stating this on the occasion of one of his retirement talks:

“Most physicists are very satisfied with the situation. They say: “Quantum electrodynamics is a good theory, and we do not have to worry about it anymore.” I must say that I am very dissatisfied with the situation because this so-called ‘good theory’ [Dirac refers to perturbation and renormalization theory here] involves neglecting infinities. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small—not neglecting it just because it is infinitely great and you do not want it!”

He was over 70 then, but still very active: the Wikipedia article on him notes that, at that age, he was still walking about a mile each day, and swimming in nearby lakes! However, the same article also notes that “his refusal to accept renormalization resulted in his work on the subject moving increasingly out of the mainstream”, so nobody cares about his later writings. Too bad. I think Dirac died rather unhappily because of that, but at least he did not go crazy or, worse, commit suicide—like Paul Ehrenfest.

What Dirac refers to as the ‘pre-Bohr era’ in physics – so that is now – resembles the alchemical period in chemistry, which is usually defined as the period preceding a fundamental understanding of chemical processes. Indeed, alchemy’s Elixir Vitae, the Grand Magisterium, and the Red Tincture have been replaced by their modern-day equivalents: quarks and gluons, W/Z bosons and various other condensates, including the Higgs field. All utter nonsense! We have found only one fundamental ‘condensate’ in Nature so far, and that is the electric charge. Indeed, the elementary charge presents itself in very different shapes and densities depending on whether we look at it as the essence of an electron, a muon, or a proton, or as a constituent of a neutron. This remarkable malleability of the elementary charge, both as a physical quantity and as a logical concept, is the one and only true mystery in quantum physics for me.

Can we analyze it any further going forward? Maybe. Maybe not. The dynamics of pair creation and annihilation for electrons and positrons suggest our interpretation of antimatter having an opposite spacetime signature makes sense, but proton-antiproton annihilation and – perhaps the most interesting process in high-energy physics – antiproton-neutron annihilation show the idea of a strong force is and remains very relevant. Mainstream physics just went into a blind alley with the quark hypothesis to try to understand it. We should reverse gear and get out of that. Antiproton-neutron annihilation proves a neutron consists of a proton and an electron because, while we do observe proton-antineutron annihilation, we do not observe proton-neutron annihilation—and then there are a zillion other arguments in favor of explaining a neutron as a composite particle consisting of a proton and an electron: think of their mass, magnetic moment, the way a neutron disintegrates outside of a nucleus, proton-neutron interactions, etcetera. We have analyzed this elsewhere, so we cannot dwell on it here. So where are we?

I cannot answer that question on your behalf. I can only say where I might be. I spent about ten years on what I think of as a sensible theory of quantum electrodynamics, so I am happy about that. That is done, and now I should move on to something new. However, I am not so motivated to go beyond electrodynamics and dig into the math one needs to analyze a force whose structure is totally unknown. Why not? Because it is like moving from linear to non-linear models in math and statistics, or like moving from analyzing the dynamics of equilibrium states to non-equilibrium processes. If you have ever done one of the two (I did, as an econometrician, a long time ago), you will know that is a huge effort. But, of course, it is never about the effort: it is about the potential rewards. The investment might be worthwhile if the reward were great.

Unfortunately, that is not the case: what do we gain by being able to explain what might or might not be going on at the femtometer scale? The ring current model explains the Compton radius, mass, spin and magnetic moment of electrons, protons and neutrons – and everything above that scale (call it the picometer scale) – perfectly well. Do we need to go beyond? We are, effectively, entering what Gerard ‘t Hooft referred to as the Great Desert. Indeed, I have always said we should not be discouraged by physicists who say we cannot possibly imagine the smallest of scales and that we should, therefore, meekly accept concepts like hidden or rolled-up dimensions without protest. However, I do have to admit that trying to imagine how one can possibly pack something like 400,000 TeV into some femtometer-scale space may effectively be beyond my intellectual capabilities.

The proposition is outright unattractive because the complexity here is inversely proportional to the number of clues that we have. So I do not think we will get out of this alchemical period of high-energy physics any time soon. Mankind has constructed colliders whose powers we can double or triple over the next decades, perhaps, but those experiments still amount to studying the mechanics of a Swiss precision watch by repeatedly smashing it and studying the fragments. I think there are serious limitations to what we can learn from that. Oliver Consa is right in saying that more ‘US$ 600m budgets to continue the game’ will not solve the problem: “Maybe it is time to consider alternative proposals. Winter is coming.”

I am not so skeptical. I think summer is coming: after 100 years, this Bohr-Heisenberg revolution (or Diktatur, we should say, perhaps) is finally running out of steam. Why? Not because it failed to convince academics: that, it did all too well—some Nobel Prizes have obviously been awarded a bit prematurely. It is running out of steam because it failed to convince the masses: they are smarter and think for themselves now. It is e-democracy now, and the ‘e’ does not stand for elite. Summer may not come with a lot of answers—but summers usually do come with a lot of fun. 🙂

Economics was once described as the dismal science because it defended the wrong things. I am an economist myself, and I credit Paul Krugman with turning that perception around, almost single-handedly. Paul Krugman is extremely smart. More importantly, he had the guts. Who is going to be the Paul Krugman of physics? If they do not find him or her soon, the science of physics may evolve from a rotten state – hopelessly Lost in Math, as Hossenfelder writes – to something that is far worse: the obsolete state. The good thing about that is that the academics are leaving the space wide open for people like me: spacetime rebels and philosophers. 🙂

Post scriptum: In case you wonder where the 400,000 TeV and femtometer reference comes from, I am effectively referring to a delightful popular book written by Gerard ‘t Hooft—a few years before he and his PhD thesis advisor (Martinus Veltman) got the 1999 Nobel Prize in Physics “for elucidating the quantum structure of the electroweak force”: In Search of the Ultimate Building Blocks (1996). Based on the quantum field theories he contributed so much to, he finds things start to go crazy when trying to imagine frequencies of 10^32 Hz or higher.

We think things get crazy much sooner. The Planck-Einstein relation tells us that a frequency of 10^32 Hz corresponds to an energy of E = h·f ≈ (4×10^−15 eV·s)·(10^32 Hz) = 4×10^17 eV = 400,000 tera-electronvolt (1 TeV = 10^12 eV). A photon with such an energy would actually be much smaller than a femtometer (10^−15 m): λ = c/f ≈ 10^−24 m. This, of course, assumes a one-cycle photon model—but such a model is quite straightforward and logical. In-between the femtometer scale (10^−15 m) and that 10^−24 m scale, we are missing a factor of 1,000,000,000, so the question is actually this: how can one pack 400,000 TeV (think of the energy of 400,000,000 protons here) into something that is 1,000,000,000 times smaller than a proton?
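Readers who want to check this arithmetic can do so in a few lines. The sketch below uses standard approximate values for Planck’s constant and the speed of light; the one-cycle wavelength λ = c/f is the model assumption discussed above, not a mainstream definition of photon size.

```python
# Order-of-magnitude check of the 400,000 TeV / femtometer numbers.
h_eV = 4.135667696e-15   # Planck constant in eV·s
c = 2.99792458e8         # speed of light in m/s

f = 1e32                  # frequency in Hz
E_eV = h_eV * f           # Planck-Einstein relation: E = h·f
wavelength = c / f        # one-cycle photon model: lambda = c/f
gap = 1e-15 / wavelength  # femtometer scale vs. photon scale

print(f"E ≈ {E_eV / 1e12:,.0f} TeV")     # roughly 400,000 TeV
print(f"lambda ≈ {wavelength:.1e} m")    # roughly 3e-24 m
print(f"scale gap ≈ {gap:.1e}")          # a factor of hundreds of millions
```

Note that the exact wavelength comes out at about 3×10^−24 m; the text rounds this to the 10^−24 m order of magnitude.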

Of course, you might say that we should probably not use a photon-like model of a particle here: why would we assume all of the energy should be in some (linear) electromagnetic oscillation? If there is another force – one other than the electromagnetic force – then it will have a very different structure, right? Right. And so that should solve the problem, right? No. Not right. That makes things even worse: some non-linear multi-dimensional force or potential allows one to pack even more energy into much smaller spaces! So then things start looking crazy at much larger scales than that 10^−24 m scale. When calculating the (strong) force inside of a proton using a two-dimensional oscillator model, for example, you get a number that is equal to 850,000 newton (N), more or less. For an electron, we get a much more modest value (0.115 N, to be precise), but it is still enormous at these small scales.
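To get a feel for why such force values are plausible at these scales, one can make a much cruder estimate than the two-dimensional oscillator model: simply divide a particle’s rest energy by its radius. This F ≈ E/r formula is my own back-of-the-envelope assumption, not the author’s calculation, and it differs from the quoted 0.115 N and 850,000 N values by model-dependent factors—but it lands in the same ballpark and shows how enormous the numbers become.

```python
# Crude force-scale estimate F ≈ E/r (rest energy divided by radius).
# This is NOT the two-dimensional oscillator calculation in the text;
# it is only meant to show the order of magnitude involved.
eV = 1.602176634e-19  # joule per electronvolt

def force_scale(E_eV, r_m):
    """Rest energy (eV) divided by radius (m), giving a force in newton."""
    return E_eV * eV / r_m

F_electron = force_scale(0.511e6, 3.8616e-13)  # rest energy, Compton radius
F_proton = force_scale(938.27e6, 0.84e-15)     # rest energy, charge radius

print(f"electron: ~{F_electron:.2f} N")   # a few tenths of a newton
print(f"proton:   ~{F_proton:.1e} N")     # hundreds of thousands of newton
```

Even this naive estimate gives roughly 0.2 N for the electron and roughly 2×10^5 N for the proton—astonishing forces for objects a fraction of a picometer or femtometer across.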

We should wrap up—this is just a blog post, so we cannot be too long. Just note it is already quite something to explain how a proton, whose radius is about 0.83–0.84 fm, can pack its energy—which is less than one GeV. We can do it, but calculations for smaller but more massive stuff (particles that would be smaller than protons, in other words) quickly give very weird results: mass or energy densities quickly approach those of black holes, in fact! Now, your imagination may be more flexible than mine, but I am not ready yet to think we all consist of black holes separated by nothingness.

Now you will ask: “So what is your theory then? And what is its comparative advantage over other theories?” It is rather simple: I think we consist of protons, neutrons and electrons separated by electromagnetic fields, and the main advantage of my theory is that I can explain what protons, neutrons and electrons actually are. One cannot say the same of quarks, gluons, W/Z bosons, Higgs particles and other New Nonsense concepts. Someone with more credentials than me wrote this a while ago:

“The state of the classical electromagnetic theory reminds one of a house under construction that was abandoned by its workmen upon receiving news of an approaching plague. The plague was in this case, of course, quantum theory.” (Doris Teplitz, Electromagnetism: Paths to Research, 1982)

I’d rather help finish the house than entertain New Nonsense.

Dismantling myths

I just published a paper in which I show we do not need the machinery of state vectors and probability amplitudes to describe quantum-mechanical systems (think of a laser here, for example). We can describe these systems just as well in terms of a classical oscillation: the Planck-Einstein relation determines frequencies, which can then be used to determine the probabilities of the system being in this or that state.

The paper was quite an effort. The subject-matter is very abstract and the ruse and deceit in the quantum-mechanical argument (that basically assumes we do need all that humbug) is very subtle. It is, therefore, difficult to pinpoint exactly where the argument goes wrong. We managed to find and highlight the main deus ex machina moment, however, which is the substitution of real-valued coefficients by complex-valued functions.

That substitution is not innocent: it smuggles the Planck-Einstein relation in – through the backdoor, so to speak – and makes sure the amplitudes come out alright! The whole argument is, therefore, typical of other mainstream arguments in modern quantum mechanics: one only gets out what was already implicit or explicit in the assumptions, and those are rather random. In other contexts, this would be referred to as garbage in, garbage out.

The paper complements earlier logical deconstructions of some of these arguments, most notably those on the anomalous magnetic moment, the Lamb shift, 720-degree symmetries, the boson-fermion dichotomy and others (for an overview, see the full list of my papers). In fact, we have done so many now that we think we should stop: this last paper should conclude our classical or realist interpretation of quantum mechanics!

It has all been rather exhausting because we felt we had to cover pretty much everything from scratch. We did—and convincingly so, I think. Still, critics – I am quoting from one of the messages I got on ResearchGate here – tell me that I should continue to “strengthen my arguments/proofs” so as to “convince readers.” To those, I usually reply that I will never be able to convince them: if 60+ papers (with thousands of downloads) and a blog on physics (which also gets thousands of hits every month) are not sufficient, then what is? I should probably also refer them to a public comment on one of my papers—written by someone with (a lot) more credentials than me:

“The paper presents sound and solid reasoning. It is sobering and refreshing. The author is not only providing insight into central conceptual problems of modern physics but also recognizing the troubles that indoctrination causes in digesting this insight.”

Let us see how it all goes. I know I am an outsider and, therefore, totally insignificant. I should just stop writing and wait a bit now. This mysterious hyped-up Copenhagen interpretation should become irrelevant by itself: people will realize it is just hocus-pocus or, worse, A Bright Shining Lie.

That may take a long time, however, and I may not last long enough to see it happen. Mainstream physicists will soon be celebrating 100 years of what Paul Ehrenfest referred to as the ‘unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb’—the endless Heisenberg-Born-Dirac-Schrödinger sausage-machine physics business.

On the other hand, the fact that the indoctrination has, obviously, been very successful is no reason to give up. An engineer, an alumnus of the University of California, also encouraged me by sending me this quote:

“Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.” (Einstein: A Portrait, Pomegranate Artbooks, Petaluma, CA, 1984, p. 102)

That is as good as it gets, I guess. And if you read these words, it probably means you are part of that group of few people. We will not celebrate 100 years of metaphysical nonsense. We will keep thinking things through for ourselves and, thereby, find truth—even if only for ourselves.

That is enough as a reward for me. 🙂