Quantum Physics: A Survivor’s Guide

A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields, and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):

Paper I: Quantum behavior (the abstract should enrage the dark forces)

Paper II: Probability amplitudes (quantum math)

Paper III: The concept of a field (why you should not bother about QFT)

Paper IV: Survivor’s guide to all of the rest (keep smiling)

Paper V: Uncertainty and the meaning of the wavefunction (the final!)

Jean Louis Van Belle, 21 October 2020

Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂

As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂

The concept of a field

I ended my post on particles as spacetime oscillations saying I should probably write something about the concept of a field too, and why and how many academic physicists abuse it so often. So I did that, but it became a rather lengthy paper, and so I will refer you to Phil Gibbs’ site, where I post such stuff. Here is the link. Let me know what you think of it.

As for how it fits in with the rest of my writing, I already jokingly rewrote two of Feynman’s introductory Lectures on quantum mechanics (see: Quantum Behavior and Probability Amplitudes). I consider this paper to be the third. 🙂

Post scriptum: Now that I am talking about Richard Feynman – again! – I should add that I really think of him as a weird character. I think he himself got caught in that image of the ‘Great Teacher’ while, at the same time (and, surely, as a Nobel laureate), he also had to be seen as a ‘Great Guru.’ Read: a Great Promoter of the ‘Grand Mystery of Quantum Mechanics’ – while he probably knew classical electromagnetism combined with the Planck-Einstein relation can explain it all… Indeed, his lecture on superconductivity starts off as an incoherent ensemble of ‘rocket science’ pieces, only to – in the very last paragraphs – manipulate Schrödinger’s equation (and a few others) to show superconducting currents are just what you would expect in a superconducting fluid. Let me quote him:

“Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields.”

So… Well… Looks like he too is all about impressing people with ‘rocket science models’ first, and then he simplifies it all to… Well… Something simple. 😊

Having said that, I still like Feynman more than modern science gurus, because the latter usually don’t get to the simplifying part. :-/

A new book?

I don’t know where I would start a new story on physics. I am also not quite sure for whom I would be writing it – although it would be for people like me, obviously: most of what we do, we do for ourselves, right? So I should probably describe myself in order to describe the audience: amateur physicists who are interested in the epistemology of modern physics – or its ontology, or its metaphysics. I also talk about the genealogy or archaeology of ideas on my ResearchGate site. All these words have (slightly) different meanings but the distinctions do not matter all that much. The point is this: I write for people who want to understand physics in pretty much the same way as the great classical physicist Hendrik Antoon Lorentz who, just a few months before his demise, at the occasion of the (in)famous 1927 Solvay Conference, wanted to understand the ‘new theories’:

“We are representing phenomena. We try to form an image of them in our mind. Till now, we always tried to do so using the ordinary notions of space and time. These notions may be innate; they result, in any case, from our personal experience, from our daily observations. To me, these notions are clear, and I admit I am not able to have any idea about physics without those notions. The image I want to have when thinking about physical phenomena has to be clear and well defined, and it seems to me that this cannot be done without these notions of a system defined in space and in time.”

Note that H.A. Lorentz understood electromagnetism and relativity theory as few others did. In fact, judging from some of the crap out there, I can safely say he understood this stuff as few others do, even today. Hence, he should surely not be thought of as a classical physicist who, somehow, was stuck. On the contrary: he understood the ‘new theories’ better than many of the new theorists themselves. In fact, as far as I am concerned, I think his comments or conclusions on the epistemological status of the Uncertainty Principle – which he made in the same intervention – still stand. Let me quote the original French:

“Je pense que cette notion de probabilité [in the new theories] serait à mettre à la fin, et comme conclusion, des considérations théoriques, et non pas comme axiome a priori, quoique je veuille bien admettre que cette indétermination correspond aux possibilités expérimentales. Je pourrais toujours garder ma foi déterministe pour les phénomènes fondamentaux, dont je n’ai pas parlé. Est-ce qu’un esprit plus profond ne pourrait pas se rendre compte des mouvements de ces électrons? Ne pourrait-on pas garder le déterminisme en en faisant l’objet d’une croyance? Faut-il nécessairement ériger l’indéterminisme en principe?”

[In English: “I think this notion of probability should come at the end, and as a conclusion, of the theoretical considerations—not as an a priori axiom, although I am quite willing to admit that this indeterminacy corresponds to the experimental possibilities. I could always keep my deterministic faith for the fundamental phenomena, of which I have not spoken. Could a deeper mind not become aware of the motions of these electrons? Could one not keep determinism by making it the object of a belief? Must one necessarily elevate indeterminism to a principle?”]

What a beautiful statement, isn’t it? Why should we elevate indeterminism to a philosophical principle? Indeed, now that I’ve inserted some French, I may as well inject some German. The idea of a particle includes the idea of a more or less well-known position. Let us be specific and think of uncertainty in the context of position. We may not fully know the position of a particle for one or more of the following reasons:

  1. The precision of our measurements may be limited: this is what Heisenberg referred to as an Ungenauigkeit.
  2. Our measurement might disturb the position and, as such, cause the information to get lost and, as a result, introduce an uncertainty: this is what we may translate as an Unbestimmtheit.
  3. The uncertainty may be inherent to Nature, in which case we should probably refer to it as an Ungewissheit.

So what is the case? Lorentz claims it is either the first or the second – or a combination of both – and that the third proposition is a philosophical statement which we can neither prove nor disprove. I cannot see anything logical (theory) or practical (experiment) that would invalidate this point. I, therefore, intend to write a basic book on quantum physics from what I hope would be Lorentz’ or Einstein’s point of view.

My detractors will immediately cry wolf: Einstein lost the discussions with Bohr, didn’t he? I do not think so: he just got tired of them. I want to try to pick up the story where he left it. Let’s see where I get. 🙂

Bell’s No-Go Theorem

I’ve been asked a couple of times: “What about Bell’s No-Go Theorem, which tells us there are no hidden variables that can explain quantum-mechanical interference in some kind of classical way?” My answer to that question is quite arrogant, because it’s the answer Albert Einstein would give when younger physicists pointed out that his objections to quantum mechanics (which he usually expressed as some new thought experiment) violated this or that axiom or theorem in quantum mechanics: “Das ist mir Wur(sch)t.”

In English: I don’t care. Einstein never lost the discussions with Heisenberg or Bohr: he just got tired of them. Like Einstein, I don’t care either – because Bell’s Theorem is what it is: a mathematical theorem. Hence, it respects the GIGO principle: garbage in, garbage out. In fact, John Stewart Bell himself – one of the third-generation physicists, we may say – had always hoped that some “radical conceptual renewal”[1] might disprove his conclusions. We should also remember Bell kept exploring alternative theories – including Bohm’s pilot wave theory, which is a hidden variables theory – until his death at a relatively young age. [J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics. He was just 62 years old then.]

So I never really explored Bell’s Theorem. I was, therefore, very happy to get an email from Gerard van der Ham, who seems to have the necessary courage and perseverance to research this question in much more depth and, yes, relate it to a (local) realist interpretation of quantum mechanics. I actually still need to study his papers, and analyze the YouTube video he made (which looks much more professional than my videos), but this is promising.

To be frank, I got tired of all of these discussions – just like Einstein, I guess. The difference between realist interpretations of quantum mechanics and the Copenhagen dogmas is just a factor of 2 or π in the formulas, and Richard Feynman famously said we should not care about such factors (Feynman’s Lectures, III-2-4). Modern physicists fudge them away consistently. They have done much worse than that, actually. :-/ They are not interested in truth. Convention, dogma, indoctrination – non-scientific historical stuff – seem to keep them from it. And modern science gurus – the likes of Sean Carroll or Sabine Hossenfelder – play the age-old game of being interesting: they pretend to know something you do not know or – if they don’t – that they are close to getting the answers. They are not close to anything: the answers are there already. They just don’t want to tell you that because, yes, it’s the end of physics.

[1] See: John Stewart Bell, Speakable and unspeakable in quantum mechanics, pp. 169–172, Cambridge University Press, 1987.

The End of Physics

There is an army of physicists out there – still – trying to convince you there is some mystery that needs explaining. They are wrong: quantum-mechanical weirdness is weird, but it is not some mystery. We have a decent interpretation of what quantum-mechanical equations – such as Schrödinger’s equation, for example – actually mean. We can also understand what photons, electrons, or protons – light and matter – actually are, and such understanding can be expressed in terms of 3D space, time, force, and charge: elementary concepts that feel familiar to us. There is no mystery left.

Unfortunately, physicists have completely lost it: they have multiplied concepts and produced a confusing but utterly unconvincing picture of the essence of the Universe. They promoted weird mathematical concepts – the quark hypothesis is just one example among others – and gave them some kind of reality status. The Nobel Prize Committee then played the role of the Vatican by canonizing the newfound religion.

It is a sad state of affairs, because we are surrounded by too many lies already: the ads and political slogans that shout in our face as soon as we log on to Facebook to see what our friends are up to, or to YouTube to watch something or – what I often do – listen to the healing sounds of music.

The language and vocabulary of physics are complete. Does it make us happier beings? It should, shouldn’t it? I am happy I understand. I find consciousness fascinating – self-consciousness even more – but not because I think it is rooted in mystery. No. Consciousness arises from the self-organization of matter: order arising from chaos. It is a most remarkable thing – and it happens at all levels: atoms in molecules, molecules forming cellular systems, cellular systems forming biological systems. We are a biological system which, in turn, is part of much larger systems: biological, ecological – material systems. There is no God talking to us. We are on our own, and we must make the best out of it. We have everything, and we know everything.

Sadly, most people do not realize this.

Post scriptum: With the end of physics comes the end of technology as well, doesn’t it? All of the advanced technologies in use today are effectively already described in Feynman’s Lectures on Physics, which were written and published in the first half of the 1960s.

I thought about possible counterexamples, like optical-fiber cables, or the equipment that is used in superconducting quantum computing, such as Josephson junctions. But Feynman already describes Josephson junctions in the last chapter of his Lectures on Quantum Mechanics, which is a seminar on superconductivity. And fiber-optic cable is, essentially, a waveguide for light, which Feynman describes in much detail in Chapter 24 of his Lectures on Electromagnetism and Matter. Needless to say, computers were also already there, and Feynman’s lecture on semiconductors has all you need to know about modern-day computing equipment. [In case you briefly thought about lasers: the first laser was built in 1960, and Feynman’s lecture on masers describes lasers too.]

So it is all there. I was born in 1969, when Man first walked on the Moon. CERN and other spectacular research projects have since been established, but, when one is brutally honest, one has to admit these experiments have not added anything significant – neither to the knowledge nor to the technology base of humankind (and, yes, I know your first instinct is to disagree with that, but that is because study or the media indoctrinated you that way). It is a rather strange thought, but I think it is essentially correct. Most scientists, experts and commentators are trying to uphold a totally fake illusion of progress.

Explaining the proton mass and radius

Our alternative realist interpretation of quantum physics is pretty complete, but one thing that has been puzzling us is the mass density of the proton: why is it so massive compared to an electron? We simplified things by adding a factor in the Planck-Einstein relation. To be precise, we wrote it as E = 4·h·f. This allowed us to derive the proton radius from the ring current model:

[Illustration: proton radius]

This felt a bit artificial. Writing the Planck-Einstein relation using an integer multiple of h or ħ (E = n·h·f = n·ħ·ω) is not uncommon. You should have encountered this relation when studying the black-body problem, for example, and it is also commonly used in the context of the Bohr orbitals of electrons. But why is n equal to 4 here? Why not 2, or 3, or 5, or some other integer? We do not know: all we know is that the proton is very different. A proton is, effectively, not the antimatter counterpart of an electron—a positron. While the proton is much smaller – 459 times smaller, to be precise – its mass is 1,836 times that of the electron. Note that we have the same 1/4 factor here because the mass and Compton radius are inversely proportional:


This doesn’t look all that bad, but it feels artificial. In addition, our reasoning involved an unexplained difference – a mysterious but exact SQRT(2) factor, to be precise – between the theoretical and the experimentally measured magnetic moment of the proton. In short, we assumed some form factor must explain both the extraordinary mass density as well as this SQRT(2) factor, but we were not quite able to pin it down exactly. A remark on a video on our YouTube channel inspired us to think some more – thank you for that, Andy! – and we think we may have the answer now.
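For what it is worth, the n = 4 hypothesis is easy to check numerically: a = 4·ħ/(m·c) comes out at about 0.84 fm, which is close to the measured (CODATA) proton radius, and the same 1/4 factor ties the radius ratio (459) to the mass ratio (1,836). A quick sketch in Python, using the CODATA values for the constants:

```python
# CODATA values (SI units)
hbar = 1.054571817e-34   # reduced Planck constant (J*s)
c = 299792458.0          # speed of light (m/s)
m_p = 1.67262192e-27     # proton mass (kg)
m_e = 9.1093837e-31      # electron mass (kg)

# Ring current radius with the n = 4 factor: a = 4*hbar/(m_p*c)
a_proton = 4 * hbar / (m_p * c)      # in meters
a_electron = hbar / (m_e * c)        # electron Compton radius (n = 1)

print(f"proton radius  : {a_proton * 1e15:.4f} fm")    # ~0.841 fm
print(f"electron radius: {a_electron * 1e15:.2f} fm")  # ~386.16 fm
print(f"radius ratio   : {a_electron / a_proton:.1f}") # ~459
print(f"mass ratio / 4 : {m_p / m_e / 4:.1f}")         # ~459: the same 1/4 factor
```

The last two lines must agree exactly, of course, because the radii are inversely proportional to the masses by construction.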

We now think the mass – or energy – of a proton combines two oscillations: one is the Zitterbewegung oscillation of the pointlike charge (which is a circular oscillation in a plane) while the other is the oscillation of the plane itself. The illustration below is a bit horrendous (I am not so good at drawings) but might help you to get the point. The plane of the Zitterbewegung (the plane of the proton ring current, in other words) may oscillate itself between +90 and −90 degrees. If so, the effective magnetic moment will differ from the theoretical magnetic moment we calculated, and it will differ by that SQRT(2) factor.

Proton oscillation

Hence, we should rewrite our paper, but the logic remains the same: we just have a much better explanation now of why we should apply the energy equipartition theorem.
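Just to sketch, very roughly, how the energy equipartition theorem can produce such a factor (this is my own reading, not the derivation in the paper): if the total energy is split equally between the two oscillations – the ring current and the oscillation of its plane – then each mode carries E/2 and, because the energy of an oscillation goes with the square of its amplitude, the effective amplitude (and, hence, the effective magnetic moment) scales down by 1/√2:

```latex
E = E_{\text{ring}} + E_{\text{plane}}, \qquad
E_{\text{ring}} = E_{\text{plane}} = \frac{E}{2}
\;\Rightarrow\;
a_{\text{eff}} = \frac{a}{\sqrt{2}}
\;\Rightarrow\;
\mu_{\text{eff}} = \frac{\mu_{\text{theory}}}{\sqrt{2}}
```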

Mystery solved! 🙂

Post scriptum (9 August 2020): The solution is not as simple as you may imagine. When combining the idea of some other motion with the ring current, we must remember that the speed of light – the presumed tangential speed of our pointlike charge – cannot change. Hence, the radius must become smaller. We also need to think about distinguishing two different frequencies, and things quickly become quite complicated.

Do we only see what we want to see?

I had a short but interesting exchange with a student in physics—one of the very few who actually reads (some of) the stuff on this and my other blog (the latter is more technical than this one).

It was an exchange on the double-slit experiment with electrons—one of these experiments which is supposed to prove that classical concepts and electromagnetic theory fail when analyzing the smallest of small things and that only an analysis in terms of those weird probability amplitudes can explain what might or might not be going on.

Plain rubbish, of course. I asked him to look carefully at the patterns of blobs when only one of the slits is open, which are shown in the top and bottom illustrations below, respectively (the inset, top-left, shows how the mask moves over the slits—covering both slits, one of them, or none, respectively).

Interference 1

Of course, you see interference when both slits are open (all of the stuff in the middle above). However, I find it much more interesting to see that there is interference too (or diffraction—my preferred term for an interference pattern when there is only one slit or one hole) even if only one of the slits is open. In fact, the interference pattern when two slits are open is just the pattern one gets from the superposition of the diffraction patterns of the two slits respectively. Hence, an analysis in terms of probability amplitudes associated with this or that path—the usual thing: add the amplitudes and then take the absolute square to get the probabilities—is pretty nonsensical. The tough question physicists need to answer is not how interference can be explained, but this: how do we explain the diffraction pattern when electrons go through one slit only?
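The point about superposition can be made quantitative with a simple far-field (Fraunhofer) wave model: compute the diffraction amplitude of each slit, add the two amplitudes (the phase difference comes from the slit separation), and only then square. The numbers below – a 62 nm slit width, a 272 nm separation, and a 50 pm electron wavelength – are roughly those reported for the Nebraska-Lincoln experiment, but treat them as illustrative values only:

```python
import numpy as np

# Far-field (Fraunhofer) wave model of the double-slit experiment with electrons
wavelength = 50e-12   # electron de Broglie wavelength (illustrative)
slit_width = 62e-9
separation = 272e-9

theta = np.linspace(-5e-3, 5e-3, 2001)  # small diffraction angles (rad)

def slit_amplitude(theta):
    """Single-slit diffraction amplitude (a sinc function of the angle)."""
    beta = np.pi * slit_width * np.sin(theta) / wavelength
    return np.sinc(beta / np.pi)  # np.sinc(x) = sin(pi*x)/(pi*x)

# One slit open: the intensity is the diffraction (sinc^2) pattern
single = slit_amplitude(theta) ** 2

# Both slits open: superpose the AMPLITUDES first (each slit contributes a
# phase set by its position), then take the squared magnitude
phase = np.pi * separation * np.sin(theta) / wavelength
double = np.abs(slit_amplitude(theta) * (np.exp(1j * phase) + np.exp(-1j * phase))) ** 2

# The two-slit intensity is the one-slit envelope times cos^2 fringes
assert np.allclose(double, 4 * single * np.cos(phase) ** 2)
```

Note that `single` is not flat: it has secondary lobes away from the center, which is exactly the one-slit diffraction pattern the student did not see at first.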


I realize you may be as brainwashed as the bright young student who contacted me: he did not see it at first! You should, therefore, probably have another look at the illustrations above too: there are brighter and darker spots when one slit is open too, especially on the sides—a bit further away from the center.

[Just do it before you read on: look, once more, at the illustration above before you look at the next.]


The diffraction pattern (when only one slit is open) resembles that of light going through a circular aperture (think of light going through a simple pinhole), which is shown below: it is known as the Airy disk or the Airy pattern. [I should, of course, mention the source of my illustrations: the one above (on electron interference) comes from the article on the 2012 Nebraska-Lincoln experiment, while the ones below come from the articles on the Airy disk and the (angular) resolution of a microscope in Wikipedia respectively. I no longer refer to Feynman’s Lectures or related material because of an attack by the dark force.]


When we combine two pinholes and move them closer to or further from each other, we get what is shown below: a superposition of the two diffraction patterns. The patterns in that double-slit experiment with electrons look like what you would get using slits instead of pinholes.


It obviously led to a bit of an Aha-Erlebnis for the student who bothered to write and ask. I told him a mathematical analysis using classical wave equations would not be easy, but that it should be possible. Unfortunately, mainstream physicists − academic teachers and professors, in particular – seem to prefer the nonsensical but easier analysis in terms of probability amplitudes. I guess they only see what they want to see. :-/

Note: For those who would want to dig a bit further, I can refer them to a September 20, 2014 post, as well as a successor post on diffraction and interference of EM waves (plain ‘light’, in other words). The dark force did some damage to both, but they are still very readable. In fact, the removal of one or two illustrations and formulas there will force you to think for yourself, so it is all good. 🙂

Uncertainty, quantum math, and A(Y)MS

This morning, one of my readers wrote me to say I should refrain from criticizing mainstream theory or – if I do – do so in friendlier or more constructive terms. He is right, of course: my blog on Feynman’s Lectures proves I suffer from Angry Young Man Syndrome (AYMS), which does not befit a 50-year-old. It is also true I will probably not be able to convince those whom I have not convinced yet.

What to do? I should probably find easier metaphors and bridge apparent contradictions—and write friendlier posts and articles, of course! 🙂

In my last paper, for example, I make a rather harsh distinction between discrete physical states and continuous logical states in mainstream theory. We may illustrate this using Schrödinger’s thought experiment with the cat: we know the cat is either dead or alive—depending on whether or not the poison was released. However, as long as we do not check, we may describe it by some logical state that mixes the ideas of a dead and a live cat. This logical state is defined in probabilistic terms: as time goes by, the likelihood of the cat being dead increases. The actual physical state does not have such ambiguity: the cat is either dead or alive.

The point that I want to make here is that the uncertainty is not physical. It is in our mind only: we have no knowledge of the physical state because we cannot (or do not want to) measure it, or because measurement is not possible as it would interfere with (or possibly even destroy) the system: we are usually probing the smallest of stuff with the smallest of stuff in these experiments—which is why Heisenberg himself originally referred to uncertainty as an Ungenauigkeit rather than an Unbestimmtheit.

So, yes, as long as we do not look inside of the box – by opening it or, preferably, by looking through some window on the side (the cat could scratch you or jump out when you open it) – we may think of Schrödinger’s cat-in-the-box experiment as a simple quantum-mechanical two-state system. However, it is a rather special one: the poison is likely to be released after some time only (it depends on a probabilistic process itself) and we should, therefore, model this time as a random variable which will be distributed – usually more or less normally – around some mean. The (cumulative) probability distribution function for the cat being dead will, therefore, resemble something like the curves below, whose shapes depend not only on the mean but also on the standard deviation from the mean.
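To make that concrete, here is a minimal sketch of such a cumulative probability curve (the mean of 10 hours and the standard deviation of 2 hours are made-up numbers, of course), using the normal CDF:

```python
from math import erf, sqrt

# Hypothetical model: the poison-release time is (approximately) normally
# distributed with a mean of 10 hours and a standard deviation of 2 hours.
# The probability that the cat is dead by time t is then the normal CDF.
def prob_dead(t, mean=10.0, sd=2.0):
    return 0.5 * (1.0 + erf((t - mean) / (sd * sqrt(2.0))))

for t in (6, 10, 14):
    print(f"t = {t:>2} h -> P(cat is dead) = {prob_dead(t):.3f}")
# t =  6 h -> P(cat is dead) = 0.023
# t = 10 h -> P(cat is dead) = 0.500
# t = 14 h -> P(cat is dead) = 0.977
```

Changing the mean shifts the curve; changing the standard deviation changes how steeply it rises – which is exactly what the curves below illustrate.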


Schrödinger’s cat-in-the-box experiment involves a transition from an alive state to a dead state: it is certain and irreversible. Most real-life quantum-mechanical two-state systems will look very different: they will not involve some dead-or-alive situation but two very different states—position states, or energy states, for example—and the probability of the system being in this or that physical state will, therefore, slosh back and forth between the two, as illustrated below.

Probabilities desmos

I took this illustration from the mentioned paper, which deals with amplitude math, so I should refer you there for an explanation of the rather particular cycle time (π) and measurement units (ħ/A). The important thing here – in the context of this blog post, that is – is not the nitty-gritty but the basic idea of a quantum-mechanical two-state system. That basic idea is needed because the point that I want to make is this: thinking that some system can be in two (discrete) physical states only may often be a bit of an idealization too. The system – or whatever it is that we are trying to describe – might be in between two states while oscillating between them, for example—or we may, perhaps, not be able to define the position of whatever it is that we are tracking—say, an atom or a nucleus in a molecule—because the idea of an atom or a nucleus might itself be quite fuzzy.
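For reference, the sloshing back and forth in the illustration can be sketched as follows, with time measured in the natural units ħ/A used in the paper, so that the cycle time of the probabilities is π:

```python
from math import cos, sin, pi

# Sketch of a quantum-mechanical two-state system: the probabilities slosh
# back and forth between the two states (time t in units of hbar/A).
def p_state1(t):
    return cos(t) ** 2

def p_state2(t):
    return sin(t) ** 2

# The probabilities always add up to one...
for t in (0.0, pi / 4, pi / 2, 3 * pi / 4, pi):
    assert abs(p_state1(t) + p_state2(t) - 1.0) < 1e-12

# ...and the cycle time is pi
assert abs(p_state1(pi / 2)) < 1e-12    # fully in state 2 halfway through a cycle
assert abs(p_state1(pi) - 1.0) < 1e-12  # back in state 1 after one cycle (t = pi)
```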

To explain what fuzziness might be in the context of physics, I often use the metaphor below: the propeller of the little plane is always somewhere, obviously—but the question is: where exactly? When the frequency of going from one place to another becomes quite high, the concept of an exact position becomes quite fuzzy. The metaphor of a rapidly rotating propeller may also illustrate the fuzziness of the concept of mass or even energy: if we think of the propeller being pretty much everywhere, then it is also more useful to think in terms of some dynamically defined mass or energy density concept in the space it is, somehow, filling.

[Illustration: propeller]

This, then, should take some of the perceived harshness out of my analyses: I should not say the mainstream interpretation of quantum physics is all wrong and that states are either physical or logical: our models may inevitably have to mix a bit of both! So, yes, I should be much more polite and say the mainstream interpretation prefers to leave things vague or unsaid, and that physicists should, therefore, be more precise and avoid hyping up stuff that can easily be explained in terms of common-sense physical interpretations.

Having said that, I think that only sounds slightly more polite, and I also continue to think some Nobel Prize awards did exactly that: they rewarded the invention of hyped-up concepts rather than true science, and so now we are stuck with these things. To be precise, I think the award of the 1932 Nobel Prize in Physics to Werner Heisenberg (announced and awarded in 1933) is a very significant example, and it was followed by others. I am not shy or ashamed when writing this because I know I am in rather good company thinking that. Unfortunately, not enough people dare to say what they really think, and that is that the Emperor may have no clothes.

That is sad, because there are a lot of enthusiastic and rather smart people out there who try to understand physics but become disillusioned when they enroll in online or real-world physics courses: when they ask too many questions, they are effectively told to just shut up and calculate. I also think John Baez’s Crackpot Index is, all too often, abused to defend mainstream mediocrity and Ivory Tower theorizing. At the same time, I promise my friendly critic I will think some more about my Angry 50-Year-Old Syndrome.

Perhaps I should take a break from quantum mechanics and study, say, chaos theory, or fluid dynamics—something else, some new math. I should probably also train to go up Mont Blanc again this year: I gained a fair amount of physical weight while doing all this mental exercise over the past few years, and I do intend to climb again—50-year-old or not. Let’s just call it AMS. 🙂 And, yes, I should also focus on my day job, of course! 🙂

However, I probably won’t get rid of the quantum physics virus any time soon. In fact, I just started exploring the QCD sector, and I am documenting this new journey in a new blog: Reading Einstein. Go have a look. 🙂

Post scriptum: The probability distribution for the cat’s death sentence is, technically speaking, a Poisson distribution (the name is easy to remember because it does not differ too much from the poison that is used). However, because we are modeling probabilities here, its parameter λ should be thought of as being very large. The distribution, therefore, approaches a normal distribution. Quantum-mechanical amplitude math implicitly assumes we can use normal distributions to model state transitions (see my paper on Feynman’s Time Machine).
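The approximation mentioned here is easy to verify numerically: for a large λ, the Poisson probabilities come very close to those of a normal distribution with mean λ and variance λ (the value λ = 400 below is purely illustrative):

```python
from math import exp, lgamma, log, pi, sqrt

lam = 400.0  # illustrative Poisson parameter (large)

def poisson_pmf(k, lam):
    # computed in log space to avoid overflow for large k and lam
    return exp(k * log(lam) - lam - lgamma(k + 1))

def normal_pdf(x, mean, var):
    return exp(-((x - mean) ** 2) / (2 * var)) / sqrt(2 * pi * var)

# Compare the Poisson probabilities to the matching normal density
# at the mean and one standard deviation (sqrt(400) = 20) on either side
for k in (380, 400, 420):
    p, n = poisson_pmf(k, lam), normal_pdf(k, lam, lam)
    print(f"k = {k}: Poisson {p:.5f} vs. normal {n:.5f}")
```

The two columns agree to within a few percent already at λ = 400, and the agreement improves as λ grows.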