A Zitterbewegung model of the neutron

As part of my ventures into QCD, I quickly developed a Zitterbewegung model of the neutron, as a complement to my first sketch of a deuteron nucleus. The math of orbitals is interesting. Whatever field you have, one can model it using a coupling constant between the proportionality coefficient of the force and the charge it acts on. That ties in nicely with my earlier thoughts on the meaning of the fine-structure constant.

My realist interpretation of quantum physics focuses on explanations involving the electromagnetic force only, but the matter-antimatter dichotomy still puzzles me very much. Also, the idea of virtual particles is no longer anathema to me, but I still want to model them as particle-field interactions and the exchange of real (angular or linear) momentum and energy, with a quantization of momentum and energy obeying the Planck-Einstein law.

The proton model will be key. We cannot explain it in the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.

The calculation of forces inside a muon-electron and a proton (see ) is an interesting exercise: it is the only thing which explains why an electron annihilates a positron but electrons and protons can live together (the ‘anti-matter’ nature of charged particles only shows because of opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).

[…]

In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !

Cheers – Jean-Louis

The electromagnetic deuteron model

In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.

I took the opportunity to revisit Yukawa’s nuclear potential and demolish his modeling of a new nuclear force without a charge to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper, which started off all of the hype on virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic, plus – totally not done, of course ! – an utter disregard for physical dimensions and the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 revision of SI units puts physics firmly back onto the road to reality – or so we hope.

Paolo Di Sia’s and my paper shows one gets very reasonable energy and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ from another proton (the neutron) towards its own side so as to create an electric dipole moment. So it is just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamical equations for dry versus wet water once again… Phew ! Where to start?

I have no experience – I have very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago – living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – the objections are just a misinterpretation of the concept of ‘effective mass’ by the naysayers). What kind of wave equation are we looking at? One that integrates inverse-square and inverse-cube force field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial development of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the Liénard–Wiechert equations, which are written in terms of the scalar and vector potentials φ and A instead of E and B vectors, but it should all work out.]
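
Just to fix the ideas, here is a minimal SymPy sketch of that binomial development of the Lorentz factor (the relativistic mass is m = γm₀, so the second- and third-order terms are the ½m₀v² and (3/8)m₀v⁴/c² terms alluded to above); whether these terms can really be related to inverse-square and inverse-cube fields is, of course, still just the conjecture above.

```python
import sympy as sp

beta = sp.symbols('beta', positive=True)   # beta = v/c
gamma = 1 / sp.sqrt(1 - beta**2)           # Lorentz factor

# Binomial (Taylor) development of gamma in beta = v/c:
#   gamma = 1 + beta^2/2 + 3*beta^4/8 + 5*beta^6/16 + ...
# so m*c^2 = gamma*m0*c^2 = m0*c^2 + (1/2)*m0*v^2 + (3/8)*m0*v^4/c^2 + ...
print(sp.series(gamma, beta, 0, 7))
```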

If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So then we can calculate frequencies and radii of orbitals now, right? The use of natural units and imaginary units to represent rotations/orthogonality in space might make calculations easy (B = iE). Indeed, with the 2019 revision of SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math by abstracting away the physical dimensions).

Hey ! Perhaps we can model everything with quaternions, using the imaginary units (i, j and k) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (dct)² expression to a modified version of Hamilton’s q = a + ib + jc + kd expression then). Using vector equations throughout and thinking of h as a vector when using the E = hf and h = pλ Planck-Einstein relations (something with a magnitude and a direction) should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for both linear as well as circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
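
For what it is worth, here is a minimal numerical sketch (just Python, not a physical model) of how the quaternion product encodes rotations in 3D space and keeps the right-hand rule consistent; the special-relativity part of the conjecture is not included here.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions written as (a, b, c, d) = a + ib + jc + kd."""
    pa, pb, pc, pd = p
    qa, qb, qc, qd = q
    return np.array([
        pa*qa - pb*qb - pc*qc - pd*qd,
        pa*qb + pb*qa + pc*qd - pd*qc,
        pa*qc - pb*qd + pc*qa + pd*qb,
        pa*qd + pb*qc - pc*qb + pd*qa,
    ])

def rotate(v, axis, angle):
    """Rotate the 3D vector v by 'angle' (radians) about the unit vector 'axis', via q v q*."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return qmul(qmul(q, np.concatenate(([0.0], v))), q_conj)[1:]

# Right-hand rule check: a +90° rotation about the z-axis takes the x-axis into the y-axis.
print(rotate(np.array([1.0, 0.0, 0.0]), [0.0, 0.0, 1.0], np.pi / 2))   # ~ [0, 1, 0]
```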

Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force. And trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense! [We need nothingness and infinity as mathematical concepts (limits, really) but they cannot possibly represent anything real, right?]

The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.84 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
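
A quick numerical check of that 1/4 factor (a minimal sketch using the CODATA values in scipy.constants):

```python
from scipy.constants import hbar, c, m_p, femto

a = hbar / (m_p * c)        # radius of a pointlike charge with the proton's energy (E = h·f)
print(a / femto)            # ≈ 0.21 fm
print(4 * a / femto)        # ≈ 0.84 fm, i.e. the measured proton radius (E = 4·h·f)
```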

[…]

This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂

Brussels, 30 December 2020

Post scriptum (1 January 2021): Lots of stuff coming together here ! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to avoid reducing three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of distance functions in field or potential formulas are OK, however. 🙂

The complementarity of wave- and particle-like viewpoints on EM wave propagation

In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”[1]

The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.

We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.


[1] W.E. Lamb Jr., Anti-photon, in: Applied Physics B volume 60, pages 77–84 (1995).

Signing off…

I have been exploring the weird wonderland of physics for over seven years now. On several occasions, I thought I should just stop. It was rewarding, but terribly exhausting at times as well! I am happy I did not give up, if only because I finally managed to come up with a more realist interpretation of the ‘mystery’ of matter-antimatter pair production/annihilation. So, yes, I think I can confidently state I finally understand physics the way I want to understand it. It was an extraordinary journey, and I am happy I could share it with many fellow searchers: 300 posts and 300,000 hits on my first website now, 10,000+ downloads of papers (including the downloads from Phil Gibbs’ site and academia.edu) and, better still, lots of interesting conversations.

One of these conversations was with a fine nuclear physicist, Andrew Meulenberg. We were in touch on the idea of a neutron (some kind of combination of a proton and a ‘nuclear’ electron—following up on Rutherford’s original idea, basically). More importantly, we chatted about, perhaps, developing a model for the deuterium nucleus (deuteron)—the hydrogen isotope which consists of a proton and a neutron. However, I feel I need to let go here, if only because I do not think I have the required mathematical skills for a venture like this. I feel somewhat guilty about letting him down. Hence, just in case someone out there feels he could contribute to this, I am copying my last email to him below. It sort of sums up my basic intuitions in terms of how one could possibly approach this.

Can it be done? Maybe. Maybe not. All I know is that not many have been trying since Bohr’s young wolves hijacked scientific discourse after the 1927 Solvay Conference and elevated a mathematical technique – perturbation theory – to the scientific dogma which is now referred to as quantum field theory.

So, yes, now I am really signing off. Thanks for reading me, now or in the past—I wrote my first post here about seven years ago! I hope it was not only useful but enjoyable as well. Oh—And please check out my YouTube channel on Physics ! 🙂

From: Jean Louis Van Belle
Sent: 14 November 2020 17:59
To: Andrew Meulenberg
Subject: Time and energy…

These things are hard… You are definitely much smarter with these things than I can aspire to… But I do have ideas. We must analyze the proton in terms of a collection of infinitesimally small charges – just like Feynman’s failed assembly of the electron (https://www.feynmanlectures.caltech.edu/II_28.html#Ch28-S3): it must be possible to do this and it will give us the equivalent of electromagnetic mass for the strong force. The assembly of the proton out of infinitesimally small charge bits will work because the proton is, effectively, massive. Not like an electron, which effectively appears as a ‘cloud’ of charge and, therefore, has several radii and, yes, can pass through the nucleus and also ‘envelops’ a proton when forming a neutron with it.

I cannot offer much in terms of analytical skills here. All of quantum physics – the new model of a hydrogen atom – grew out of the intuition of a young genius (Louis de Broglie) and a seasoned mathematical physicist (Erwin Schroedinger) finding a mathematical equation for it. That model is valid still – we just need to add spin from the outset (cf. the plus/minus sign of the imaginary unit) and acknowledge the indeterminacy in it is just statistical, but these are minor things.

I have not looked at your analysis of a neutron as a (hyper-)excited state of the hydrogen atom yet but it must be correct: what else can it be? It is what Rutherford said it should be when he first hypothesized the existence of a neutron.

I do not know how much time I want to devote to this (to be honest, I am totally sick of academic physics) but – whatever time I have – I want to contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.

JL

Hope

Those who read this blog, or my papers, know that the King of Science, physics, is in deep trouble. [In case you wonder, the Queen of Science is math.]

The problem is rather serious: a lack of credibility. It would kill any other business, but things work differently in academia. The question is this: how many professional physicists would admit this? An even more important question is: how many of those who admit this would try to do something about it?

We hope the proportion of both is increasing – so we can trust that at least the dynamics of all of this are OK. I am hopeful – but I would not bet on it.

Post scriptum: A researcher started a discussion on ResearchGate earlier this year. The question for discussion is this: “In September 2019, the New York Times printed an opinion piece by Sean Carroll titled ‘Even Physicists Don’t Understand Quantum Mechanics. Worse, they don’t seem to want to understand it.’ (https://www.nytimes.com/2019/09/07/opinion/sunday/quantum-physics.html) Is it true that physicists don’t want to understand QM? And if so then why?” I replied as follows:

“Sean Carroll is one of the Gurus that is part of the problem rather than the solution: he keeps peddling approaches that have not worked in the past, and can never be made to work in the future. I am an amateur physicist only, but I have not come across a problem that cannot be solved by ‘old’ quantum physics, i.e. a combination of Maxwell’s equations and the Planck-Einstein relation. Lamb shift, anomalous magnetic moment, electron-positron pair creation/annihilation (a nuclear process), behavior of electrons in semiconductors, superconductivity, etc. There is a (neo-)classical solution for everything: no quantum field and/or perturbation theories are needed. Protons and electrons as elementary particles (and neutrons as the bound state of a proton and a nuclear electron), and photons and neutrinos as lightlike particles, carrying electromagnetic and strong field energy respectively. That’s it. Nothing more. Nothing less. Everyone who thinks otherwise is ‘lost in math’, IMNSHO.”

Brutal? Yes. Very much so. The more important question is this: is it true? I cannot know for sure, but it comes across as being truthful to me.

The true mystery of quantum physics

In many of our papers, we presented the orbital motion of an electron around a nucleus or inside of a more complicated molecular structure[1], as well as the motion of the pointlike charge inside of an electron itself, as fundamental oscillations. You will say: what is fundamental and, conversely, what is not? These oscillations are fundamental in the sense that these motions are (1) perpetual or stable and (2) also imply a quantization of space resulting from the Planck-Einstein relation.

Needless to say, this quantization of space looks very different depending on the situation: the order of magnitude of the radius of orbital motion around a nucleus is about 150 times the electron’s Compton radius[2] so, yes, that is very different. However, the basic idea is always the same: a pointlike charge going round and round in a rather regular fashion (otherwise our idea of a cycle time (T = 1/f) and an orbital would make no sense whatsoever), and that oscillation then packs a certain amount of energy as well as Planck’s quantum of action (h). In fact, that’s just what the Planck-Einstein relation embodies: E = h·f. Frequencies and, therefore, radii and velocities are very different (we think of the pointlike charge inside of an electron as whizzing around at lightspeed, while the order of magnitude of the velocity of an electron in an atomic or molecular orbital is given by the fine-structure constant: v = α·c/n, with n the principal quantum number, or the shell in the gross structure of an atom), but the underlying equations of motion – as Dirac referred to them – are not fundamentally different.
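
A quick numerical check of those orders of magnitude (the ‘Compton radius’ here is just ħ/mₑc; the exact ratio between the Bohr radius and the Compton radius is 1/α ≈ 137):

```python
from scipy.constants import hbar, c, m_e, fine_structure, physical_constants

a_C = hbar / (m_e * c)                          # Compton radius of the electron (≈ 0.386 pm)
a_B = physical_constants['Bohr radius'][0]      # Bohr radius (≈ 52.9 pm)

print(a_B / a_C)                                # ≈ 137 ≈ 1/alpha
print(fine_structure * c)                       # v = alpha·c ≈ 2.19e6 m/s (n = 1 orbital velocity)
```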

We can look at these oscillations in two very different ways. Most Zitterbewegung theorists (or realist thinkers, I might say) think of it as a self-perpetuating current in an electromagnetic field. David Hestenes is probably the best-known theorist in this class. However, we feel such a view does not satisfactorily answer the quintessential question: what keeps the charge in its orbit? We, therefore, preferred to stick with an alternative model, which we loosely refer to as the oscillator model.

However, truth be told, we are aware this model comes with its own interpretational issues. Indeed, our interpretation of this oscillator model oscillated between the metaphor of a classical (non-relativistic) two-dimensional oscillator (think of a Ducati V2 engine, with the two pistons working in tandem at a 90-degree angle) and the mathematically correct analysis of a (one-dimensional) relativistic oscillator, which we may sum up in the following relativistically correct energy conservation law:

dE/dt = d[kx²/2 + mc²]/dt = 0
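
[For completeness: this conservation law follows in one line from the relativistic work-energy theorem, dE/dt = F·v (with E = mc² = γm₀c² and F = dp/dt), and the restoring force F = –kx of the oscillator: d(γm₀c²)/dt = F·v = –kx·(dx/dt) = –d(kx²/2)/dt, so d[kx²/2 + γm₀c²]/dt = 0 indeed.]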

More recently, we noted the number of dimensions (think of the number of pistons of an engine) should actually not matter at all: an old-fashioned radial airplane engine has 3, 5, 7, or more cylinders (the odd number has to do with the firing mechanism for four-stroke engines), but the interplay between those pistons can be analyzed just as well as the ‘sloshing back and forth’ of kinetic and potential energy in a dynamic system (see our paper on the meaning of uncertainty and the geometry of the wavefunction). Hence, it seems any number of springs or pistons working together would do the trick: somehow, linear becomes circular motion, and vice versa. But then what number of dimensions should we use for our metaphor, really?

We now think the ‘one-dimensional’ relativistic oscillator is the correct mathematical analysis, but we should interpret it more carefully. Look at the dE/dt = d[kx²/2 + mc²]/dt = d(PE + KE)/dt = 0 equation once more.

For the potential energy, one gets the same kx²/2 formula one gets for the non-relativistic oscillator. That is no surprise: potential energy depends on position only, not on velocity, and there is nothing relative about position. However, the (½)m₀v² term that we would get when using the non-relativistic formulation of Newton’s law is now replaced by the mc² = γm₀c² term. Both energies vary – with position and with velocity respectively – but the equation above tells us their sum is some constant. Equating x to 0 (when the velocity v = c) gives us the total energy of the system: E = mc². Just as it should be. 🙂 So how can we now reconcile these two models? One two-dimensional but non-relativistic, and the other relativistically correct but one-dimensional only? We always get this weird 1/2 factor! And we cannot think it away, so what is it, really?

We still don’t have a definite answer, but we think we may be closer to the conceptual locus where these two models might meet: the key is to interpret the x and v in the equation for the relativistic oscillator as (1) x being the distance measured along an orbital and (2) v being the tangential velocity of the pointlike charge along that orbital.

Huh? Yes. Read everything slowly and you might see the point. [If not, don’t worry about it too much. This is really a minor (but important) point in my so-called realist interpretation of quantum mechanics.]

If you get the point, you’ll immediately cry wolf and say such an interpretation of x as a distance measured along some orbital (as opposed to the linear concept we are used to) and, consequently, thinking of v as some kind of tangential velocity along such an orbital, looks pretty random. However, keep thinking about it, and you will have to admit it is a rather logical way out of the logical paradox. The formula for the relativistic oscillator assumes a pointlike charge with zero rest mass oscillating between v = 0 and v = c. However, something with zero rest mass will always be associated with some velocity: it cannot be zero! Think of a photon here: how would you slow it down? And you may think we could, perhaps, slow down a pointlike electric charge with zero rest mass in some electromagnetic field but, no! The slightest force on it will give it infinite acceleration according to Newton’s force law. [Admittedly, we would need to distinguish here between its relativistic expression (F = dp/dt) and its non-relativistic expression (F = m₀·a) when further dissecting this statement, but you get the idea. Also note that we are discussing our electron here, in which we do have a zero-rest-mass charge. In an atomic or molecular orbital, we are talking about an electron with a non-zero rest mass: just the mass of the electron whizzing around at a (significant) fraction (α) of lightspeed.]

Hence, it is actually quite rational to argue that the relativistic oscillator cannot be linear: the velocity must be some tangential velocity, always, and – for a pointlike charge with zero rest mass – it must equal lightspeed, always. So, yes, we think this line of reasoning might well be the conceptual locus where the one-dimensional relativistic oscillator (E = m·a²·ω²) and the two-dimensional non-relativistic oscillator (E = 2·m·a²·ω²/2 = m·a²·ω²) could meet. Of course, we welcome the view of any reader here! In fact, if there is a true mystery in quantum physics (we do not think so, but we know people – academics included – like mysterious things), then it is here!
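
[A quick consistency check on these energy formulas: in the ring-current model, the radius is a = ħ/mc and the Planck-Einstein frequency is ω = E/ħ = mc²/ħ, so a²·ω² = (ħ/mc)²·(mc²/ħ)² = c², and both expressions then give E = m·a²·ω² = mc², as they should.]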

Post scriptum: This is, perhaps, a good place to answer a question I sometimes get: what is so natural about relativity and a constant speed of light? It is not so easy, perhaps, to show why and how Lorentz’ transformation formulas make sense but, in contrast, it is fairly easy to think of the absolute speed of light like this: infinite speeds do not make sense, physically as well as mathematically. From a physics point of view, the issue is this: something that moves about at an infinite speed is everywhere and, therefore, nowhere. So it doesn’t make sense. Mathematically speaking, you should not think of v reaching infinity but of the limit of a ratio in which the distance interval goes to infinity while the time interval goes to zero. So, in the limit, we get a division of an infinite quantity by 0. That’s not infinity but an indeterminacy: it is totally undefined! Indeed, mathematicians can easily deal with infinity and zero, but divisions like zero divided by zero, or infinity divided by zero, are meaningless. [Of course, we may have different mathematical functions in the numerator and denominator whose limits yield those values. There is then a reasonable chance we will be able to factor stuff out so as to get something else. We refer to such situations as indeterminate forms, but these are not what we refer to here. The informed reader will, perhaps, also note the division of infinity by zero does not figure in the list of indeterminacies, but any division by zero is generally considered to be undefined.]


[1] It may be an extra electron, such as, for example, the electron which jumps from place to place in a semiconductor (see our quantum-mechanical analysis of electric currents). Also, as Dirac first noted, the analysis is actually also valid for electron holes, in which case our atom or molecule will be positively ionized instead of being neutral or negatively charged.

[2] We say 150 because that is close enough to the 1/α ≈ 137 factor that relates the Bohr radius to the Compton radius of an electron. The reader may not be familiar with the idea of a Compton radius (as opposed to the Compton wavelength) but we refer him or her to our Zitterbewegung (ring current) model of an electron.

Electron propagation in a lattice

It is done! My last paper on this topic (available on Phil Gibbs’ site, my ResearchGate page or academia.edu) should conclude my work on the QED sector. It is a thorough exploration of the hitherto mysterious concept of the effective mass and all that.

The result I got is actually very nice: my calculation of the order of magnitude of the k·b factor in the formula for the energy band (the conduction band, as you may know it) shows that the usual small-angle approximation of the formula does not make all that much sense. This shows that some ‘realist’ thinking about what is what in these quantum-mechanical models does constrain the options: we cannot just multiply wave numbers with some random multiple of π or 2π. These things have a physical meaning!
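
To illustrate the point (a minimal sketch, assuming Feynman’s one-dimensional energy-band formula E(k) = E₀ – 2A·cos(kb), with b the lattice spacing and 4A the band width): the small-angle approximation cos(kb) ≈ 1 – (kb)²/2, which is what yields the usual effective-mass formula m_eff = ħ²/(2Ab²), goes badly wrong well before k·b reaches the edge of the band (k·b = π).

```python
import numpy as np

E0, A, b = 0.0, 1.0, 1.0                 # illustrative units: A sets the band width (4A)
kb = np.linspace(0.0, np.pi, 5)          # from the centre of the band to its edge

exact = E0 - 2 * A * np.cos(kb)          # Feynman's energy-band formula
parabolic = E0 - 2 * A + A * kb**2       # small-angle (effective-mass) approximation

for x, ex, pa in zip(kb, exact, parabolic):
    print(f"k·b = {x:4.2f}   exact = {ex:6.3f}   parabolic = {pa:6.3f}")
# At k·b = pi the parabola gives ≈ 7.87 instead of 2.0: the approximation only holds near k·b = 0.
```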

So no multiverses or many worlds, please! One world is enough, and it is nice we can map it to a unique mathematical description.

I should now move on and think about the fun stuff: what is going on in the nucleus and all that? Let’s see where we go from here. Downloads on ResearchGate have been going through the roof lately (a thousand reads on ResearchGate is better than ten thousand on viXra.org, I guess), so it is all very promising. 🙂

Understanding lasers, semiconductors and other technical stuff

I wrote a lot of papers but most of them – if not all – deal with very basic stuff: the meaning of uncertainty (just statistical indeterminacy because we have no information on the initial condition of the system), the Planck-Einstein relation (how Planck’s quantum of action models an elementary cycle or an oscillation), and Schrödinger’s wavefunctions (the solutions to his equation) as the equations of motion for a pointlike charge. If anything, I hope I managed to restore a feeling that quantum electrodynamics is not essentially different from classical physics: it just adds the element of a quantization – of energy, momentum, magnetic flux, etcetera.

Importantly, we also talked about what photons and electrons actually are, and that electrons are pointlike but not dimensionless: their magnetic moment results from an internal current and, hence, spin is something real – something we can explain in terms of a two-dimensional perpetual current. In the process, we also explained why electrons take up some space: they have a radius (the Compton radius). So that explains the quantization of space, if you want.
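
[For the reader who wants to see the arithmetic: in the ring-current picture the current is I = qc/(2πa) (the charge passes any point once per cycle), so the magnetic moment is μ = I·πa² = qc·a/2, and with a = ħ/mc one gets μ = qħ/2m, which is the Bohr magneton.]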

We also talked about fields and told you – because matter-particles do have a structure – that we should have a dynamic view of the fields surrounding them. Potential barriers – or their corollary: potential wells – should, therefore, not be thought of as static fields. They result from one or more charges moving around and these fields, therefore, vary in time. Hence, a particle breaking through a ‘potential wall’ or coming out of a potential ‘well’ is just using an opening, so to speak, which corresponds to a classical trajectory.

We, therefore, have the guts to say that some of what you will read in a standard textbook is plain nonsense. Richard Feynman, for example, starts his lecture on a current in a crystal lattice by writing this: “You would think that a low-energy electron would have great difficulty passing through a solid crystal. The atoms are packed together with their centers only a few angstroms apart, and the effective diameter of the atom for electron scattering is roughly an angstrom or so. That is, the atoms are large, relative to their spacing, so that you would expect the mean free path between collisions to be of the order of a few angstroms—which is practically nothing. You would expect the electron to bump into one atom or another almost immediately. Nevertheless, it is a ubiquitous phenomenon of nature that if the lattice is perfect, the electrons are able to travel through the crystal smoothly and easily—almost as if they were in a vacuum. This strange fact is what lets metals conduct electricity so easily; it has also permitted the development of many practical devices. It is, for instance, what makes it possible for a transistor to imitate the radio tube. In a radio tube electrons move freely through a vacuum, while in the transistor they move freely through a crystal lattice.” [The italics are mine.]

It is nonsense because it is not the electron that is traveling smoothly, easily or freely: it is the electrical signal, and – no ! – that is not to be equated with the quantum-mechanical amplitude. The quantum-mechanical amplitude is just a mathematical concept: it does not travel through the lattice in any physical sense ! In fact, it does not even travel through the lattice in a logical sense: the quantum-mechanical amplitudes are to be associated with the atoms in the crystal lattice, and describe their state – i.e. whether or not they have an extra electron or (if we are analyzing electron holes in the lattice) if they are lacking one. So the drift velocity of the electron is actually very low, and the way the signal moves through the lattice is just like in the game of musical chairs – but with the chairs on a line: all players agree to kindly move to the next chair for the new arrival so the last person on the last chair can leave the game to get a beer. So here it is the same: one extra electron causes all other electrons to move. [For more detail, we refer to our paper on matter-waves, amplitudes and signals.]
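
To put an illustrative number on that very low drift velocity (assumed values: a 1 A current through a 1 mm² copper wire, with roughly one conduction electron per copper atom):

```python
from scipy.constants import e

n = 8.5e28       # conduction electrons per m^3 in copper (about one per atom)
I = 1.0          # current (A)
A = 1.0e-6       # cross-section (1 mm^2)

v_drift = I / (n * e * A)
print(v_drift)   # ≈ 7e-5 m/s: less than a tenth of a millimetre per second
```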

But so, yes, we have not said much about semiconductors, lasers and other technical stuff. Why not? Not because it should be difficult: we already cracked the more difficult stuff (think of an explanation of the anomalous magnetic moment, the Lamb shift, or one-photon Mach-Zehnder interference here). No. We are just lacking time ! It is, effectively, going to be an awful lot of work to rewrite those basic lectures on semiconductors – or on lasers or other technical matters which attract students in physics – so as to show why and how the mechanics of these things actually work: not approximately, but how exactly – and, more importantly, why and how these phenomena can be explained in terms of something real: actual electrons moving through the lattice at lower or higher drift speeds within a conduction band (and then what that conduction band actually is).

The same goes for lasers: we talk about induced emission and all that, but we need to explain what that might actually represent – while avoiding the usual mumbo-jumbo about bosonic behavior and other useless generalizations of properties of actual matter- and light-particles that can be reasonably explained in terms of the structure of these particles – instead of invoking quantum-mechanical theorems or other dogmatic or canonical a priori assumptions.

So, yes, it is going to be hard work – and I am not quite sure if I have sufficient time or energy for it. I will try, and so I will probably be offline for quite some time while doing that. Be sure to have fun in the meanwhile ! 🙂

Post scriptum: Perhaps I should also focus on converting some of my papers into journal articles, but then I don’t feel like it’s worth going through all of the trouble that takes. Academic publishing is a weird thing. Either the editorial line of the journal is very strong, in which case they do not want to publish non-mainstream theory, and also insist on introductions and other credentials, or, else, it is very weak or even absent – and then it is nothing more than vanity or ego, right? So I think I am just fine with the viXra collection and the ‘preprint’ papers on ResearchGate now. I’ve been thinking it allows me to write what I want and – equally important – how I want to write it. In any case, I am writing for people like you and me. Not so much for dogmatic academics or philosophers. The poor experience with reviewers of my manuscript has taught me well, I guess. I should probably wait to get an invitation to publish now.

Quantum Physics: A Survivor’s Guide

A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields, and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):

Paper I: Quantum behavior (the abstract should enrage the dark forces)

Paper II: Probability amplitudes (quantum math)

Paper III: The concept of a field (why you should not bother about QFT)

Paper IV: Survivor’s guide to all of the rest (keep smiling)

Paper V: Uncertainty and the geometry of the wavefunction (the final!)

The last paper is interesting because it shows statistical indeterminism is the only real indeterminism. We can, therefore, use Bell’s Theorem to prove our theory is complete: there is no need for hidden variables, so why should we bother about trying to prove or disprove they can or cannot exist?

Jean Louis Van Belle, 21 October 2020

Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂

As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂

The concept of a field

I ended my post on particles as spacetime oscillations saying I should probably write something about the concept of a field too, and why and how many academic physicists abuse it so often. So I did that, but it became a rather lengthy paper, and so I will refer you to Phil Gibbs’ site, where I post such stuff. Here is the link. Let me know what you think of it.

As for how it fits in with the rest of my writing, I already jokingly rewrote two of Feynman’s introductory Lectures on quantum mechanics (see: Quantum Behavior and Probability Amplitudes). I consider this paper to be the third. 🙂

Post scriptum: Now that I am talking about Richard Feynman – again ! – I should add that I really think of him as a weird character. I think he himself got caught in that image of the ‘Great Teacher’ while, at the same time (and, surely, as a Nobel laureate), he also had to be seen to be a ‘Great Guru.’ Read: a Great Promoter of the ‘Grand Mystery of Quantum Mechanics’ – while he probably knew classical electromagnetism combined with the Planck-Einstein relation can explain it all… Indeed, his lecture on superconductivity starts off as an incoherent ensemble of ‘rocket science’ pieces, to then – in the very last paragraphs – manipulate Schrödinger’s equation (and a few others) to show superconducting currents are just what you would expect in a superconducting fluid. Let me quote him:

“Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields.”

So… Well… Looks like he too is all about impressing people with ‘rocket science models’ first, and then he simplifies it all to… Well… Something simple. 😊

Having said that, I still like Feynman more than modern science gurus, because the latter usually don’t get to the simplifying part. :-/