Quaternions and the nuclear wave equation

In this blog, we have talked a lot about the Zitterbewegung model of an electron: a model that allows us to think of the elementary wavefunction as representing a radius or position vector. We write:

ψ = r = a·e^(±iθ) = a·[cos(±θ) + i·sin(±θ)]

It is just an application of Parson’s ring current or magneton model of an electron. Note that we use boldface to denote vectors, and that we think of the sine and cosine here as vectors too! You should also note that the sine and cosine are essentially the same function: they differ only by a 90-degree phase shift: cosθ = sin(θ + π/2). Alternatively, we can use the imaginary unit (i) as a rotation operator and use the vector notation to write: sinθ = i·cosθ.

In one of our introductory papers (on the language of math), we show how and why this all works like a charm: when we take the derivative with respect to time, we get the (orbital or tangential) velocity (dr/dt = v), and the second-order derivative gives us the (centripetal) acceleration vector (d²r/dt² = a). The plus/minus sign of the argument of the wavefunction gives us the direction of spin, and we may, perhaps, add a plus/minus sign to the wavefunction as a whole to model matter and antimatter, respectively (the latter assertion is very speculative, though, so we will not elaborate on it here).
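For those who want to check this without pen and paper, here is a minimal sketch in Python (sympy). The symbols a and ω are just stand-ins for the amplitude and the angular frequency, and we treat the rotating radius as a complex number rather than a proper vector, which is all we need to see the i·ω and −ω² factors appear:

```python
# Minimal check: differentiating r = a·exp(i·ω·t) with respect to time should give
# the tangential velocity (i·ω·r) and the centripetal acceleration (-ω²·r).
import sympy as sp

a, omega, t = sp.symbols('a omega t', positive=True)
r = a * sp.exp(sp.I * omega * t)   # the rotating "position vector" as a complex number

v = sp.diff(r, t)                  # velocity
acc = sp.diff(r, t, 2)             # acceleration

print(sp.simplify(v / r))          # -> I*omega   (90-degree rotation, magnitude a·ω)
print(sp.simplify(acc / r))        # -> -omega**2 (points back to the centre)
```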

One orbital cycle packs Planck’s quantum of (physical) action, which we can write either as the product of the energy (E) and the cycle time (T), or as the product of the momentum (p) of the charge and the distance travelled, which is the circumference λ of the loop in the inertial frame of reference. [We can always add a classical linear velocity component when considering an electron in motion, and we may want to write Planck’s quantum of action as an angular momentum vector (h or ħ) to explain what the Uncertainty Principle is all about (statistical uncertainty, nothing ontological), but let us keep things simple for now.]

h = E·T = p·λ
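A quick numerical sanity check may be useful here. The sketch below (Python, scipy constants) takes the electron at rest, with E = m·c², T = h/E and – as in the ring current model above – p = m·c and λ equal to the Compton wavelength h/(m·c); both products then come out as Planck’s quantum of action, as they should:

```python
# Check h = E·T = p·λ for an electron at rest, using the ring current interpretation
# above: p = m·c for the lightlike charge, λ = h/(m·c) for the loop circumference.
from scipy.constants import h, c, m_e

E = m_e * c**2            # rest energy (J)
T = h / E                 # cycle time (s), about 8.1e-21 s
p = m_e * c               # momentum (kg·m/s)
lam = h / (m_e * c)       # Compton wavelength (m), about 2.43e-12 m

print(E * T)              # -> 6.626e-34 J·s = h
print(p * lam)            # -> 6.626e-34 J·s = h
```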

It is important to distinguish between the electron and the charge, which we think of as being pointlike: the electron is charge in motion. Charge is just charge: it explains everything and its nature is, therefore, quite mysterious: is it really a pointlike thing, or is there some fractal structure? Of these things, we know very little, but the small anomaly in the magnetic moment of an electron suggests its structure might be fractal. Think of the fine-structure constant here, as the factor which distinguishes the classical, Compton and Bohr radii of the electron: we associate the classical electron radius with the radius of the pointlike charge, but perhaps we can drill down further.

We also showed how the physical dimensions work out in Schroedinger’s wave equation. Let us jot it down to appreciate what it might model, and to see why complex numbers come in handy:

∂ψ/∂t = i·(ħ/2m)·∇²ψ (Schroedinger’s equation in free space)

This is, of course, Schroedinger’s equation in free space, which means there are no other charges around and we, therefore, have no potential energy terms here. The rather enigmatic concept of the effective mass (which is half the total mass of the electron) is just the relativistic mass of the pointlike charge as it whizzes around at lightspeed, so that is the motion which Schroedinger referred to as its Zitterbewegung (Dirac confused it with some motion of the electron itself, further compounding what we think of as de Broglie’s mistaken interpretation of the matter-wave as a linear oscillation: think of it as an orbital oscillation instead). The 1/2 factor is there in Schroedinger’s wave equation for electron orbitals, but he replaced the effective mass rather subtly (or not-so-subtly, I should say) by the total mass of the electron because the wave equation models the orbitals of an electron pair (two electrons with opposite spin). So we might say he was lucky: the two mistakes together (not accounting for spin, and adding the effective mass of two electrons to get a mass factor) make things come out alright. 🙂

However, we will not say more about Schroedinger’s equation for the time being (we will come back to it): just note the imaginary unit, which does operate like a rotation operator here. Schroedinger’s wave equation, therefore, must model (planar) orbitals. Of course, the plane of the orbital may itself be rotating, and most probably is, because that is what gives us those wonderful shapes of electron orbitals (subshells). Also note the physical dimension of ħ/m: it is a factor which is expressed in m²/s, but when you combine that with the 1/m² dimension of the ∇² operator, then you get the 1/s dimension on both sides of Schroedinger’s equation. [The ∇² operator is just the generalization of the d²/dx² operator to three dimensions: x becomes the vector x, and we apply the operator to each of the three spatial coordinates, which is why we refer to ∇² as a vector operator. Let us move on, because we cannot explain each and every detail here, of course!]
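To make the dimensional argument concrete, here is a two-line check of the ħ/m factor for an electron (the 1/2 does not change the dimension, of course):

```python
# The ħ/m factor in Schroedinger's equation, expressed in m²/s.
from scipy.constants import hbar, m_e

print(hbar / m_e)         # ~1.16e-4 m²/s
print(hbar / (2 * m_e))   # ~5.8e-5 m²/s (the factor as it appears with the 1/2)
```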

We need to talk about forces and fields now. The ring current model assumes an electromagnetic field which keeps the pointlike charge in its orbit. The centripetal force must, therefore, be equal to the Lorentz force (F), which we can write in terms of the electric and magnetic field vectors E and B (fields are just forces per unit charge, so the two concepts are very intimately related):

F = q·(E + v×B) = q·(E + c×E/c) = q·(E + 1×E) = q·(E + j·E) = (1+ j)·q·E

We use a different imaginary unit here (j instead of i) because the plane in which the magnetic field vector B is going round and round is orthogonal to the plane in which E is going round and round, so let us call these planes the xy- and xz-planes, respectively. Of course, you will ask: why is the B-plane not the yz-plane? We might be mistaken, but the magnetic field vector lags the electric field vector, so it is either of the two, and you can now check for yourself whether what we wrote above is actually correct. Also note that we write 1 as a vector (1) or a complex number: 1 = 1 + i·0. [It is also possible to write this using j: 1 = 1 + j·0. As long as we think of these things as vectors – something with a magnitude and a direction – it is OK.]

You may be lost in math already, so we should visualize this. Unfortunately, that is not easy. You may want to google for animations of circularly polarized electromagnetic waves, but these usually show the electric field vector only, and animations which show both E and B usually depict linearly polarized waves. Let me reproduce the simplest of images: imagine the electric field vector E going round and round. Now imagine the field vector B being orthogonal to it, but also going round and round (because its phase follows the phase of E). So, yes, it must be going around in the xz- or yz-plane (as mentioned above, we let you figure out how the various right-hand rules work together here).

Rotational plane of the electric field vector
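If you prefer numbers over animations, the little sketch below (Python, numpy) is an illustrative toy only: it puts E in the xy-plane and lets B lag it by 90 degrees in the xz-plane, as assumed above (the choice of planes and of the lag is exactly the assumption we asked you to check against the right-hand rules). It prints the direction of E×B – the normal to the plane spanned by E and B – and you can see that this direction keeps changing, which is the point made in the next paragraph:

```python
# Toy illustration: E rotates in the xy-plane, B lags E by 90° and rotates in the
# xz-plane. The normal to the plane spanned by E and B (i.e. E×B) changes in time.
import numpy as np

omega, E0 = 1.0, 1.0
for t in np.linspace(0, 2 * np.pi, 5):
    E = E0 * np.array([np.cos(omega * t), np.sin(omega * t), 0.0])   # xy-plane
    B = E0 * np.array([np.cos(omega * t - np.pi / 2), 0.0,
                       np.sin(omega * t - np.pi / 2)])               # xz-plane
    n = np.cross(E, B)                                               # normal to the E-B plane
    print(np.round(n / np.linalg.norm(n), 3))
```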

You should now appreciate that the E and B vectors – taken together – will also form a plane. This plane is not static: it is not the xy-, yz- or xz-plane, nor is it some static combination of two of these. No! We cannot describe it with reference to our classical Cartesian axes because it changes all the time as a result of the rotation of both the E and B vectors. So how can we describe that plane mathematically?

The Irish mathematician William Rowan Hamilton – who is also known for many other mathematical concepts – found a great way to do just that, and we will use his notation. We could say the plane formed by the E and B vectors is the EB plane but, in line with Hamilton’s quaternion algebra, we will refer to it as the k-plane. How is it related to what we referred to as the i- and j-planes, or the xy- and xz-planes as we called them above? At this point, we should introduce Hamilton’s notation: he did write i and j in boldface (we do not like that, but you may want to think of it as just a minor change in notation because we are using these imaginary units in a new mathematical space: the quaternion number space), and he referred to them as basic quaternions in what you should think of as an extension of the complex number system. More specifically, he wrote this on a now rather famous bridge in Dublin:

i² = -1

j² = -1

k² = -1

i·j = k

j·i = -k

The first three rules are the ones you know from complex-number math: two successive rotations by 90 degrees will bring you from 1 to -1. The order of multiplication in the other two rules (i·j = k and j·i = –k) gives us not only the k-plane but also the spin direction. All other rules in regard to quaternions (we can write, for example, this: i·j·k = -1), and all the other products you will find in the Wikipedia article on quaternions, can be derived from these, but we will not go into them here.
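If you want to see these rules at work without trusting Wikipedia, here is a small sketch: we represent a quaternion as a plain (w, x, y, z) tuple and spell out the Hamilton product, nothing more.

```python
# Hamilton product of two quaternions represented as (w, x, y, z) tuples.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

print(qmul(i, i))            # (-1, 0, 0, 0)  ->  i² = -1
print(qmul(i, j))            # ( 0, 0, 0, 1)  ->  i·j = k
print(qmul(j, i))            # ( 0, 0, 0, -1) ->  j·i = -k
print(qmul(qmul(i, j), k))   # (-1, 0, 0, 0)  ->  i·j·k = -1
```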

Now, you will say, we do not really need that k, do we? Just distinguishing between i and j should do, right? The answer to that question is yes when you are dealing with electromagnetic oscillations only, but no when you are trying to model nuclear oscillations! That is, in fact, exactly why we need this quaternion math in quantum physics!

Let us think about this nuclear oscillation. Particle physics experiments – especially high-energy physics experiments – effectively provide evidence for the presence of a nuclear force. To explain the proton radius, one can think of a nuclear oscillation as an orbital oscillation in three rather than just two dimensions. The oscillation is, therefore, driven by two (perpendicular) forces rather than just one, with the frequency of each of the two oscillators being equal to ω = E/2ħ = mc²/2ħ.

Each of the two perpendicular oscillations would, therefore, pack one half-unit of ħ only. The ω = E/2ħ formula also incorporates the energy equipartition theorem, according to which each of the two oscillations should pack half of the total energy of the nuclear particle (so that is the proton, in this case). This spherical view of a proton fits nicely with packing models for nucleons and yields the experimentally measured radius of a proton:

Proton radius formula
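As a quick numerical aside – and assuming the formula referred to above reads r = 4ħ/(mp·c), i.e. four times the reduced Compton radius of the proton, which is what the factor-of-4 discussion below suggests – the number does indeed come out very close to the measured proton charge radius of about 0.84 fm:

```python
# Assuming the proton radius formula above is r = 4·ħ/(m_p·c):
from scipy.constants import hbar, c, m_p

r = 4 * hbar / (m_p * c)
print(r)    # ~8.4e-16 m, i.e. ~0.84 fm, close to the measured proton charge radius
```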

Of course, you can immediately see that the 4 factor is the same factor 4 as the one appearing in the formula for the surface area of a sphere (A = 4πr²), as opposed to that for the surface of a disc (A = πr²). And now you should be able to appreciate that we should probably represent a proton by a combination of two wavefunctions. Something like this:

Proton wavefunction

What about a wave equation for nuclear oscillations? Do we need one? We sure do. Perhaps we do not need one to model a neutron as some nuclear dance of a negative and a positive charge. Indeed, think of a combination of a proton and what we will refer to as a deep electron here, just to distinguish it from an electron in Schroedinger’s atomic electron orbitals. But we might need it when we are modeling something more complicated, such as the different energy states of, say, a deuteron nucleus, which combines a proton and a neutron and, therefore, two positive charges and one deep electron.

According to some, the deep electron may also appear in other energy states and may, therefore, give rise to a different kind of hydrogen (referred to as hydrinos). What do I think of those? I think these things do not exist and, if they do, they cannot be stable. I also think these researchers need to come up with a wave equation for them in order to be credible and, in light of what we wrote about the complications in regard to the various rotational planes, that wave equation will probably have all of Hamilton’s basic quaternions in it. [But so, as mentioned above, I am waiting for them to come up with something that makes sense and matches what we can actually observe in Nature: those hydrinos should have a specific spectrum, and we do not see such a spectrum coming from, say, the Sun, where there is so much going on that, if hydrinos exist, the Sun should surely produce them, right? So, yes, I am rather skeptical here: I do think we know everything now and physics, as a science, is sort of complete and, therefore, dead as a science: all that is left now is engineering!]

But, yes, quaternion algebra is a very necessary part of our toolkit. It completes our description of everything! 🙂

The Language of Physics

The meaning of life in 15 pages ! Or… Well… At least a short description of the Universe… Not sure it helps in sense-making. 🙂

Post scriptum (25 March 2021): Because this post is so extremely short and happy, I want to add a sad anecdote which illustrates what I have come to regard as the sorry state of physics as a science.

A few days ago, an honest researcher put me in cc of an email to a much higher-brow researcher. I won’t reveal names, but the latter – I will call him X – works at a prestigious accelerator lab in the US. The gist of the email was a question on an article of X: “I am still looking at the classical model for the deep orbits. But I have been having trouble trying to determine if the centrifugal and spin-orbit potentials have the same relativistic correction as the Coulomb potential. I have also been having trouble with the Ademko/Vysotski derivation of the Veff = V×E/mc² – V²/2mc² formula.”

I was greatly astonished to see X answer this: “Hello – What I know is that this term comes from the Bethe-Salpeter equation, which I am including (#1). The authors say in their book that this equation comes from the Pauli’s theory of spin. Reading from Bethe-Salpeter’s book [Quantum mechanics of one and two electron atoms]: “If we disregard all but the first three members of this equation, we obtain the ordinary Schroedinger equation. The next three terms are peculiar to the relativistic Schroedinger theory”. They say that they derived this equation from covariant Dirac equation, which I am also including (#2). They say that the last term in this equation is characteristic for the Dirac theory of spin ½ particles. I simplified the whole thing by choosing just the spin term, which is already used for hyperfine splitting of normal hydrogen lines. It is obviously approximation, but it gave me a hope to satisfy the virial theorem. Of course, now I know that using your Veff potential does that also. That is all I know.” [I added the italics/bold in the quote.]

So I see this answer while browsing through my emails on my mobile phone, and I am disgusted – thinking: Seriously? You get to publish in high-brow journals, but so you do not understand the equations, and you just drop terms and pick the ones that suit you to make your theory fit what you want to find? And so I immediately reply to all, politely but firmly: “All I can say, is that I would not use equations which I do not fully understand. Dirac’s wave equation itself does not make much sense to me. I think Schroedinger’s original wave equation is relativistically correct. The 1/2 factor in it has nothing to do with the non-relativistic kinetic energy, but with the concept of effective mass and the fact that it models electron pairs (two electrons – neglect of spin). Andre Michaud referred to a variant of Schroedinger’s equation including spin factors.”

Now X replies this, also from his iPhone: “For me the argument was simple. I was desperate trying to satisfy the virial theorem after I realized that ordinary Coulomb potential will not do it. I decided to try the spin potential, which is in every undergraduate quantum mechanical book, starting with Feynman or Tippler, to explain the hyperfine hydrogen splitting. They, however, evaluate it at large radius. I said, what happens if I evaluate it at small radius. And to my surprise, I could satisfy the virial theorem. None of this will be recognized as valid until one finds the small hydrogen experimentally. That is my main aim. To use theory only as a approximate guidance. After it is found, there will be an explosion of “correct” theories.” A few hours later, he makes things even worse by adding: “I forgot to mention another motivation for the spin potential. I was hoping that a spin flip will create an equivalent to the famous “21cm line” for normal hydrogen, which can then be used to detect the small hydrogen in astrophysics. Unfortunately, flipping spin makes it unstable in all potential configurations I tried so far.”

I have never come across a more blatant case of making a theory fit whatever you want to prove (apparently, X believes Mills’ hydrinos (hypothetical small hydrogen) are not a fraud), and it saddens me deeply. Of course, I do understand one will want to fiddle and modify equations when working on something, but you don’t do that when these things are going to get published by serious journals. Just goes to show how physicists effectively got lost in math, and how ‘peer reviews’ actually work: they don’t. :-/

The nature of the nuclear force

I’ve reflected a while on my last two papers on the neutron (n = p + e model) and the deuteron nucleus (D = 2p + e) and made a quick YouTube video on it. A bit lengthy, as usual. I hope you enjoy/like it. 🙂

A Zitterbewegung model of the neutron

As part of my ventures into QCD, I quickly developed a Zitterbewegung model of the neutron, as a complement to my first sketch of a deuteron nucleus. The math of orbitals is interesting. Whatever field you have, one can model it using a coupling constant between the proportionality coefficient of the force and the charge it acts on. That ties in nicely with my earlier thoughts on the meaning of the fine-structure constant.

My realist interpretation of quantum physics focuses on explanations involving the electromagnetic force only, but the matter-antimatter dichotomy still puzzles me very much. Also, the idea of virtual particles is no longer anathema to me, but I still want to model them as particle-field interactions and the exchange of real (angular or linear) momentum and energy, with a quantization of momentum and energy obeying the Planck-Einstein law.

The proton model will be key. We cannot explain it in the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.

The calculation of forces inside a muon-electron and a proton is an interesting exercise: it is the only thing which explains why an electron annihilates a positron while electrons and protons can live together (the ‘anti-matter’ nature of charged particles only shows because of the opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).

[…]

In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !

Cheers – Jean-Louis

The complementarity of wave- and particle-like viewpoints on EM wave propagation

In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”[1]

The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.

We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.


[1] W.E. Lamb Jr., Anti-photon, Applied Physics B, vol. 60, pp. 77–84 (1995).

Electron propagation in a lattice

It is done! My last paper on the mentioned topic (available on Phil Gibbs’s site, my ResearchGate page or academia.edu) should conclude my work on the QED sector. It is a thorough exploration of the hitherto mysterious concept of the effective mass and all that.

The result I got is actually very nice: my calculation of the order of magnitude of the k·b factor in the formula for the energy band (the conduction band, as you may know it) shows that the usual small-angle approximation of the formula does not make all that much sense. This shows that some ‘realist’ thinking about what is what in these quantum-mechanical models does constrain the options: we cannot just multiply wave numbers by some random multiple of π or 2π. These things have a physical meaning!
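The paper itself has the details, but the gist can be illustrated with the standard tight-binding dispersion E(k) = E₀ − 2A·cos(k·b) and its small-angle (parabolic) approximation, which clearly breaks down when k·b is not small. Note that the specific values of E₀, A and b below are placeholders, not the numbers worked out in the paper:

```python
# Illustration only: tight-binding energy band E(k) = E0 - 2A·cos(k·b) versus its
# small-angle approximation E(k) ≈ E0 - 2A + A·(k·b)². Placeholder values for E0, A, b.
import numpy as np

E0, A, b = 0.0, 1.0, 3e-10                # band centre (eV), hopping energy (eV), lattice spacing (m)
k = np.linspace(0, np.pi / b, 6)          # wave numbers up to the zone boundary

E_exact = E0 - 2 * A * np.cos(k * b)
E_small = E0 - 2 * A + A * (k * b)**2     # parabolic (effective-mass) approximation

for kb, e1, e2 in zip(k * b, E_exact, E_small):
    print(f"k·b = {kb:4.2f}   exact = {e1:+.3f} eV   approx = {e2:+.3f} eV")
```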

So no multiverses or many worlds, please! One world is enough, and it is nice we can map it to a unique mathematical description.

I should now move on and think about the fun stuff: what is going on in the nucleus and all that? Let’s see where we go from here. Downloads on ResearchGate have been going through the roof lately (a thousand reads on ResearchGate is better than ten thousand on viXra.org, I guess), so it is all very promising. 🙂

Understanding lasers, semiconductors and other technical stuff

I wrote a lot of papers but most of them – if not all – deal with very basic stuff: the meaning of uncertainty (just statistical indeterminacy because we have no information on the initial condition of the system), the Planck-Einstein relation (how Planck’s quantum of action models an elementary cycle or an oscillation), and Schrödinger’s wavefunctions (the solutions to his equation) as the equations of motion for a pointlike charge. If anything, I hope I managed to restore a feeling that quantum electrodynamics is not essentially different from classical physics: it just adds the element of a quantization – of energy, momentum, magnetic flux, etcetera.

Importantly, we also talked about what photons and electrons actually are, and that electrons are pointlike but not dimensionless: their magnetic moment results from an internal current and, hence, spin is something real – something we can explain in terms of a two-dimensional perpetual current. In the process, we also explained why electrons take up some space: they have a radius (the Compton radius). So that explains the quantization of space, if you want.

We also talked about fields and told you that – because matter-particles do have a structure – we should have a dynamic view of the fields surrounding them. Potential barriers – or their corollary: potential wells – should, therefore, not be thought of as static fields: they result from one or more charges moving around, and these fields, therefore, vary in time. Hence, a particle breaking through a ‘potential wall’ or coming out of a potential ‘well’ is just using an opening, so to speak, which corresponds to a classical trajectory.

We, therefore, have the guts to say that some of what you will read in a standard textbook is plain nonsense. Richard Feynman, for example, starts his lecture on a current in a crystal lattice by writing this: “You would think that a low-energy electron would have great difficulty passing through a solid crystal. The atoms are packed together with their centers only a few angstroms apart, and the effective diameter of the atom for electron scattering is roughly an angstrom or so. That is, the atoms are large, relative to their spacing, so that you would expect the mean free path between collisions to be of the order of a few angstroms—which is practically nothing. You would expect the electron to bump into one atom or another almost immediately. Nevertheless, it is a ubiquitous phenomenon of nature that if the lattice is perfect, the electrons are able to travel through the crystal smoothly and easily—almost as if they were in a vacuum. This strange fact is what lets metals conduct electricity so easily; it has also permitted the development of many practical devices. It is, for instance, what makes it possible for a transistor to imitate the radio tube. In a radio tube electrons move freely through a vacuum, while in the transistor they move freely through a crystal lattice.” [The italics are mine.]

It is nonsense because it is not the electron that is traveling smoothly, easily or freely: it is the electrical signal, and – no ! – that is not to be equated with the quantum-mechanical amplitude. The quantum-mechanical amplitude is just a mathematical concept: it does not travel through the lattice in any physical sense ! In fact, it does not even travel through the lattice in a logical sense: the quantum-mechanical amplitudes are to be associated with the atoms in the crystal lattice, and describe their state – i.e. whether or not they have an extra electron or (if we are analyzing electron holes in the lattice) if they are lacking one. So the drift velocity of the electron is actually very low, and the way the signal moves through the lattice is just like in the game of musical chairs – but with the chairs on a line: all players agree to kindly move to the next chair for the new arrival so the last person on the last chair can leave the game to get a beer. So here it is the same: one extra electron causes all other electrons to move. [For more detail, we refer to our paper on matter-waves, amplitudes and signals.]
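To put a (rough) number on that drift velocity, here is a back-of-the-envelope calculation for a copper wire carrying one ampere – the current, wire diameter and carrier density are illustrative values only, but they show the electrons themselves creep along at a fraction of a millimetre per second while the signal races through the lattice:

```python
# Drift velocity v = I / (n·A·q) for a 1 A current in a 1 mm copper wire.
from scipy.constants import e
import math

I = 1.0                     # current (A)
d = 1e-3                    # wire diameter (m)
n = 8.5e28                  # free-electron density of copper (1/m³)

A = math.pi * (d / 2)**2    # cross-sectional area (m²)
print(I / (n * A * e))      # ~9e-5 m/s
```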

But so, yes, we have not said much about semiconductors, lasers and other technical stuff. Why not? Not because it should be difficult: we already cracked the more difficult stuff (think of an explanation of the anomalous magnetic moment, the Lamb shift, or one-photon Mach-Zehnder interference here). No. We are just lacking time ! It is, effectively, going to be an awful lot of work to rewrite those basic lectures on semiconductors – or on lasers or other technical matters which attract students in physics – so as to show why and how the mechanics of these things actually work: not approximately, but exactly. And, more importantly, to show why and how these phenomena can be explained in terms of something real: actual electrons moving through the lattice at lower or higher drift speeds within a conduction band (and then what that conduction band actually is).

The same goes for lasers: we talk about induced emission and all that, but we need to explain what that might actually represent – while avoiding the usual mumbo-jumbo about bosonic behavior and other useless generalizations of properties of actual matter- and light-particles that can be reasonably explained in terms of the structure of these particles – instead of invoking quantum-mechanical theorems or other dogmatic or canonical a priori assumptions.

So, yes, it is going to be hard work – and I am not quite sure if I have sufficient time or energy for it. I will try, and so I will probably be offline for quite some time while doing that. Be sure to have fun in the meanwhile ! 🙂

Post scriptum: Perhaps I should also focus on converting some of my papers into journal articles, but then I don’t feel like it’s worth going through all of the trouble that takes. Academic publishing is a weird thing. Either the editorial line of the journal is very strong, in which case they do not want to publish non-mainstream theory and also insist on introductions and other credentials, or else it is very weak or even absent – and then it is nothing more than vanity or ego, right? So I think I am just fine with the viXra collection and the ‘preprint’ papers on ResearchGate now. It allows me to write what I want and – equally important – how I want to write it. In any case, I am writing for people like you and me, not so much for dogmatic academics or philosophers. The poor experience with reviewers of my manuscript has taught me well, I guess. I should probably wait to get an invitation to publish now.

Quantum Physics: A Survivor’s Guide

A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields, and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):

Paper I: Quantum behavior (the abstract should enrage the dark forces)

Paper II: Probability amplitudes (quantum math)

Paper III: The concept of a field (why you should not bother about QFT)

Paper IV: Survivor’s guide to all of the rest (keep smiling)

Paper V: Uncertainty and the geometry of the wavefunction (the final!)

The last paper is interesting because it shows statistical indeterminism is the only real indeterminism. We can, therefore, use Bell’s Theorem to prove our theory is complete: there is no need for hidden variables, so why should we bother trying to prove or disprove whether they can exist?

Jean Louis Van Belle, 21 October 2020

Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂

As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂

The concept of a field

I ended my post on particles as spacetime oscillations saying I should probably write something about the concept of a field too, and why and how many academic physicists abuse it so often. So I did that, but it became a rather lengthy paper, and so I will refer you to Phil Gibbs’ site, where I post such stuff. Here is the link. Let me know what you think of it.

As for how it fits in with the rest of my writing, I already jokingly rewrote two of Feynman’s introductory Lectures on quantum mechanics (see: Quantum Behavior and Probability Amplitudes). I consider this paper to be the third. 🙂

Post scriptum: Now that I am talking about Richard Feynman – again ! – I should add that I really think of him as a weird character. I think he himself got caught in that image of the ‘Great Teacher’ while, at the same time (and, surely, as a Nobel laureate), he also had to be seen as a ‘Great Guru.’ Read: a Great Promoter of the ‘Grand Mystery of Quantum Mechanics’ – while he probably knew classical electromagnetism combined with the Planck-Einstein relation can explain it all… Indeed, his lecture on superconductivity starts off as an incoherent ensemble of ‘rocket science’ pieces, only to then – in the very last paragraphs – manipulate Schrödinger’s equation (and a few others) to show superconducting currents are just what you would expect in a superconducting fluid. Let me quote him:

“Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields.”

So… Well… Looks like he too is all about impressing people with ‘rocket science models’ first, and then he simplifies it all to… Well… Something simple. 😊

Having said that, I still like Feynman more than modern science gurus, because the latter usually don’t get to the simplifying part. :-/

A new book?

I don’t know where I would start a new story on physics. I am also not quite sure for whom I would be writing it – although it would be for people like me, obviously: most of what we do, we do for ourselves, right? So I should probably describe myself in order to describe the audience: amateur physicists who are interested in the epistemology of modern physics – or its ontology, or its metaphysics. I also talk about the genealogy or archaeology of ideas on my ResearchGate site. All these words have (slightly) different meanings but the distinctions do not matter all that much. The point is this: I write for people who want to understand physics in pretty much the same way as the great classical physicist Hendrik Antoon Lorentz who, just a few months before his demise, at the occasion of the (in)famous 1927 Solvay Conference, wanted to understand the ‘new theories’:

“We are representing phenomena. We try to form an image of them in our mind. Till now, we always tried to do so using the ordinary notions of space and time. These notions may be innate; they result, in any case, from our personal experience, from our daily observations. To me, these notions are clear, and I admit I am not able to have any idea about physics without those notions. The image I want to have when thinking about physical phenomena has to be clear and well defined, and it seems to me that this cannot be done without these notions of a system defined in space and in time.”

Note that H.A. Lorentz understood electromagnetism and relativity theory as few others did. In fact, judging from some of the crap out there, I can safely say he understood stuff as few others do today still. Hence, he should surely not be thought of as a classical physicist who, somehow, was stuck. On the contrary: he understood the ‘new theories’ better than many of the new theorists themselves. In fact, as far as I am concerned, I think his comments or conclusions on the epistemological status of the Uncertainty Principle – which he made in the same intervention – still stand. Let me quote the original French:

“Je pense que cette notion de probabilité [in the new theories] serait à mettre à la fin, et comme conclusion, des considérations théoriques, et non pas comme axiome a priori, quoique je veuille bien admettre que cette indétermination correspond aux possibilités expérimentales. Je pourrais toujours garder ma foi déterministe pour les phénomènes fondamentaux, dont je n’ai pas parlé. Est-ce qu’un esprit plus profond ne pourrait pas se rendre compte des mouvements de ces électrons. Ne pourrait-on pas garder le déterminisme en en faisant l’objet d’une croyance? Faut-il nécessairement ériger l’indéterminisme en principe?” [Loosely translated: “I think this notion of probability should come at the end, and as a conclusion, of the theoretical considerations, and not as an a priori axiom, though I readily admit that this indeterminacy corresponds to the experimental possibilities. I could always keep my deterministic faith for the fundamental phenomena, of which I have not spoken. Could a deeper mind not account for the motions of these electrons? Could we not keep determinism by making it the object of a belief? Must we necessarily elevate indeterminism to a principle?”]

What a beautiful statement, isn’t it? Why should we elevate indeterminism to a philosophical principle? Indeed, now that I’ve inserted some French, I may as well inject some German. The idea of a particle includes the idea of a more or less well-known position. Let us be specific and think of uncertainty in the context of position. We may not fully know the position of a particle for one or more of the following reasons:

  1. The precision of our measurements may be limited: this is what Heisenberg referred to as an Ungenauigkeit.
  2. Our measurement might disturb the position and, as such, cause the information to get lost and, as a result, introduce an uncertainty: this is what we may translate as an Unbestimmtheit.
  3. The uncertainty may be inherent to Nature, in which case we should probably refer to it as an Ungewissheit.

So what is the case? Lorentz claims it is either the first or the second – or a combination of both – and that the third proposition is a philosophical statement which we can neither prove nor disprove. I cannot see anything logical (theory) or practical (experiment) that would invalidate this point. I, therefore, intend to write a basic book on quantum physics from what I hope would be Lorentz’ or Einstein’s point of view.

My detractors will immediately cry wolf: Einstein lost the discussions with Bohr, didn’t he? I do not think so: he just got tired of them. I want to try to pick up the story where he left it. Let’s see where I get. 🙂