The nature of time: relativity explained

My manuscript offers a somewhat sacrilegious but intuitive explanation of (special) relativity theory (The Emperor Has No Clothes: the force law and relativity, p. 24-27). It is one of my lighter and more easily accessible pieces of writing. The argument is based on the idea that we may define infinity or infinite velocities as some kind of limit (a limiting idea, in other words), but that we cannot really imagine them: they lead to all kinds of logical inconsistencies.

Let me give you a very simple example here to illustrate these inconsistencies: if something is traveling at an infinite velocity, then it is everywhere and nowhere at the same time, and no theory of physics can deal with that.

Now, if I had to rewrite that brief introduction to relativity theory, I would probably add another logical argument: one that is based on our definition or notion of time itself. What is the definition of time, indeed? When you think long and hard about this, you will have to agree we can only measure time with reference to some fundamental cycle in Nature, right? It used to be the seasons, or the days and nights. Later, we subdivided a day into hours, and now we have atomic clocks. Whatever you can count and meaningfully communicate to some other intelligent being who happens to observe the same cyclical phenomenon works just fine, right?

Hence, if we were able to communicate with some other intelligent being in outer space – whose position we may or may not know, but both he/she/it (let us think of a male Martian for ease of reference) and we are broadcasting our frequency- or amplitude-modulated signals widely enough to ensure ongoing communication – then we would probably be able to converge on a definition of time in terms of the fundamental frequency of an elementary particle. Let us say an electron, to keep things simple. We could, therefore, agree on an experiment in which he – after receiving a pre-agreed start signal from us – would start counting and send us a stop signal after, say, three billion electron cycles (not approximately, of course, but three billion exactly). In the meantime, we would, of course, be able to verify that, between sending the start signal and receiving the stop signal (and taking into account the time those signals need to travel between him and us), his clock seems to run somewhat differently than ours.

So that is the amazing thing, really. Our Martian uses the same electron clock, but our motion relative to his (or his relative to ours) leads us to the conclusion that his clock runs somewhat differently than ours, and Einstein’s (special) relativity theory tells us how, exactly: time dilation, as given by the Lorentz factor.
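For what it is worth, here is a minimal numerical sketch of that experiment. The relative velocity is an assumption – one tenth of the speed of light, just to have a number – and the three billion cycles are the Martian’s count; the Lorentz factor then tells us how many cycles of the very same electron clock we would count between the same two events.

```python
import math

c = 299_792_458                           # speed of light (m/s)
v = 0.1 * c                               # assumed relative velocity (an arbitrary choice)
gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # the Lorentz factor

martian_cycles = 3_000_000_000            # cycles counted on the moving (Martian) clock
our_cycles = gamma * martian_cycles       # cycles we count between the same two events

print(f"Lorentz factor: {gamma:.9f}")
print(f"Our count: {our_cycles:,.0f} cycles")   # ~3,015,113,446 at 0.1c
```

At everyday velocities, the Lorentz factor is so close to one that the difference in the counts is unnoticeable.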

Does this explanation make it any easier to truly understand relativity theory? Maybe. Maybe not. For me, it does, because what I am describing here is nothing but the results of the Michelson-Morley experiment in a slightly more amusing context which, for some reason I do not quite understand, seems to make them more comprehensible. At the very least, it shows Galilean relativity is as incomprehensible – or as illogical or non-intuitive, I should say – as the modern-day concept of relativity as pioneered by Albert Einstein.

You may now think (or not): OK, but what about relativistic mass? That concept is, and will probably forever remain, non-intuitive. Right? Time dilation and length contraction are fine, because we can now somehow imagine the what and why of this, but how do you explain relativistic mass, really?

The only answer I can give you here is to think some more about Newton’s law: mass is a measure of inertia, that is, a resistance to a change in the state of motion of an object. Motion and, therefore, your measurement of any acceleration or deceleration (i.e. a change in the state of motion) will depend on how you measure time and distance. Hence, mass has to be relativistic too.
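Here is the same kind of back-of-the-envelope arithmetic for mass – a sketch only, assuming the usual m = γ·m0 relation and taking the electron’s rest mass as an example:

```python
import math

c = 299_792_458             # speed of light (m/s)
m0 = 9.109e-31              # electron rest mass (kg), rounded

for beta in (0.1, 0.5, 0.9, 0.99):          # v/c ratios, chosen arbitrarily
    gamma = 1 / math.sqrt(1 - beta ** 2)    # the same Lorentz factor as for time dilation
    print(f"v = {beta:.2f}c  ->  gamma = {gamma:.3f},  m = {gamma * m0:.3e} kg")
```

The same factor that stretches our time measurements scales the inertia: that is all there is to it, really.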

QED: quod erat demonstrandum. In fact, it is not a proof, so I should not say QED. It is SE: a satisfactory explanation. Why is it an explanation and not a proof? Because I take the constant speed of light for granted, and so I kind of derive the relativity of time, distance and mass from that point of departure (both figuratively and literally speaking, I’d say).

Post scriptum: For the calculation mentioned above, we do need to know the (relative) position of the Martian, of course. Any event in physics is defined by its position as well as its timing. That is what (also) makes it all very consistent, in fact. I should also note that this short story (I mean my post) is very much in line with Einstein’s original 1905 article, so you can (also) go there to check the math. The main difference between his article and my explanation here is that I take the constant speed of light for granted, and everything that is relative then derives its relativity from that. Einstein looked at it the other way around, because things were not so obvious then. 🙂

The End of Physics

There is an army of physicists out there – still – trying to convince you there is some mystery that needs explaining. They are wrong: quantum-mechanical weirdness is weird, but it is not some mystery. We have a decent interpretation of what quantum-mechanical equations – such as Schrödinger’s equation, for example – actually mean. We can also understand what photons, electrons, or protons – light and matter – actually are, and such understanding can be expressed in terms of 3D space, time, force, and charge: elementary concepts that feel familiar to us. There is no mystery left.

Unfortunately, physicists have completely lost it: they have multiplied concepts and produced a confusing and utterly unconvincing picture of the essence of the Universe. They promoted weird mathematical concepts – the quark hypothesis is just one example among others – and gave them some kind of reality status. The Nobel Prize Committee then played the role of the Vatican by canonizing the newfound religion.

It is a sad state of affairs, because we are surrounded by too many lies already: the ads and political slogans that shout in our face as soon as we log on to Facebook to see what our friends are up to, or to YouTube to watch something or – what I often do – listen to the healing sounds of music.

The language and vocabulary of physics are complete. Does it make us happier beings? It should, shouldn’t it? I am happy I understand. I find consciousness fascinating – self-consciousness even more – but not because I think it is rooted in mystery. No. Consciousness arises from the self-organization of matter: order arising from chaos. It is a most remarkable thing – and it happens at all levels: atoms in molecules, molecules forming cellular systems, cellular systems forming biological systems. We are a biological system which, in turn, is part of much larger systems: biological, ecological – material systems. There is no God talking to us. We are on our own, and we must make the best out of it. We have everything, and we know everything.

Sadly, most people do not realize this.

Post scriptum: With the end of physics comes the end of technology as well, doesn’t it? All of the advanced technologies in use today are effectively already described in Feynman’s Lectures on Physics, which were written and published in the first half of the 1960s.

I thought about possible counterexamples, like optical-fiber cables, or the equipment that is used in superconducting quantum computing, such as Josephson junctions. But Feynman already describes Josephson junctions in the last chapter of his Lectures on Quantum Mechanics, which is a seminar on superconductivity. And fiber-optic cable is, essentially, a waveguide for light, which Feynman describes in great detail in Chapter 24 of his Lectures on Electromagnetism and Matter. Needless to say, computers were also already there, and Feynman’s lecture on semiconductors has all you need to know about modern-day computing equipment. [In case you briefly thought about lasers, the first laser was built in 1960, and Feynman’s lecture on masers describes lasers too.]

So it is all there. I was born in 1969, when Man first walked on the Moon. CERN and other spectacular research projects have since been established, but, when one is brutally honest, one has to admit these experiments have not added anything significant – neither to the knowledge nor to the technology base of humankind (and, yes, I know your first instinct is to disagree with that, but that is because your studies or the media have indoctrinated you that way). It is a rather strange thought, but I think it is essentially correct. Most scientists, experts and commentators are trying to uphold a totally fake illusion of progress.

Mental categories versus reality

Pre-scriptum: For those who do not like to read, I produced a very short YouTube presentation/video on this topic. About 15 minutes – same time as it will take you to read this post, probably. Check it out: https://www.youtube.com/watch?v=sJxAh_uCNjs.

Text:

We think of space and time as fundamental categories of the mind. And they are, but only in the sense that the famous Dutch physicist H.A. Lorentz conveyed to us: we do not seem to be able to conceive of any idea in physics without these two notions. However, relativity theory tells us these two concepts are not absolute and we may, therefore, say they cannot be truly fundamental. Only Nature’s constants – the speed of light, or Planck’s quantum of action – are absolute: these constants seem to mix space and time into something that is, apparently, more fundamental.

The speed of light (c) combines the physical dimensions of space and time, and Planck’s quantum of action (h) adds the idea of a force. But time, distance, and force are all relative. Energy (force times distance) and momentum (force times time) are, therefore, also relative. In contrast, the speed of light, and Planck’s quantum of action, are absolute. So we should think of distance, and of time, as some kind of projection of a deeper reality: the reality of light or – in the case of Planck’s quantum of action – the reality of an electron or a proton. Time, distance, force, energy, momentum and whatever other concepts we derive from them exist in our mind only.

We should add another point here. To imagine the reality of an electron or a proton (or the idea of an elementary particle, you might say), we need an additional concept: the concept of charge. The elementary charge (e) is, effectively, a third idea (or category of the mind, one might say) without which we cannot imagine Nature. The ideas of charge and force are, of course, closely related: a force acts on a charge, and a charge is that upon which a force acts. So we cannot think of charge without thinking of force, and vice versa. But, as mentioned above, the concept of force is relative: it incorporates the ideas of time and distance (a force is what accelerates a charge). In contrast, the idea of the elementary charge is absolute again: it does not depend on our frame of reference.

So we have three fundamental concepts: (1) velocity (or motion, you might say: a ratio of distance and time); (2) (physical) action (force times distance times time); and (3) charge. We measure them in three fundamental units: c, h, and e. Che. 🙂 So that’s reality, then: all of the metaphysics of physics is here. In three letters. We need three concepts: three things that we think of as being real, somehow. Real in the sense that we do not think they exist in our mind only. Light is real, and elementary particles are equally real. All other concepts exist in our mind only.
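To make that point a bit more tangible: from c and h – plus the mass of whatever particle we choose as our clock – we can derive a natural frequency, a natural time and a natural distance. A minimal sketch, using SciPy’s CODATA values and taking the electron as the example:

```python
from scipy.constants import c, h, e, m_e   # CODATA values of c, h, e and the electron mass

E = m_e * c**2   # rest energy of the electron (J)
f = E / h        # its Planck-Einstein frequency (Hz)
T = 1 / f        # the associated natural unit of time (s)
d = c * T        # the associated natural unit of distance (m): the Compton wavelength h/(m_e*c)

print(f"c = {c:.0f} m/s, h = {h:.3e} J*s, e = {e:.3e} C")
print(f"electron: f = {f:.3e} Hz, T = {T:.3e} s, d = {d:.3e} m")
```

Time and distance come out as derived scales here; c, h and e (plus the mass or energy of the particle) are the inputs.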

So were Kant’s ideas about space and time wrong? Maybe. Maybe not. If they are wrong, then that’s quite OK: Immanuel Kant lived in the 18th century, and had not ventured much beyond the place where he was born. Less exciting times. I think he was basically right in saying that space and time exist in our mind only. But he had no answer(s) to the question as to what is real: if some things exist in our mind only, something must exist in what is not our mind, right? So that is what we refer to as reality then: that which does not exist in our mind only.

Modern physics has the answers. The philosophy curriculum at universities should, therefore, adapt to modern times: Maxwell first derived the (absolute) speed of light in 1862, and Einstein published the (special) theory of relativity back in 1905. Hence, philosophers are 100-150 years behind the curve. They are probably even behind the general public. Philosophers should learn about modern physics as part of their studies so they can (also) think about real things rather than mental constructs only.

Form and substance

Philosophers usually distinguish between form and matter, rather than form and substance. Matter, as opposed to form, is then what is supposed to be formless. However, if there is anything that physics – as a science – has taught us, it is that matter is defined by its form: in fact, it is the form factor which explains the difference between, say, a proton and an electron. So we might say that matter combines substance and form.

Now, we all know what form is: it is a mathematical quality—like the quality of having the shape of a triangle or a cube. But what is (the) substance that matter is made of? It is charge. Electric charge. It comes in various densities and shapes – that is why we think of it as being basically formless – but we can say a few more things about it. One is that it always comes in the same unit: the elementary charge—which may be positive or negative. Another is that the concept of charge is closely related to the concept of a force: a force acts on a charge—always.

We are talking elementary forces here, of course—the electromagnetic force, mainly. What about gravity? And what about the strong force? Attempts to model gravity as some kind of residual force, and the strong force as some kind of electromagnetic force with a different geometry but acting on the very same charge, have not been successful so far—but we should immediately add that mainstream academics never focused on it either, so the result may be commensurate with the effort made: nothing much.

Indeed, Einstein basically explained gravity away by giving us a geometric interpretation for it (general relativity theory) which, as far as I can see, confirms it may be some residual force resulting from the particular layout of positive and negative charge in electrically neutral atomic and molecular structures. As for the strong force, I believe the quark hypothesis – which basically states that partial (non-elementary) charges are, somehow, real – has led mainstream physics into the dead end it finds itself in now. Will it ever get out of it?

I am not sure. It does not matter all that much to me. I am not a mainstream scientist and I have the answers I was looking for. These answers may be temporary, but they are the best I have for the time being. The best quote I can think of right now is this one:

‘We are in the words, and at the same time, apart from them. The words spin out, spin us out, over a void. There, somewhere between us, some words form some answer for some time, allowing us to live more fully in the forgetting face of nonexistence, in the dissolving away of each other.’ (Jacques Lacan, in Jeremy D. Safran (2003), Psychoanalysis and Buddhism: an unfolding dialogue, p. 134)

That says it all, doesn’t it? For the time being, at least. 🙂

Post scriptum: You might think explaining gravity as some kind of residual electromagnetic force should be impossible, but explaining the attractive force between like charges inside a nucleus was pretty difficult as well, until someone came up with a relatively simple idea based on the idea of ring currents. 🙂

Explaining the proton mass and radius

Our alternative realist interpretation of quantum physics is pretty complete but one thing that has been puzzling us is the mass density of a proton: why is it so massive as compared to an electron? We simplified things by adding a factor in the Planck-Einstein relation. To be precise, we wrote it as E = 4·h·f. This allowed us to derive the proton radius from the ring current model:

[Formula: the proton radius from the ring current model]

This felt a bit artificial. Writing the Planck-Einstein relation using an integer multiple of h or ħ (E = n·h·f = n·ħ·ω) is not uncommon. You should have encountered this relation when studying the black-body problem, for example, and it is also commonly used in the context of Bohr orbitals of electrons. But why is n equal to 4 here? Why not 2, or 3, or 5 or some other integer? We do not know: all we know is that the proton is very different. A proton is, effectively, not the antimatter counterpart of an electron—a positron. While the proton is much smaller – 459 times smaller, to be precise – its mass is 1,836 times that of the electron. Note that we have the same 1/4 factor here because the mass and Compton radius are inversely proportional:

[Formula: the radius and mass ratios of the electron and the proton]
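For reference, the arithmetic behind these numbers is easy to reproduce. The only model input is the E = 4·h·f assumption mentioned above, so do take this as a sketch of our claim rather than as mainstream physics:

```python
from scipy.constants import hbar, c, m_e, m_p   # CODATA values

a_e = hbar / (m_e * c)        # electron Compton radius (E = h*f, i.e. n = 1)
a_p = 4 * hbar / (m_p * c)    # proton radius if E = 4*h*f (n = 4), as assumed in the text

print(f"electron radius: {a_e:.4e} m")
print(f"proton radius  : {a_p:.4e} m   (the measured charge radius is about 0.84e-15 m)")
print(f"ratio          : {a_e / a_p:.0f}    (the 459 mentioned above)")
```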

This does not look all that bad, but it feels artificial. In addition, our reasoning involved an unexplained difference – a mysterious but exact SQRT(2) factor, to be precise – between the theoretical and the experimentally measured magnetic moment of a proton. In short, we assumed some form factor must explain both the extraordinary mass density as well as this SQRT(2) factor, but we were not quite able to pin it down exactly. A remark on a video on our YouTube channel inspired us to think some more – thank you for that, Andy! – and we think we may have the answer now.

We now think the mass – or energy – of a proton combines two oscillations: one is the Zitterbewegung oscillation of the pointlike charge (which is a circular oscillation in a plane), while the other is the oscillation of the plane itself. The illustration below is a bit horrendous (I am not so good at drawings) but might help you to get the point. The plane of the Zitterbewegung (the plane of the proton ring current, in other words) may itself oscillate between +90 and −90 degrees. If so, the effective magnetic moment will differ from the theoretical magnetic moment we calculated, and it will differ by that SQRT(2) factor.

[Illustration: the oscillating plane of the proton’s ring current]

Hence, we should rewrite our paper, but the logic remains the same: we just have a much better explanation now of why we should apply the energy equipartition theorem.
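To illustrate where a factor like that may come from – a numerical aside only, not the actual derivation, which is in the paper – remember that the root-mean-square value of a sinusoidally oscillating quantity is its peak value divided by √2: one simple way in which such a factor can pop up when averaging over an oscillation.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 100_000)
signal = np.cos(t)                        # a sinusoidally oscillating quantity with peak value 1
rms = np.sqrt(np.mean(signal ** 2))       # its root-mean-square value over one full cycle

print(f"RMS / peak = {rms:.6f}   (1/sqrt(2) = {1 / np.sqrt(2):.6f})")
```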

Mystery solved! 🙂

Post scriptum (9 August 2020): The solution is not as simple as you may imagine. When adding the idea of some other motion to the ring current, we must remember that the speed of light – the presumed tangential speed of our pointlike charge – cannot change. Hence, the radius must become smaller. We also need to think about distinguishing two different frequencies, and things quickly become quite complicated.

Feynman’s religion

Perhaps I should have titled this post differently: the physicist’s worldview. We may, effectively, assume that Richard Feynman’s Lectures on Physics represent mainstream sentiment, and he does get into philosophy—more or less liberally depending on the topic. Hence, yes, Feynman’s worldview is pretty much that of most physicists, I would think. So what is it? One of his more succinct statements is this:

“Often, people in some unjustified fear of physics say you cannot write an equation for life. Well, perhaps we can. As a matter of fact, we very possibly already have an equation to a sufficient approximation when we write the equation of quantum mechanics.” (Feynman’s Lectures, p. II-41-11)

He then jots down that equation that Schrödinger has on his grave (shown below). It is a differential equation: it relates the wavefunction (ψ) to its time derivative through the Hamiltonian coefficients that describe how physical states change with time (Hij), the imaginary unit (i) and Planck’s quantum of action (ħ).

[Image: the equation on Schrödinger’s gravestone, i·ħ·∂ψ/∂t = H·ψ]

Feynman, and all modern academic physicists in his wake, claim this equation cannot be understood. I don’t agree: the explanation is not easy, and requires quite some prerequisites, but it is not any more difficult than, say, trying to understand Maxwell’s equations, or the Planck-Einstein relation (E = ħ·ω = h·f).

In fact, a good understanding of both allows you to not only understand Schrödinger’s equation but all of quantum physics. The basics are this: the presence of the imaginary unit tells us the wavefunction is cyclical, and that it is an oscillation in two dimensions. The presence of Planck’s quantum of action in this equation tells us that such oscillation comes in units of ħ. Schrödinger’s wave equation as a whole is, therefore, nothing but a succinct representation of the energy conservation principle. Hence, we can understand it.
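To make this a bit more tangible, here is a minimal numerical sketch of the two-state version of that equation, i·ħ·dCi/dt = ΣHij·Cj (the numbers in the Hamiltonian are arbitrary): the amplitudes rotate in the complex plane – the two-dimensional oscillation – while the total probability is conserved, cycle after cycle.

```python
import numpy as np
from scipy.linalg import expm
from scipy.constants import hbar, electron_volt as eV

# A Hermitian 2x2 Hamiltonian (the numbers are arbitrary, expressed in joules)
H = np.array([[1.0, 0.2],
              [0.2, 1.5]]) * eV

psi0 = np.array([1.0, 0.0], dtype=complex)   # start in the first base state

for t in np.linspace(0, 2e-14, 5):           # a few hundredths of a picosecond
    U = expm(-1j * H * t / hbar)             # exact propagator of i*hbar*dpsi/dt = H*psi
    psi = U @ psi0
    print(f"t = {t:.1e} s   |C1|^2 + |C2|^2 = {np.vdot(psi, psi).real:.6f}")
```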

At the same time, we cannot, of course. We can only grasp it to some extent. Indeed, Feynman concludes his philosophical remarks as follows:

“The next great era of awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today we cannot. Today we cannot see that the water flow equations contain such things as the barber pole structure of turbulence that one sees between rotating cylinders. We cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.” (Feynman’s Lectures, p. II-41-12)

I think that puts the matter to rest—for the time being, at least. 🙂

Complex beauty

This is an image from the web: it was taken by Gerald Brown in 2007—a sunset at Knysna, South Africa. I love the colors and magic.

[Image: sunset at Knysna, South Africa]

It is used in a Wikipedia article on Mie scattering, which summarizes the physics behind it as follows:

“The change of sky colour at sunset (red nearest the sun, blue furthest away) is caused by Rayleigh scattering by atmospheric gas particles, which are much smaller than the wavelengths of visible light. The grey/white colour of the clouds is caused by Mie scattering by water droplets, which are of a comparable size to the wavelengths of visible light.”

I find it amazing how such a simple explanation can elucidate such fascinating complexity and beauty. Stuff like this triggered my interest in physics—as a child. I am 50+ now. My explanations are more precise now: I now understand what Rayleigh and/or Mie scattering of light actually is.
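The key fact behind the Rayleigh part of that explanation is the 1/λ⁴ dependence of the scattering: shorter wavelengths are scattered much more strongly, so the blue is scattered out of the direct sunlight and the sunset looks red. The arithmetic is simple enough (the wavelengths are just representative values for blue and red light):

```python
# Rayleigh scattering strength goes as 1/wavelength^4 for particles much
# smaller than the wavelength of the light.
blue, red = 450e-9, 650e-9   # representative wavelengths (m)
ratio = (red / blue) ** 4

print(f"Blue light is scattered about {ratio:.1f} times more strongly than red light.")
```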

The best thing about that is that it does not reduce the wonder. On the contrary. It is like the intricate colors and patterns of the human eye. Knowing that it is just “the pigmentation of the eye’s iris and the frequency-dependence of the scattering of light by the turbid medium in the stroma of the iris” (I am quoting from Wikipedia once more) does not make our eyes any less beautiful, does it?

I actually do not understand why mainstream quantum physicists try to mystify things. I find reality mysterious enough already.

The nature of the matter-wave

Yesterday, I was supposed to talk for about 30 minutes to some students who are looking at classical electron models as part of an attempt to model what might be happening to an electron when moving through a magnetic field. Of course, I only had time to discuss the ring current model, and even then it inadvertently turned into a two-hour presentation. Fortunately, they were polite and no one dropped out—although it was an online Google Meet. In fact, they reacted quite enthusiastically, and so we all enjoyed it a lot. So much so that I adjusted the presentation a bit the next morning (which, unfortunately, added even more time to it) so as to add it to my YouTube channel. So this is the link to it, and I hope you enjoy it. If so, please like it—and share it! 🙂

Oh! I forgot to mention: in case you wonder why this video is different from others, see my Tweet on Sean Carroll’s latest series of videos below. That should explain it.

[Embedded tweet on Sean Carroll’s latest series of videos]

Post scriptum: Of course, I got the usual question from one of the students: if an electron is a ring current, then why doesn’t it radiate its energy away? The easy answer is: an electron is an electron and so it doesn’t—for the same reason that an electron in an atomic orbital or a Cooper pair in a superconducting loop of current does not radiate energy away. The more difficult answer is a bit mysterious: it has got to do with flux quantization and, most importantly, with the Planck-Einstein relation. I will not be too long here (I cannot because this is just a footnote to a blog post) but the following elements should be noted:

1. The Planck-Einstein law embodies a (stable) wavicle: a wavicle respects the Planck-Einstein relation (E = h·f) as well as Einstein’s mass-energy equivalence relation (E = m·c²). A wavicle will, therefore, carry energy, but it will also pack one or more units of Planck’s quantum of action. Both the energy as well as this finite amount of physical action (Wirkung in German) will be conserved—cycle after cycle.

2. Hence, equilibrium states should be thought of as electromagnetic oscillation without friction. Indeed, it is the frictional element that explains the radiation of, say, an electron going up and down in an antenna and radiating some electromagnetic signal out. To add to this rather intuitive explanation, I should also remind you that it is the accelerations and decelerations of the electric charge in an antenna that generate the radio wave—not the motion as such. So one should, perhaps, think of a charge going round and round as moving, in effect, in a straight line—along some geodesic in its own space. That’s the metaphor, at least.

3. Technically, one needs to think in terms of quantized fluxes and Poynting vectors and energy transfers from kinetic to potential (and back) and from ‘electric’ to ‘magnetic’ (and back). In short, the electron really is an electromagnetic perpetuum mobile! I know that sounds mystical (too), but then I never promised I would take all of the mystery away from quantum physics! 🙂 If there were no mystery left, I would not be interested in physics.
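For those who want to check the ring current picture with some numbers, here is a minimal sketch (just arithmetic, under the model’s own assumptions): a pointlike charge circling at the speed of light at the Compton radius a = ħ/(m·c), with the Planck-Einstein frequency f = E/h, amounts to a current I = e·f, and the magnetic moment of that current loop comes out at the Bohr magneton.

```python
import math
from scipy.constants import hbar, h, c, e, m_e, physical_constants

a = hbar / (m_e * c)           # Compton radius of the electron (m)
f = m_e * c**2 / h             # Planck-Einstein frequency, f = E/h (Hz)
I = e * f                      # the ring current: one elementary charge per cycle (A)
mu = I * math.pi * a**2        # magnetic moment of the current loop (J/T)

print(f"mu            = {mu:.5e} J/T")
print(f"Bohr magneton = {physical_constants['Bohr magneton'][0]:.5e} J/T")
```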

Amplitudes and probabilities

The most common question you ask yourself when studying quantum physics is this: what are those amplitudes, and why do we have to square them to get probabilities?

It is a question which cannot easily be answered because it depends on what we are modeling: a two-state system, electron orbitals, something else? And if it is a two-state system, is it a laser, some oscillation between two polarization states, or what? These are all very different physical systems, and what an amplitude actually is in that context will, therefore, also be quite different. Hence, the professors usually just avoid the question and brutally plow through all of it. And then, by the time they are done, we are so familiar with it that we sort of forget about the question. [I could actually say the same about the concept of spin: no one bothers to really define it because it means different things in different situations.]

In fact, I myself sort of forgot about the question recently, and I had to remind myself of the (short) answer: probabilities are proportional to energy or mass densities (think of a charge spending more time here than there, or vice versa), and the energy of a wave or an oscillation – any oscillation, really – is proportional to the square of its amplitude.

Is that it? Yes. We just have to add an extra remark here: we will take the square of the absolute value of the amplitude, but that has got to do with the fact that we are talking oscillations in two dimensions here: think of an electromagnetic oscillation—combining an oscillating electric and magnetic field vector.
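A one-line illustration of that remark – a sketch, nothing more: for a complex-valued amplitude a·e^(iθ), the square of the absolute value is the sum of the squares of the two perpendicular components, and it does not vary over the cycle, just like the total energy of a two-dimensional oscillation.

```python
import numpy as np

a = 0.8                                   # some amplitude (arbitrary value)
theta = np.linspace(0, 2 * np.pi, 8)      # a few points along one cycle
psi = a * np.exp(1j * theta)              # the complex-valued 'wavefunction'

# |psi|^2 = Re(psi)^2 + Im(psi)^2 is the same at every point of the cycle:
print(np.round(np.abs(psi) ** 2, 6))      # -> 0.64 everywhere
```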

Let us throw in the equations here: the illustration below shows the structural similarity (in terms of propagation mechanism) between (1) how an electromagnetic wave propagates in space (Maxwell’s equations without charges) and (2) how amplitude waves propagate (Schrödinger’s equation without the term for the potential resulting from the positive nucleus). It is the same mechanism. That is actually what led me to the bold hypothesis, in one of my very first papers on the topic, that they must be describing the same thing—and that both must model the energy conservation principle—and the conservation of linear and angular (field) momentum!

[Illustration: Maxwell’s equations (without charges) next to Schrödinger’s equation (without the potential term)]

Is that it? Is that all there is? Yep! We have written a lot of papers, but all they do is further detail this principle: probability amplitudes – quantum-mechanical amplitudes in general – model an energy propagation mechanism.

Of course, the next question is: why can we just add these amplitudes, and then square them? Is there no interference term? We explored this question in our most recent paper, and the short answer is: no. There is no interference term. [This will probably sound weird or even counter-intuitive – it is not what you were taught, is it? – but we qualify this remark in the post scriptum to this post.]

Frankly, we would reverse the question: why can we calculate amplitudes by taking the square root of the probabilities? Why does it all work out? Why is it that the amplitude math mirrors the probability math? Why can we relate them through these squares or square roots when going from one representation to another? The answer to this question is buried in the math too, but it is based on simple arithmetic. Note, for example, that, when insisting base states or state vectors should be orthogonal, we actually demand that the square of their sum is equal to the sum of their squares:

(a + b)² = a² + b² ⇔ a² + b² + 2a·b = a² + b² ⇔ a·b = 0

This is a logical or arithmetic condition which represents a physical condition: two physical states must be discrete states. They do not overlap: it is either this or that. We can then add or multiply these physical states – mix them so as to produce logical states, which express the uncertainty in our mind (not in Nature!) – but we can only do that because these base states are, effectively, independent. That is why we can also use them to construct another set of (logical) base vectors, which will be (linearly) independent too! This will all sound like Chinese to you, of course. In any case, the basic message is this: behind all of the hocus-pocus, there is physics—and that is why I am writing all of this. I write for people like me and you: people who want to truly understand what it is all about.
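Here is a small numerical illustration of that condition – a sketch with arbitrary coefficients: two orthonormal base states, a state written as a combination of them, and the check that the squared magnitudes of the coefficients add up to one precisely because the cross term vanishes.

```python
import numpy as np

# Two orthonormal base states: their inner product is zero (no overlap, no cross term)
e1 = np.array([1.0, 0.0], dtype=complex)
e2 = np.array([0.0, 1.0], dtype=complex)

# An arbitrary normalized state written in that basis
c1, c2 = 0.6, 0.8j
psi = c1 * e1 + c2 * e2

print(np.vdot(e1, e2))                    # 0j  -> the base states do not overlap
print(abs(c1) ** 2 + abs(c2) ** 2)        # 1.0 -> the probabilities add up to one
print(np.vdot(psi, psi).real)             # 1.0 -> same result via |psi|^2
```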

My son had to study quantum physics to get a degree in engineering—but he hated it: reproducing amplitude math without truly understanding what it means is no fun. He had no time for any of my detailed analysis and interpretation, but I think I was able to help him by telling him it all meant something real. It helped him to get through it all, and he got great marks in the end! And, yes, of course, I told him not to bother the professor with all of my weird theories! 🙂

I hope this message encourages you in very much the same way. I sometimes think I should try to write some kind of Survival Guide to Quantum Physics, but then my other – more technical – blog is a bit of that, so that should do! 🙂

PS: As for there being no interference term, when adding the amplitudes (without the 2a·b term), we do have interference between the a and b terms, of course: we use boldface letters here, but these ‘vectors’ are quantum-mechanical amplitudes, so they are complex-valued exponentials. However, we wanted to keep the symbolism extremely simple in this post, and so that’s what we did. Those wave equations look formidable enough already, don’t they? 🙂

Do we only see what we want to see?

I had a short but interesting exchange with a student in physics—one of the very few who actually reads (some of) the stuff on this and my other blog (the latter is more technical than this one).

It was an exchange on the double-slit experiment with electrons—one of these experiments which is supposed to prove that classical concepts and electromagnetic theory fail when analyzing the smallest of small things and that only an analysis in terms of those weird probability amplitudes can explain what might or might not be going on.

Plain rubbish, of course. I asked him to carefully look at the patterns of blobs when only one of the slits is open, which are shown in the top and bottom illustrations below respectively (the inset (top-left) shows how the mask moves over the slits—covering both, one or none of the two slits respectively).

[Image: electron detection patterns with the mask covering both, one or none of the two slits (2012 Nebraska-Lincoln experiment)]

Of course, you see interference when both slits are open (all of the stuff in the middle above). However, I find it much more interesting to see there is interference too (or diffraction—my preferred term for an interference pattern when there is only one slit or one hole) even if only one of the slits is open. In fact, the interference pattern when two slits are open is just the pattern one gets from the superposition of the diffraction patterns for the two slits respectively. Hence, an analysis in terms of probability amplitudes associated with this or that path—the usual thing: add the amplitudes and then take the absolute square to get the probabilities—is pretty nonsensical. The tough question physicists need to answer is not how interference can be explained, but this: how do we explain the diffraction pattern when electrons go through one slit only?

[…]

I realize you may be as brainwashed as the bright young student who contacted me: he did not see it at first! You should, therefore, probably have another look at the illustrations above too: there are brighter and darker spots when one slit is open too, especially on the sides—a bit further away from the center.

[Just do it before you read on: look, once more, at the illustration above before you look at the next.]

[…]

The diffraction pattern (when only one slit is open) resembles that of light going through a circular aperture (think of light going through a simple pinhole), which is shown below: it is known as the Airy disk or the Airy pattern. [I should, of course, mention the source of my illustrations: the one above (on electron interference) comes from the article on the 2012 Nebraska-Lincoln experiment, while the ones below come from the articles on the Airy disk and the (angular) resolution of a microscope in Wikipedia respectively. I no longer refer to Feynman’s Lectures or related material because of an attack by the dark force.]

[Image: the Airy disk]

When we combine two pinholes and move them further from or closer to each other, we get what is shown below: a superposition of the two diffraction patterns. The patterns in that double-slit experiment with electrons look like what you would get using slits instead of pinholes.

[Image: two Airy patterns at spacings near the Rayleigh criterion]

It obviously led to a bit of an Aha-Erlebnis for the student who bothered to write and ask. I told him a mathematical analysis using classical wave equations would not be easy, but that it should be possible. Unfortunately, mainstream physicists – academic teachers and professors, in particular – seem to prefer the nonsensical but easier analysis in terms of probability amplitudes. I guess they only see what they want to see. :-/
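For those who want to try that classical-wave analysis themselves, here is a minimal sketch in the Fraunhofer approximation. The wavelength and slit dimensions are illustrative assumptions (not taken from the actual experiment): the one-slit pattern is the familiar sinc² diffraction envelope, and the two-slit pattern follows from adding the two wave amplitudes before squaring – diffraction and interference out of the very same classical wave math.

```python
import numpy as np

# Assumed, illustrative numbers:
wavelength = 50e-12       # electron de Broglie wavelength (m)
slit_width = 62e-9        # width of each slit (m)
slit_sep = 272e-9         # centre-to-centre distance between the slits (m)

theta = np.linspace(-2e-3, 2e-3, 4001)                  # small diffraction angles (rad)
beta = np.pi * slit_width * np.sin(theta) / wavelength
alpha = np.pi * slit_sep * np.sin(theta) / wavelength

one_slit = np.sinc(beta / np.pi) ** 2                   # single-slit diffraction envelope
two_slits = one_slit * np.cos(alpha) ** 2               # the two wave amplitudes added, then squared

# Plot one_slit and two_slits against theta to see the envelope and the fringes under it.
print(f"first one-slit minimum at ~{wavelength / slit_width:.1e} rad")
print(f"two-slit fringe spacing  ~{wavelength / slit_sep:.1e} rad")
```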

Note: For those who would want to dig a bit further, I could refer them to a September 20, 2014 post as well as a successor post to that on diffraction and interference of EM waves (plain ‘light’, in other words). The dark force did some damage to both, but they are still very readable. In fact, the very fact that one or two illustrations and formulas have been removed there will force you to think for yourself, so it is all good. 🙂