The Quantum Thomist

Musings about quantum physics, classical philosophy, and the connection between the two.


A Universe from Nothing? Part 2: Particle Physics
Last modified on Sat Jul 13 18:37:57 2019


This is the second post in a series discussing Professor Lawrence Krauss' work A Universe From Nothing. In the previous post, I gave an introduction to the underlying cosmology, and a quick overview of the rest of his work. Since his discussion of the cosmology was broadly correct, and very well presented, I plan to skip over those chapters and concentrate on those parts of the book which I find problematic.

Introduction

Chapter 1 discusses the Big Bang model and the age of the universe. This emerged from Einstein's theory, Lemaitre's insights, and experimental evidence starting from Leavitt and Hubble and ending with more precise modern measurements estimating the distances to remote galaxies using supernovae. Chapter 2 discusses how gravitational lensing can be used to measure the mass distribution of galaxies, and how this (and the stability of galaxies) shows that there is a lot more mass out there than can be accounted for from the normal matter that makes up stars, planets and so on. The Big Bang model allows us to accurately predict the relative abundance of the lighter elements in the universe. Combined with observations of the amount of stellar matter, this allows us to calculate the total quantity of standard model matter (protons, neutrons and so on) in each galaxy. What we find is that there is a lot of stuff missing. This is known as dark matter. The precise amount of dark matter will determine whether the universe will continue expanding forever, or eventually collapse again. Chapter 3 discusses the cosmic microwave background, and measurements based on it which show that the universe is flat, on the knife edge between open and closed. The problem is that the known quantities of matter and dark matter imply that the universe should be open (expanding forever). This suggests that something else enters the picture. This is where Krauss' own research enters the story.

The definition of nothing

Moving into the start of chapter 4, we see that this can be explained if we suppose a small cosmological constant. This is an idea first introduced by Einstein, and then abandoned by him, before more recently being revived. The cosmological constant is what it says on the tin. The standard form of Einstein's field equation relates the curvature of space time on the left hand side of the equation with the stress-energy tensor, describing the total amount of matter and energy in the universe, on the right hand side. However, it is also possible, and consistent with the underlying symmetries of general relativity, to add a constant term to the field equation. Einstein proposed doing this to try to keep a static universe. That didn't work, but it can be used to fix the equations to be consistent with the experimental observation of a flat universe. This is known as dark energy (to be pedantic, a cosmological constant is not the only possible solution to the flatness problem, though it is the one that both Krauss and I favour).

So what is the cosmological constant? In classical terms, it would represent a small repulsive gravitational force throughout the universe. Too small to be noticed at the level of the solar system, but more significant when you move to the scale of galaxy clusters. As the name "dark energy" suggests, it can be placed alongside the stress-energy tensor, where it will contribute to the energy and pressure of the universe.
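To put numbers on this identification: moving the cosmological term to the matter side of the field equation, it behaves like a fluid with a constant energy density and an equal-and-opposite (negative) pressure. This is the standard textbook reading of the term, not anything specific to Krauss:

\epsilon_\Lambda = \frac{\Lambda c^4}{8 \pi G}, \qquad p_\Lambda = -\epsilon_\Lambda.

It is this negative pressure which produces the small repulsive effect described above.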

Krauss then takes this suggestion literally, and describes the cosmological constant as an energy. But whenever we think of energy, it is carried by some sort of matter, and all the matter in the universe is already accounted for in the stress energy tensor. All that is left is the vacuum. So Krauss concludes that the cosmological constant represents the energy of the vacuum.

He is, I think, getting ahead of himself here. The question is how Einstein's field equation will emerge from a quantum theory of gravity. Right now, since we don't know what the quantum theory of gravity is, that is difficult to assess. Einstein's field equation (with and without the cosmological constant) is constructed to be consistent with local transformations of the coordinates. A symmetry in physics means that the action (which determines the physical content of the theory) is unchanged under a particular transformation. In this case, the transformation is a small shift in the space and time coordinates, which differs from one place to the next, combined with a local rotation of the gauge (this is the symmetry which drives the electromagnetic, weak and strong nuclear forces) to maintain the gauge invariance of the stress energy tensor.

Now if we perform this transformation naively, then the action appears to change. The transformed action is the original action plus something else. This change can be expressed in terms of a four-by-four matrix. The calculation is slightly simpler in classical physics, and here the matrix is known as the stress-energy tensor. If the action is unchanged under the symmetry (and consequently physics is not dependent on the choice of coordinate system), then the additional term generated when we transform the coordinates has to be zero. This implies that the stress-energy tensor has to satisfy a continuity equation: the change of stress-energy in a region over time is equal to the amount entering or leaving across the spatial boundaries of that region. The derivation of the stress-energy tensor assumes the principle of least action, which is only valid in classical physics. In quantum physics, we get a correction on top of the stress-energy tensor. But, focussing on classical physics for the moment, we can say that energy is never created or destroyed, but can only move from one place to another.
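For concreteness, in flat space this conservation law is usually written as (the standard classical statement, with the index ν labelling the four energy and momentum components):

\partial_\mu T^{\mu\nu} = 0,

which says exactly what the paragraph above says in words: the change of the energy and momentum density inside a region is balanced by the flux across its boundary.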

It is possible to make a simple addition to the action (the Einstein-Hilbert term), proportional to a scalar representing the curvature, and apply the same procedure. Then, in addition to the stress-energy tensor, we get the curvature part of Einstein's field equation. To get the cosmological term, we could slightly modify the Einstein-Hilbert action, and this is what is usually done. My own calculations suggest that this is not necessary. The way we derive Einstein's field equation from the action assumes classical physics. In quantum physics, there are small corrections to this result, and these can result in a cosmological constant term.
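For reference, the usual modification mentioned here is to include the constant directly in the gravitational part of the action, which in its standard textbook form reads (written with the cosmological term included; my own suggestion above would instead generate Λ from quantum corrections):

S = \frac{c^4}{16 \pi G} \int d^4x\, \sqrt{-g}\, \left( R - 2 \Lambda \right) + S_{\text{matter}},

and varying this with respect to the metric gives the field equation quoted below, cosmological term and all.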

The procedure I described in the past few paragraphs is the simplest way to recover a theory of gravity consistent with experimental observation. We need not have a cosmological constant: there are other possibilities. These can be generated by alternative slight modifications to the action. But that need not concern us here.

Krauss' idea that the cosmological constant represents the energy of the vacuum is thus not the only option on the table. Dark energy might not be generated via a cosmological constant. Even if it is, then there are other explanations of how it could arise (such as my hypothesis of a quantum correction to the stress energy tensor). But let's let this pass, and move on. I don't think that he is right here, but he could be. We are missing the theory we would need to be sure one way or the other.

Krauss now defines what he means by nothing.

But what kind of stuff could contribute such a term?

The answer is nothing.

By nothing I do not mean nothing, but rather nothing -- in this case the nothingness we normally call empty space. That is to say, if I take a region of space and get rid of absolutely everything in it -- dust, gas, people, and even the radiation passing through, namely absolutely everything within that region -- if the remaining empty space weighs something, then that would correspond to the existence of a cosmological term such as Einstein invented.

Obviously, taken in isolation, the sentence "By nothing I do not mean nothing, but rather nothing" is just meaningless twaddle. But in context, we can see what he means. In Einstein's field equation, the curvature of space time is equal to the difference between two terms. The first is the stress-energy tensor, which is seen as being the sum total of the energy of all the stuff that he mentions: dust, gas, people, radiation and so on. The second part is the cosmological term. What Krauss is saying is that if you take away the stress-energy tensor, you will still have curvature of space time caused by the cosmological term. So Krauss defines nothing as the absence of everything that contributes to the stress-energy tensor, but the presence of whatever causes the cosmological term, and whatever is represented by the space time metric.

There are, of course, several issues with this. Firstly, Einstein's field equation is not the most fundamental physics. It likely arises from the classical limit of a quantum theory of gravity. We don't yet know what that theory is. There are a few candidates that have been proposed; as yet it has not been shown which if any of them are correct. We don't yet know what causes the cosmological term to arise. It might be that it comes from the same sort of stuff that gives rise to the stress energy tensor (perhaps via some quantum corrections to the naive classical expectation value). Unless Krauss can show that whatever leads to the cosmological term is truly independent of the usual matter fields, getting rid of the gas, radiation, and so on might also get rid of the cosmological term.

Secondly, we can take a look at Einstein's field equation:

R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu} - \Lambda\, g_{\mu\nu}

T represents the stress energy tensor, R the curvature, g the metric tensor, Λ the cosmological constant, c the speed of light and G Newton's constant. This represents the classical limit of an equation that follows from the quantum theory of gravity. That means that each term in this equation will represent the vacuum expectation value of some quantum field. We already know how to represent the stress energy tensor in terms of the quantum fields that represent the ultimate building blocks of dust, gas, people, radiation and so on. The other terms must also be reducible to quantum fields (since the equation classical = quantum doesn't make any sense), which means that the metric g (and consequently the curvature, which is a function of the metric), the inverse of Newton's constant, the cosmological constant, or some combination of these must be the expectation value of some quantum object. Most approaches to quantum gravity attempt to quantise the metric, but my point doesn't depend on how it is done.

Particles represent excitations of these fields. An excitation is a higher energy state, and solving the eigenvalue equation for the Hamiltonian operator for those fields shows that the energy comes in discrete lumps. So the ground state is zero energy (no particles), then you have the first excited state (one particle), the next excited state (two particles) and so on. That's how it works for the particles that give rise to the Stress-Energy tensor. There must be some particle present to have a non-zero value of an expectation value (certainly in quantum field theory, where every operator is constructed from the particle creation and annihilation operators). Quantising the other two terms in the equation will lead to something similar. An underlying quantum field with energy excitations which represent particles.
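As a sketch of what is being described: for a free field, each momentum mode k behaves like a harmonic oscillator, and (once the zero-point constant is subtracted, as in the normal-ordered Hamiltonian) the eigenvalue equation gives evenly spaced levels, each step up corresponding to one more particle. In natural units,

H = \sum_k \omega_k\, a_k^\dagger a_k, \qquad H\, |n_k\rangle = n_k\, \omega_k\, |n_k\rangle, \qquad n_k = 0, 1, 2, \ldots

This is only a free-field sketch; the interacting case is more involved, but the picture of discrete excitations is the same.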

Now Krauss has defined nothing so that in nothingness there are no excitations of the quantum fields which are the building blocks of the stress-energy tensor. But he allows excitations of the quantum fields which give rise to the other two terms in the equation. This division saying that some field excitations are something and others are nothing is obviously completely arbitrary. In a true quantum theory of gravity, there would be no ontological distinction between the quark, lepton, photon and so on fields on one hand and the graviton fields on the other. One should either exclude all quantum fields, or none of them. Of course, even if we get rid of all these excitations, we are still left with the fields themselves, which most people would still call something.

This is the main charge which people have raised against Krauss. He can only get a universe from nothing by redefining the meaning of the word nothing. Of course, he is free to define his terminology as he sees fit when he is not interacting with anyone else. The problem is that he is attempting to answer a question raised by philosophers and theologians: "Why is there something rather than nothing?" Unless you use definitions which are equivalent to the original definition, or more general than it (and have the philosopher's definition as a special case), you are not answering the question. If you do propose an alternative definition (and there is nothing wrong with that in itself), you then have to show that it is equivalent to the original in that it points to the same things. That takes work; work which Krauss has not done.

For example, suppose that I had the following conversation with a zoologist.

Zoologist: We can confidently say that there are no dragons in Africa.

Me: What are you talking about? Africa is full of them.

Zoologist: We have done a complete survey --.

Me: A dragon is defined as a large grey herbivore mammal, with ivory tusks, big ears, and a long trunk.

Zoologist: That's an elephant.

Me: That's just semantics. They are all the same thing really. So do you deny that there are such animals in Africa?

Zoologist: There are certainly African elephants, albeit (thanks to poachers) fewer than I would like.

Me: Precisely. So there are dragons in Africa.

Zoologist: But a dragon is a large, fire breathing reptile. There aren't any of those in Africa.

Me: But we both know that, scientifically, fire breathing reptiles are impossible. So when people talk about dragons, they can't mean fire-breathing reptiles. Or if they do, it's an irrational category. They must be referring to what you call elephants.

Zoologist: But elephants aren't dragons.

Me: Stop avoiding the argument by continuing to peddle your irrational terminology. I'm with science on this one.

The point is that the zoologist is the person who made the statement I am disputing. He is therefore the one who decides how key terms are defined. I am not at liberty to change those definitions while pretending to dispute his statement. I have not shown that there are dragons in Africa according to his definition of the word dragon.

In the case of Krauss, the theologians and philosophers define nothing as the absence of anything physical. That is no excitations of any fields, and, indeed, no fields themselves or even space time. To answer their question, Krauss would need to show that his definition is equivalent to theirs. Which blatantly it isn't.

Bad Particle Physics

Krauss goes on:

Alas, most fourth graders have not taken quantum mechanics, nor have they studied relativity. For when one incorporates the results of Einstein's special theory of relativity into the quantum universe, empty space becomes much stranger than it was before.

This is a statement that I mostly agree with, with the exception of the words empty space. What Krauss means by that is the quantum vacuum. The quantum vacuum is strange. It is a mass of topological gluonic objects: solitons, monopoles, vortices, glueballs and so on. Each of these carries energy. So get rid of all the Fermionic matter (electrons, quarks, protons, neutrons, atoms and so on), and you will still have these objects there. What this means is that the quantum vacuum is not empty space: empty space would be the absence both of Fermionic matter, and these gluonic structures. The quantum vacuum plays an important role, because it is the background canvas against which we perform calculations. It doesn't have to be empty space to fulfil that role.

But I don't think that these topological structures are what Krauss means. He proceeds by giving a brief historical introduction to quantum mechanics, leading to the Dirac equation, and the discovery of the positron (anti-electron) and anti-particles. This is all good stuff, until we reach this point:

Legendary physicist Richard Feynman was the first person to provide an intuitive understanding of why relativity requires the existence of antiparticles, which also yielded a graphic demonstration that empty space is not quite so empty.

Feynman recognised that relativity tells us that observers moving at different speeds will make different measurements of quantities such as distance and time. For example, time will appear to slow down for objects moving very fast. If somehow objects could go faster than light, they would appear to go backward in time, which is one of the reasons that the speed of light is normally considered a cosmic speed limit.

Krauss was right that Feynman was a legend, but then he forgets basic undergraduate physics. Time dilation is described by the Lorentz transformations, which relate the time and space coordinates in two different reference frames moving at a constant relative velocity to each other. Suppose that I have a friend on a fast rocket who blasts past me at a constant speed v and into space. We agree that the moment he passes me is time zero and location zero. We also agree that we will construct our own coordinate systems, with ourselves at the origin. We each track some other object, and note down its position in our coordinate system. I record it as being at a location x and at a time t. My friend records its location as x' and time as t'. Clearly there will be some relationship between the numbers (x,t) and (x', t').

The natural thing to write down is the following:

x' = x - vt, \qquad t' = t \qquad \text{(the Galilean transformation)}

This makes intuitive sense. I record the rocket as being at a location vt in my coordinate system at any given moment of time, and to get his coordinate we would just have to subtract this amount.

But Einstein realised that these intuitive equations are wrong. The correct equations are:

x' = \frac{x - vt}{\sqrt{1 - v^2/c^2}}, \qquad t' = \frac{t - vx/c^2}{\sqrt{1 - v^2/c^2}} \qquad \text{(the Lorentz transformations)}

Suppose (to make the equations easier), that my friend is travelling at four fifths of the speed of light. The object we are keeping track of is a clock resting on my friend's dashboard. So x' = 0, x = vt. That makes t' = 3 t/5. So for every five seconds that I experience, I see my friend's clock go forward by only three seconds. This is what we mean by time slowing down.
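Explicitly, putting the numbers into the Lorentz transformation above (just spelling out the arithmetic quoted in the text):

v = \tfrac{4}{5} c \;\Rightarrow\; \sqrt{1 - v^2/c^2} = \sqrt{1 - \tfrac{16}{25}} = \tfrac{3}{5}, \qquad t' = \frac{t - v (vt)/c^2}{3/5} = \frac{(9/25)\, t}{3/5} = \frac{3t}{5}.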

But notice what happens when the speed goes faster than the speed of light. It's not that time goes backwards, but that time becomes an imaginary number. A similar equation applies to energy and momentum, so energy would also become an imaginary number. This is impossible, since energy is an eigenvalue of the Hamiltonian operator, which is Hermitian, and the eigenvalues of a Hermitian operator are always real.
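The same factor makes the point directly: if v were greater than c then

1 - \frac{v^2}{c^2} < 0,

so \sqrt{1 - v^2/c^2} is the square root of a negative number, and t' (and, via the analogous energy-momentum formula, the energy) comes out imaginary rather than negative.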

So when Krauss discusses time going backwards, he is talking nonsense. But it gets worse.

A key tenet of quantum mechanics, however, is the Heisenberg uncertainty principle, which, as I have mentioned, states that it is impossible to determine, for certain pairs of quantities, such as position and velocity, exact values for a given system at the same time. Alternatively, if you measure a given system for only a fixed, finite time interval, you cannot determine its total energy exactly.

What all this implies is that, for very short times, so short that you cannot measure their speed with high precision, quantum mechanics allows for the possibility that these particles act as if they are moving faster than light! But, if they are moving faster than light, Einstein tells us they must be behaving as if they are moving backward in time.

The uncertainty principle is a relationship between the wavefunctions of a particle in different bases. Each wavefunction is a mathematical distribution. For example, for the location wavefunction, its value at any one point gives the amplitude that we will find the particle there. Given the location wavefunction, we can also change the basis to construct another distribution, which describes the amplitude for finding that the particle has a given momentum. We can calculate a mean value and standard deviation (a measure of how spread out the function is) for each. The uncertainty principle is a mathematical result saying that the standard deviation of the location wavefunction multiplied by the standard deviation of the momentum wavefunction has to be greater than or equal to a particular number (ℏ/2). In other words, if we can be confident of where the particle is, the location wavefunction will be sharply peaked. So it will have a small standard deviation. This means that the standard deviation of the momentum wavefunction will have to be large to compensate, and we will be very uncertain of the particle's momentum. There is a similar relationship between the distributions representing the particle's energy and time.
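Written out, the relations being described are the standard ones (σ denotes the standard deviation of the corresponding distribution; the energy-time version is heuristic rather than a strict operator inequality):

\sigma_x\, \sigma_p \ge \frac{\hbar}{2}, \qquad \sigma_E\, \sigma_t \gtrsim \frac{\hbar}{2}.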

Of course, what this means in practice depends on our interpretation of quantum physics, and whether we view the wavefunction as physical, or merely a parametrisation of our knowledge. If it is a parametrisation of our knowledge, then the uncertainty principle is what its name implies, and doesn't say anything about what's actually going on with the particle. If it represents something physical, then particles are spread out in space rather than at specific points, and the uncertainty principle just relates the spread in location and momentum. My own interpretation is a hybrid of the two approaches, treating it as purely about our knowledge for some parameters, and partly physical and partly about our knowledge for others. For location, momentum, energy and time I regard it as describing our knowledge of the particles.

Krauss is a little unclear at this point in the passage. Saying that the particles "act as if" they are moving faster than light implies that he views the wavefunction as being about our knowledge of the particle. But as he moves from his conclusion that the particles act as though they are travelling backwards in time to the claim that they really are travelling back in time, he must think that the wavefunction describes the actual state of the particle.

Krauss makes a number of basic mistakes in these two paragraphs:

  1. The uncertainty principle links position and momentum, not position and velocity. Although the two are linked by a simple equation in classical physics, in special relativity it is more complex. An infinite momentum implies a velocity equal to the speed of light. In quantum physics, it is even more complex: there is no unambiguous definition of velocity. Momentum is the key observable that describes a particle's state.
  2. The time/energy uncertainty principle links uncertainty in time with uncertainty in energy. Krauss implies that it links time with velocity, saying that for short periods of time, the particle can borrow a large amount of velocity.
  3. Even if we accept Krauss' interpretation of the uncertainty principle, it does him no good. The more energy a particle has (as measured in a given reference frame), the faster it goes, but that velocity tends towards a maximum of the speed of light (the relation is spelled out just after this list). Infinite energy implies a particle moving at the speed of light. So even if the particle does borrow a bit of energy over short time periods, that's not going to push it past light-speed, and certainly not send it backwards in time.
  4. Even if the particle was pushed beyond lightspeed, that wouldn't make it go backwards in time.
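
The relation referred to in point 3 is just the standard special-relativistic one:

E = \gamma m c^2, \qquad p = \gamma m v, \qquad \Rightarrow \qquad v = \frac{p c^2}{E},

so however much energy the particle borrows, v approaches but never exceeds c; only a massless particle, with E = pc, travels at exactly the speed of light.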

Krauss interprets particles going backward in time as antiparticles. So he sees a particle as travelling along, it gains a bit of velocity due to quantum uncertainty, goes past lightspeed and turns into an anti-particle, and then drops back down to normal speed and becomes a particle travelling forwards in time again. Thus for a short period of time, we have two particles and one anti-particle. He calls these particles appearing for a short time period virtual particles.

Of course, particle/anti-particle creation and annihilation is seen in physics. You do have cases where before you had one particle, and then a little later you have three. This can happen, for example, in quantum mechanics, when a particle encounters a large potential barrier. But this has nothing whatsoever to do with the uncertainty principle. The generation of a particle/anti-particle pair is always associated with the destruction of a gauge Boson such as a photon. The destruction of the pair is always associated with the generation of a gauge Boson. Krauss' illustrations make no mention of these photons. If he did include them, then his picture would have a far simpler explanation. We start with a photon and electron. The photon decays into an electron/anti-electron pair. We now have three electron type particles. Then the anti-electron annihilates with the electron, creating a photon. We are now back to the photon and electron. Nowhere have we had particles going backwards in time for short periods, or appearing out of nowhere.

And, of course, even Krauss' picture doesn't involve particles arising from nothing. There always has to be an initial particle.

Indeed, Krauss writes as though the three particle state can only exist for short time periods (the vacuum can borrow a little energy, as long as the energy borrowed multiplied by the time period is of the order of Planck's constant). That's not the case. We can have an electron emitting a photon, which decays into an electron/anti-electron pair. Our initial state was one fermion; our final state is three.

Suppose we are given an initial state and a final state, and are asked to calculate the amplitude that one will become the other. In the process, many additional particles could be generated and annihilated. Physicists tend to call the initial and final states, which we measure, real particles, and the stuff in the middle, which we don't, virtual particles, but the difference between them is more a matter of semantics than anything else.

Krauss justifies the existence of virtual particles by describing a predictive success of the theory that depends on them: the Lamb shift correction to the energy levels of the hydrogen atom. The energy levels of the hydrogen atom are described very well by non-relativistic quantum mechanics. Add in various spin/orbit interactions, and we do better. Add in relativistic effects from Dirac quantum mechanics, and the calculation gets better still. But still not quite perfect. To get experiment and theory to agree perfectly (or at least to an absurdly high precision), we need to include the effects of particle creation and annihilation and quantum field theory: in Krauss' terminology, the virtual particles. Krauss is quite correct here. But he is quite wrong to say that this supports his interpretation of the physics. The calculation does not rely on electrons breaking the light speed barrier and becoming positrons, but on the processes of photon decay and electron positron annihilation into photons. None of the virtual particles emerges from nothing: they all come from the decay of some other particle. The experiment is a powerful vindication of quantum field theory, and therefore it shows Krauss' explanation of the physical processes to be wrong. Remember that Krauss' virtual particles are things that seem to pop into and out of existence from nothing.

Krauss next discusses numerical calculations of the mass of the proton, and now he really is moving onto my home ground. A proton is made up of three quarks. The proton's mass is about 940MeV. The masses of the quarks are between about 2MeV and 5MeV. Clearly there is a lot of mass missing. It comes from the binding energy holding the quarks together; mostly gluon fields emitted by one quark and absorbed by another (some of which decay into quark/anti-quark pairs, which subsequently annihilate each other again, and so on). Once again, Krauss tries to use this as justification that particles can just pop into and out of nothing, while the actual calculation says the opposite.

Now, if we can calculate the effects of virtual particles on the otherwise empty space in and around atoms, and we can calculate the effects of virtual particles on the otherwise empty space inside of protons, then shouldn't we be able to calculate the effects of virtual particles on truly empty space?

Well, this calculation is actually harder to do.

Indeed it is, because, in reality, virtual particles only emerge from the decay of something else. There are no virtual particles in truly empty space. If there were, then that space wouldn't be empty. That's not Krauss' explanation, though.

This is because, when we calculate the effect of virtual particles on atoms or on the proton mass, we are actually calculating the total energy of the atom or proton including virtual particles; then we calculate the total energy that the virtual particles would contribute without the atom or proton present (i.e. in empty space); and then we subtract the two numbers in order to find the net impact upon the atom or proton. We do this because it turns out that each of these two energies is formally infinite when we attempt to solve the appropriate equations, but when we subtract the two quantities [we obtain a finite difference], and moreover one that agrees precisely with the measured value.

Having done some of these calculations myself, I am unsure what Krauss is referring to here. Firstly, it is fairly obvious that the quantity we calculate, the total energy of the atom or proton including virtual particles, is not infinite: it is what we measure. In the proton, the three quarks have a mass of about 10MeV. The total proton mass is about 940MeV. That remaining 930MeV comes from the virtual particles, the gluons, photons (plus the weak force carriers), and the various quarks and electrons that emerge briefly from gluon or photon decay and then annihilate each other again. So the number we calculate, which corresponds to the measured value, is what you get when you include virtual particles. And it isn't infinite.

There are two things which Krauss could be discussing. The first is that the amplitude for any quantum process is the ratio of two numbers. Both the numerator and denominator contain various operators which combine to act on a vacuum state, and then are compared against a vacuum state. In the numerator, we have creation and annihilation operators representing the initial and final states, plus the Hamiltonian operator which describes how matter evolves. The Hamiltonian operator is made up of additional creation and annihilation operators, and it is these which represent the emergence of virtual particles. The denominator just has the Hamiltonian operator acting on the vacuum state.
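For readers who want to see the structure, the amplitude is usually written schematically as a ratio of vacuum expectation values, something like (a schematic version of the standard interaction-picture expression, with the time-ordering details and field insertions suppressed):

\langle f | S | i \rangle \;\sim\; \frac{\langle 0 |\, a_{f}\; T \exp\!\left(-i \int dt\, H_I(t)\right) a_{i}^\dagger\, | 0 \rangle}{\langle 0 |\, T \exp\!\left(-i \int dt\, H_I(t)\right) | 0 \rangle}.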

That denominator could represent the sort of processes that Krauss is discussing: particle/antiparticle pairs emerging from the vacuum and then immediately being absorbed into the vacuum. So when we expand the operators in the denominator, we could get nothing happening, or an electron/anti-electron pair emerging from the vacuum and then being absorbed again, or two of them, and so on. The denominator is the sum of all these possibilities. But if such processes are possible (remember -- the calculation maintains the conservation of energy, so things don't pop out from nothing. But also remember that the vacuum state isn't nothing, but contains topological gluonic structures, so such effects could happen), then the same processes occur in the numerator. But their contribution multiplies the numerator. So if the numerator describes two electrons interacting, then the total amplitude will be all the stuff involving the electron interaction with nothing happening in the vacuum, plus the electron interaction with an electron/anti-electron pair emerging from the vacuum, and so on. So we get the electron interaction multiplied by the same series that we see in the denominator of the expression. The series in the numerator cancels against the denominator, and these disconnected vacuum processes have no effect.

This raises the question of whether these disconnected diagrams are merely an artefact of the mathematics, or something physical. My own feeling is that they are just an artefact: they don't interact with anything, or have any effect on any observable quantity. Neither would they contribute to gravity, since energy is conserved in any quantum process, and thus there is nothing new to warp space time. So such an effect doesn't affect matter, isn't subtracted, and doesn't introduce infinities, so doesn't seem to be what Krauss is referring to.

The second possibility is that he is thinking of renormalization. Once again, let's use an electron/electron scattering as an example.

The simplest interaction between the two particles is that one electron emits a photon, which is absorbed by the other electron. In this case, we have an initial state where the two electrons have a particular energy and momentum (pa and pb), and are targetting a final state where the electrons have another given energy and momentum (p1 and p2). The photon's energy and momentum must then be the difference between the initial and final state values, (pa - p1).

The next simplest interaction is where the electron emits a photon, which decays into an electron and anti-electron, which then annihilate each other back into a photon, which is absorbed by the second electron. This is known as a loop diagram, because when you draw a picture of it you get a little loop in the middle representing that electron/anti-electron pair which briefly emerges and then annihilates. The momenta of the photons are fixed, but only the total momentum of the electron/anti-electron pair is determined. Each particle could by itself have any momentum. So we have to integrate or sum over all the possibilities. Since there are an infinite number of possibilities, if one does the calculation naively, this leads to an infinite result for the amplitude. Which is clearly nonsense.
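Schematically, the loop forces an integral over the unconstrained momentum k circulating around the electron/anti-electron pair, of the rough form (a sketch only; the full expression carries Dirac traces and numerator factors that I am suppressing):

\int \frac{d^4 k}{(2\pi)^4}\; \frac{1}{\left(k^2 - m^2\right)\left((k+q)^2 - m^2\right)},

where q is the photon momentum and m the electron mass; at large k the integrand falls off too slowly and the integral diverges.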

The ultimate reason why we get an infinite result is that there is no single unique way to represent the particle creation and annihilation operators. One of these representations corresponds to the physical basis. When writing down the theory, we use the representation where everything is as simple as possible. Unfortunately the simplest representation of these operators is not the physical representation: that would only be the case with an outstanding piece of luck. The creation operators used for the virtual particles are not in the same basis as the operators we ought to be using for the physical initial and final states. So we would expect a nonsense result from the naive calculation with this inconsistency. The difference between the bases is just some momentum dependent factors which multiply the fermion field, particle mass, and coupling with the photons. So what we need to do is find the correct multiplicative factors, and put them into the equations. We probably ought to do this at the start of the calculation; however the mathematics allows us to adjust the masses, charges and so on after we have performed the integration, which is fortunate, because that's much easier. The process of finding these factors is known as renormalization. We don't know a priori what these factors should be, but we do have one clue: using the renormalised parameters ought to give a finite result for the amplitude.

The way we do the calculation is first of all to regulate the theory, by artificially modifying the integral over momenta slightly so that it isn't infinite (though, since it is not the right integral, it gives the wrong result), then to find the correct multiplicative factors which leave the result finite when we remove the regulator, and then to extrapolate to what we would get if we used the unregulated integral.

For example, the most common regulator used to perform these calculations is dimensional regularisation. It turns out that the calculation is only divergent in four or more dimensions (three space and one time dimension). So we do the calculation in 4-ε dimensions, which, instead of infinities, leads to terms containing log ε and 1/ε appearing in the result of the calculation. We can then re-define the variables for the masses, couplings, and particle normalisation to precisely cancel out all these divergent factors (for example, we can say that the mass of the particle in the simplest representation of the action is something plus 1/ε, i.e. infinite when we take ε to zero; the adjustment we make to obtain the physical, renormalised, mass turns that infinite number into something finite). This gives us an expression which converges when we send ε to zero. We then systematically improve the calculation of the masses and so on as we consider more complicated diagrams. It is basically just a mathematical trick used to systematically correct for the original inconsistency in the basis.
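As an illustration of the sort of statement being made (a generic example rather than the specific diagram above): a typical logarithmically divergent integral, evaluated in 4-ε dimensions, takes the form

\int \frac{d^{4-\epsilon} k}{(2\pi)^{4-\epsilon}}\; \frac{1}{\left(k^2 - \Delta\right)^2} \;\propto\; \frac{2}{\epsilon} - \ln \Delta + \text{finite terms},

so the would-be infinity shows up as a pole in 1/ε, which is then absorbed into the redefined masses, couplings and normalisations.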

A second, less commonly used but still mathematically consistent regularisation method is Pauli-Villars regularisation. The infinities appear in the calculation at very large momenta, where the particle mass is an irrelevance. Equally, an infinitely massive particle can't interact with anything (in quantum field theory). So every time we face one of these divergent integrals, we artificially subtract from it a similar integral for a particle with a very large mass, M. This leads to terms proportional to M appearing in the expression. We again adjust the charge, mass and fermion normalisation to compensate for these factors of M, and can then remove the artificially introduced integral by sending M to infinity.
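In formulae, the Pauli-Villars prescription amounts to replacing each offending propagator factor along the lines of (again a sketch):

\frac{1}{k^2 - m^2} \;\longrightarrow\; \frac{1}{k^2 - m^2} - \frac{1}{k^2 - M^2} = \frac{m^2 - M^2}{\left(k^2 - m^2\right)\left(k^2 - M^2\right)},

which falls off as 1/k^4 rather than 1/k^2 at large momenta, taming the divergence; the M-dependence is then absorbed into the renormalised parameters before M is sent to infinity.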

So the process of renormalization involves subtracting infinities from infinities. But it is not as Krauss describes. It is not subtracting the virtual particle contribution from the physical particle contribution, but converting from a representation of the fermion creation and annihilation operators where the particle masses are infinite to one where they are finite.

But whatever Krauss means, it is clear that his discussion doesn't describe the physics. We are not subtracting the effects of the virtual particles from the physical particles. The virtual particles still contribute to the calculation. We are correcting the parameters of the unrenormalised (and unphysical) theory to mimic the calculation we would have got if we had started from the renormalised (and physical) theory. The infinities are caused by a mismatch between the basis needed by our initial and final states and the basis used in the unrenormalised Hamiltonian. These physical particles define the correct basis to use in the calculation. Try to perform a calculation without initial and final states, and we can't make this correction, because we don't know which basis we are targetting.

So Krauss' attempt to calculate the effect of virtual particles on empty space is doomed to failure, because virtual particles only emerge when there is some other matter present. It is based on both bad physics and bad philosophy.

He can't rely on renormalization to remove his infinities, because renormalization requires an initial (physical) state. So he has to wave his hands around. We know, of course, that these calculations are performed in the context of the standard model of particle physics, which excludes gravity. However, once we reach the very large particle momenta which lead to the infinities, gravitational effects are certainly going to be important. So the theory is going to break down there anyway. We know at what energies gravitational effects will become important, and can use this as a cut-off for the integral.

Recall that the point of this chapter is Krauss' belief that the cosmological constant arises from virtual particles spontaneously popping in and out of existence in the vacuum. His goal is to calculate the magnitude of the cosmological constant based on this idea. If he does the calculation naively, since he can't renormalise, he gets an infinite result. So instead he regulates the theory by appealing to quantum gravity, which allows him to get a finite result, which is better than infinite. His result is, however, 120 orders of magnitude too large.
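For context, the figure comes from the standard back-of-the-envelope estimate: cutting the vacuum energy off at the Planck scale gives an energy density of very roughly

\rho_{\text{vac}} \sim \frac{\left(M_{\text{Pl}} c^2\right)^4}{(\hbar c)^3} \sim 10^{113}\ \text{J/m}^3,

while the observed dark energy density is of order 10^{-9} J/m^3, a mismatch of some 120 orders of magnitude (the exact figure depends on the conventions used for the cut-off).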

For most people, this would be discouraging. The obvious conclusion to draw would be that the cosmological constant doesn't arise from virtual particles in the vacuum. This is reinforced by the observation that in every QFT calculation, virtual particles only emerge from the decay of some other particle: they don't emerge from nothing. But Krauss takes a different view. He makes a hand-waving appeal to some symmetry cancelling out this effect, and carries on regardless.

Conclusion

This chapter is crucial to Krauss' overall thesis. He needs virtual particles to emerge spontaneously from the vacuum for his idea of the universe arising from nothing to succeed. The chapter also contains his definition of nothing.

However, the physics is very poor. He makes mistakes in special relativity, in the interpretation of the uncertainty principle, in how virtual particles arise, in the nature of the quantum field theory vacuum, and in what virtual particles are.

It will be a while before he makes use of these ideas. Krauss now returns to his home territory of cosmology for several chapters, and, as I mentioned in my introduction to this series, when he focusses on cosmology he writes very well. I'll briefly glance at those chapters in the next post, and then pick up the story in chapter 8.





Reader Comments:

1. James
Posted at 02:19:19 Wednesday May 29 2019



@Nigel Cundy

I've been enjoying this series so far but recently I came across something and was wondering what your view/response to it.

https://www.reddit.com/r/classicaltheists/comments/aghkmq/objections_against_an_essentially_ordered_series/

Here is the link. It is about whether hierarchical series exist, but since it seems to use scientific knowledge to discredit their existence, I was wondering how you viewed it/thought if it was correct or wrong. (if you have the time).

Thanks

2. Nigel Cundy
Posted at 08:29:25 Thursday May 30 2019



Thanks for your comment James. I've had a look at the article, and will try to respond in the next few days.

3. Scott Lynch
Posted at 22:15:14 Thursday May 30 2019

More comments on your book

Dr. Cundy,

I have finished Chapter 11 of your book (working on Chapter 12). It is still very good, although now the mathematics is very above my level. I will have to dig into operator algebra and bracket notation one of these days.

I have one suggestion for a footnote for Chapter 11:

Do you think that mechanism is still trying to sneak itself into Quantum Field Theory as it did with Quantum Mechanics (even though it has been made untenable as a philosophy for a few decades by the time QFT has matured)? You pointed out Everett’s Multiple Worlds interpretation of QFT as a way to try to preserve mechanism. I think your counter points are well-taken.

I would also suggest that the very language of “creation” and “annihilation” operators implies mechanism. Obviously you are stuck with the terminology, so I do not fault you for using it, but the idea that a change in form (for example from an electron-positron pair to a photon) implies creation or annihilation seems to assume that Aristotelian substantial change is impossible. Aristotle and the Scholastics would have used the term “generation” and “corruption” operators. That is what your description of particle decay seems to imply. Nothing is being created or destroyed (per conservation of energy, momentum, and quantum number). Rather things are being substantially changed into new substances (generation) and from new substances (corruption). To a mechanist, however, substantial change looks like creation and annihilation, and so perhaps that is why the terminology developed that way. I would love to hear your thoughts on that.

Also, a bit of nit-picking, you describe true creation and annihilation (of something from and into nothingness, respectively) as “change”. I cannot speak to all of the Scholastics, but Thomas Aquinas would not consider this as technically true. He would define “change” as the actualization of a potential. However, “nothingness” is the absence of anything, even potentia. When God creates a particle, he creates its potentia along with its actual state simultaneously. It is not as if there is a bunch of prime matter existing for an indefinite period of time with no form waiting to be actualized. There is no time before the existence of actualized matter.

Granted, this does not really hurt the points you are trying to make. I am not looking at the book right now, but I recall you making the point that a potential cannot actualize itself. Of course, if a potential cannot actualize itself, then a fortiori, “nothingness” cannot actualize itself (if you accept the PSR, which you have already established).

Finally, this is a little in the weeds, but I like how you qualify that the Hamiltonian is not the whole picture, but it is at least part of the picture. Many Scholastics would say that qualia (the way sensations appear to our common sense) are a feature of material reality. The Enlightenment philosophers tended to relegate that to the soul (Cartesian Dualism) or deny reality altogether (Idealism) or deny consciousness altogether (which is a bit more modern).

Do you think that we will ever be able to express qualia mathematically? Or do you believe it is one of those features that is intrinsically non-mathematical, even if features of consciousness can be expressed in an abstract way? (I tend to take the latter view). I think (along with the traditional hylemorphic philosophical position) that the unity of conscious experience tends to reinforce the idea that there is an ontological difference between an object and the sum of its parts. This tends to reinforce your hylemorphic interpretation of the fact that the Hamiltonian of water is not a linear combination the Hamiltonians of oxygen and hydrogen. Therefore, it is encouraging that your interpretation of QFT has precedent in common sense.

4. Scott Lynch
Posted at 02:24:38 Friday May 31 2019

Reply to James

Dr. Cundy,

I also wanted to ask you some questions about hierarchical causal series. I wanted to wait until I read chapters 12 and 13 of your book first though, as I think it may address my questions. But I would love to hear your response to the Reddit post as well.

5. Nigel Cundy
Posted at 19:19:13 Sunday June 2 2019

Response to Scott

Thanks for your comments.

I think you have mentioned "generation and corruption" against "creation and annihilation" before. I take your point; I am stuck with the language since it is standard, but I ought to add a footnote to say why it is misleading. I doubt that a conscious mechanism inspired the terminology, since it came in in the late 40s and 50s, when physicists had started to be more philosophically uninformed (the analogues in quantum mechanics, constructed by philosophically informed physicists, are called "raising and lowering operators", which describe transformations from one energy state to another). But it could well be an unconscious bias. I do see examples of people saying that these particles are pulled in from the vacuum, or go back into the vacuum. There is a hint of mechanism in this interpretation -- the denial of substantial change, but the belief that particles have to go somewhere or come from somewhere.

Do you think that we will ever be able to express qualia mathematically?

I would guess (and it is a guess) that the answer is probably to a certain extent yes, but not fully. I don't know how far neuroscience has proceeded along these lines, but I would expect that it would be possible to represent the various brain states associated with these sensations in an abstract and thus potentially mathematical way. I would also say that these brain states would in some way mirror at least partially the states of the thing we are looking at (so when I glance out of my window at a red rose, the brain states that give rise to my perception of redness are a way of representing the energy eigenstates of the surface of the rose petal; in this way the form of the rose is, at least partially, in my mind -- the same raw data, just stored in a different format). However, these representations would not be reality. There would still have to be something concrete that makes the qualia and sensations real and not just an abstract representation, and this is the sort of thing that cannot be captured in an abstract form. This is analogous to the distinction between form and matter. We can represent the form at least partially in an abstract formulation, but not the matter.

6. Nigel Cundy
Posted at 21:00:35 Sunday June 2 2019

Essential Series examples

My initial thoughts to that reddit post (I can't claim that this is the perfect response, but simply what came to me this evening as I thought about it):

1) It is good to see an atheist tackle this subject and argument.

2) It would also be useful to have a genuine philosopher comment on this for another and possibly more philosophically rigorous discussion. But I'll do my best.

3) The post mentions 3 examples of essentially ordered series. 1) book sits on table, which sits on floor, etc. 2) stone moved by stick moved by arm. 3) a gear is moved by another gear, which is moved by another gear.

4) The objection is not that these are not essentially ordered series (i.e. he agrees that they terminate), but he disagrees that they terminate with God.

5) For example, in the stone moved by stick moved by arm, the sequence of motion ends with the person's neural processing. This is the termination of the essentially ordered series. However, the neural processing is explained entirely by past states of the universe, including past neural states. This is an accidentally ordered series.

6) Thus the objection is that an essentially ordered series need not terminate with God, but with a member of an accidentally ordered series, which can then in principle extend back to infinity.

7) An essentially ordered series is one in which its members lack the ability to move the next member in the series in themselves, but only because that power has been derived from a previous member of the series. The claim is that if the series is infinite, none of its members have the power to initiate the movement. So the series must be finite.

8) The neural state is not the only basis of the arm's motion. The bulk of the energy for the muscle's contraction, and the brain activity, comes from the release of the energy when a phosphate group breaks off from an ATP molecule (for example, when the molecule with 3 phosphate groups plus water becomes ADP with two phosphate groups plus orthophosphoric acid). This arises from a quantum process; there is a certain cross section for the reaction which initiates this process.

9) The ATP molecule by itself has a final cause of an ADP molecule plus orthophosphoric acid, which requires the presence of water to come about. The physical series either ends here, or there are a few further steps. I don't know the chemistry and physics of this process, so I will assume that this is the last physical step in the series. However, my argument won't depend on this assumption.

10) But these two molecules do not have the power to initialise that reaction in themselves. This will be a quantum process, and therefore not determinate: there will be an amplitude for the reaction to happen, and another amplitude for it to not happen, and neither of the related probabilities will be 1 or 0. The argument here reduces to Aquinas' fifth way. The tendency needs to be actualised by some external power, and since we are at the end of the physical chain, that actualisation has to come from some supernatural power, i.e. God.

11) So I think that the objection wrongly identifies the nature of the series once we get to the arm.

12) The gear example will be similar: eventually it will reduce to the release of some chemical energy via a quantum process, and the one before that in the series will be God.

13) The objection related to the gravitational example (book resting on table resting on earth) has more force in my view. That's partly because we don't understand the quantum theory of gravity. In classical (Einstein's) gravity, the tendency of things to move downwards is ultimately caused by the tendency of matter to curve space time. However, the space time metric is an abstraction, so I would need to think a bit more about how to best express this.

7. Scott Lynch
Posted at 09:49:28 Monday June 3 2019

Response on Hierarchical Series

Dr. Cundy,

I think another common objection to the essentially ordered series is that no causes are concurrent with a continued particle state. I remember reading in your book that particle interactions must be instantaneous in order to maintain the Conservation of Energy. With this in mind, the objection goes like this:

When a hand pushes a stick which pushes a stone, the hand is not actually concurrent with the stone. In fact, if the stick was one light-year long (and we had sufficient strength), we would not be able to violate Special Relativity by pushing the stick. The force would propagate through the stick at the speed of sound in the stick. So what you have is a compression wave of a long chain of molecules. The compression wave is actualized by the force carrying photons generated by the electrons in your hands which excite the electrons in the first molecules in the stick which then re-emit new photons to the next closest molecules and so on until they reach the stone. This process occurs at the speed of sound in the stick. If you were to push the stick, you could wait a few centuries until the hand goes out of existence, and the force would still be carried along to the stone (about 60,000 years later). Therefore the causal power of the hand is not concurrent with the effect of the stone being pushed. Due to instantaneous particle interaction, you can say that the interactions themselves are concurrent and thus essentially ordered, but the propagation of the force carriers (photons, etc.) travel and maintain their properties without any concurring cause. Thus all of the pertinent objections from Newtonian inertia can be applied to any causal series, even apparent essentially ordered series.

Of course, this still neglects the fact that a photon must have a concurrent cause to join its formal and material causes if the Principle of Sufficient Reason is to be preserved. Why do the photons have this set of potentia and exist in these particular states? Is it a geometrical or logical necessity? It does not seem so due to the finite nature of photons. Is it a brute fact? If it is a brute fact, why do all photons have the same sets of potentia? Since brute facts are not necessary, there is no reason to suppose that a photon from one electron must transfer a repulsive force to another electron.

However, this is still somewhat unsatisfying (at least to many atheists) since it rests on logical and metaphysical relations as opposed to empirical observations.

One question I would ask you is do you know of anything in nature that can be mathematically represented as a persisting concurrent cause (that is if you take away the cause you take away the effect)? Do you think that composite beings (protons, atoms, molecules, etc.) are an example of something with a concurrent cause that persists for a non-instantaneous duration? Obviously the material cause of a free proton is the quarks and gluons exchanged between them. The formal cause would be (possibly among other things) the Hamiltonian of the free proton.

It seems to me that the real issue is the debate between event causality and substance causality. People who deny essentially ordered series (or say that they are merely a summary of chains of accidental series) most likely subscribe to an event causality view (which is why many people with that mindset believe Quantum Field Theory destroys the Principle of Sufficient Reason). On the flip side, Scholastic philosophers would say that all accidentally ordered causal series are merely an incomplete subset of a temporally related essentially ordered causal series. I would think that event causality advocates would try to describe the proton as really a set of fundamental particles (quarks and gluons) and their interactions. This would be thought of in nominalist terms (there is no “proton”, just merely this set of more fundamental interactions that we conveniently label “proton”). However, your discussion of water molecules, if it applies here, would seem to refute that. Is it the case that mathematical description of a proton is not merely the linear sum of the mathematical descriptions of the quarks, gluons, and their interactions?

As a side note, would this imply that chemistry is not wholly reducible to physics?


