This is the fifth post in a series
discussing Professor Lawrence Krauss' work *A Universe From Nothing*.
In the
first post, I gave an introduction to the underlying cosmology,
and a quick overview of the rest of his work. Since his discussion of
the cosmology was broadly correct, and very well presented, I am (largely)
skipping over those chapters and concentrating on those parts of the book which
I find problematic.

In the second post, I had a look at chapter 4, which is where things start to go awry. The chapter was about particle physics (my own speciality), and Krauss' claim was that the uncertainty principle allows matter to spontaneously emerge from the vacuum, with nothing before it, albeit only for very short periods. I suggested that his arguments for this were poor, and that he made a number of mistakes with regard to the physics.

In the third post, I had a look at his chapter 8, which discussed the anthropic principle, which Krauss uses to support his contention of a multiverse.

The previous post looked at chapter 9, which discussed Krauss' definition
of *nothing* to be the quantum vacuum, and his belief that various
philosophers who specifically rejected his proposition would have agreed
with him.

Now we come to ask whether the quantum vacuum is unstable. Or, as Krauss
phrases it *Is Nothing Unstable?*

One would have thought that the answer to that question is obviously `No.' Stability in physics refers to (for example) a particle's propensity to decay, or a system's propensity to move from a local minimum of its energy to the global minimum. For example, one might have a particle in a potential energy which varies from one location to another: the gravitational potential energy in Newtonian mechanics is proportional to the inverse of the distance from the source of the gravitational field. One can also arrange to have the potential energy oscillating, with various minima and maxima. In classical mechanics, what happens is that the particle rolls down to a local minimum of the potential energy, and it can't escape from it unless it is pushed with sufficient force to propel it over the next maximum. That local minimum need not be the smallest energy around. It's like a ball sitting at the bottom of a valley. The next valley might be deeper, and if they were connected the ball might roll down there, but it can't because there is a mountain in the way. But in quantum physics, there is a small chance that the particle will tunnel through the potential barrier to the next minimum of the potential energy. So what we have here is a system that seems to be stable, but then suddenly jumps to a very different configuration.
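
As a toy illustration of the tunnelling just described (this is my own sketch, not anything from the book: a square barrier in natural units, with the function name and parameters invented for the example):

```python
import math

def wkb_transmission(barrier_height, energy, width, mass=1.0, hbar=1.0):
    """Approximate WKB tunnelling probability through a square barrier.

    Natural units (hbar = m = 1) are assumed; the barrier is a simple
    rectangle of the given height and width, with the particle's energy
    below the top of the barrier.
    """
    if energy >= barrier_height:
        return 1.0  # classically allowed: no tunnelling suppression
    # Decay constant inside the barrier: kappa = sqrt(2m(V - E)) / hbar
    kappa = math.sqrt(2.0 * mass * (barrier_height - energy)) / hbar
    # Leading WKB estimate: T ~ exp(-2 * kappa * L)
    return math.exp(-2.0 * kappa * width)

# A particle with half the barrier's energy still has a small but
# non-zero chance of appearing on the other side:
print(wkb_transmission(barrier_height=2.0, energy=1.0, width=3.0))
```

The probability is exponentially small but never zero, which is why the classically stable configuration can suddenly jump to another one.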

But in both of these cases, you need to have something there in the first place. In the case of the decaying particle, there is the initial particle. In the case of the quantum tunnelling, you have the system in one local minimum of the energy, but also, more importantly, whatever generates the potential energy in the first place. So the obvious answer to the question `Is Nothing Unstable?' is no, it isn't.

But, of course, Krauss has redefined the word nothing to mean the quantum
vacuum, which is most definitely something (as pretty much everyone except
him understands the words *nothing* and *something*),
so he might have a chance of pulling this one off. Let's see what he has
to say.

## Quantum fluctuations

Back in chapter 4, Krauss suggested that in quantum physics, the vacuum is a seething mass of particles and anti-particles, which can pop into and out of existence continually. As I suggested in my review of that chapter, that idea is incorrect. If the vacuum were like that, it would be observable in gravitation, and the observed effect is tiny in comparison to the prediction. Neither is it required by the formalism of quantum field theory, which describes objects changing from one type of particle to another (a process of substantial change and generation and corruption of particles), but not particles emerging without a predecessor. But Krauss needs his model, and so he continues with the assumption that it is correct.

He offers two examples of particles appearing from nothing. The first is when you have two parallel plates carrying a large electromagnetic potential. Krauss claims that in this set-up, we can see an electron-positron pair pop into existence in the vacuum between the plates, and the electron will travel to the positively charged plate, the positron to the negatively charged plate, and this current can be measured. However, this set-up doesn't do what Krauss claims of it. The two charged plates will continually exchange photons. One of those photons decays into an electron-positron pair. This electron and positron then interact with further photons, some from the plates drawing them apart, and others from each other, drawing them together. On occasion, the photons from the plates will win, and we see the electron and positron drawn towards the plates. It is not necessary to invoke particles emerging from nothing to explain this phenomenon. Indeed, calculating the amplitude for this current uses photon decay, not creation from nothing. The electron and positron don't emerge from nothing; they emerge from a photon.

Krauss' second example is Hawking radiation around a black hole. This is a similar set-up, except instead of two charged plates and a strong electromagnetic field, we have a black hole and a strong gravitational field. Hawking radiation hasn't yet (to my knowledge) been observed. The theory is based on quantum theory in curved space time rather than a full theory of quantum gravity. In particular, it involves the thermodynamics of the black hole, and the equilibrium condition that if it is to remain in equilibrium it must emit as much as it absorbs. Thus Hawking radiation would only occur if the black hole exists within a thermal background. That background is provided by the cosmic microwave background (CMB). So, Krauss' example occurs when a photon from the CMB decays into a particle/anti-particle pair. One of those is sucked into the black hole, while the other escapes. Again, no particle emerges from nothing or empty space. It is all photon decay.

Krauss concludes his discussion of these examples with the statement

> Nevertheless, all these phenomena imply that, under the right conditions, not only can nothing become something, it is required to.

Not only has he got his physics wrong, he is brazen about it. Conditions only exist in the presence of something. Nothing, by definition, can't depend on external conditions, because to be affected by whatever those conditions are it would have to interact with them. And only something can interact. So what Krauss is saying is that if there is something present, then nothing is sometimes required to become something. But if there is something present interacting with it, then you haven't got nothing.

## CP violation

The next topic that Krauss discusses is the imbalance between matter and anti-matter in the universe. This is observed: our galaxy is obviously made of matter. But there is enough inter-galactic matter and interaction between galaxies in the early universe to convince us that the same must be true of all the other galaxies as well (of course, if the universe was predominantly anti-matter, we wouldn't notice: we would just swap the labels around, and call positrons matter and electrons anti-matter). It is natural to think that there ought to be equal balance between the matter and anti-matter in the universe. After all, a photon decays into an electron and positron, or quark and anti-quark, and the same is true for the other forces. So the observed imbalance between matter and anti-matter is something of a conundrum.

In particle physics, as well as the continuous symmetries such as
rotational, translational or Lorentz symmetry, people also consider
how the action transforms under various discrete symmetries:
*T*, or time reversal symmetry (changing the future into the past in
the equations), *P*, or parity symmetry (similar, but with the
spatial dimensions), and *C*, or Charge Conjugation symmetry
(swapping particles and anti-particles). These are all individually
symmetries of quantum electrodynamics and quantum chromodynamics
(the theory of the strong interaction). However, the weak nuclear force
violates charge conjugation and parity. It does, however, preserve the
combination of the two, known as CP. This is the more
technical statement of why we believe that matter and anti-matter ought to
be in balance. If the universe (together with its initial conditions)
were governed by CP symmetry, then that would imply that there ought to
be a complete balance between matter and anti-matter. Which there isn't.
The argument is a specific application of the general
principle that if a symmetry is observed in the action, then it should also
correspond to something in nature. (The principle breaks down if the
symmetry is broken by the vacuum configuration, called spontaneous
symmetry breaking, but that doesn't apply here.)

In the standard model, there is a small violation of CP symmetry. This occurs when we put together the theories of the electro-weak and strong nuclear forces. There are (we believe) three families of fermionic matter. You have the electron, neutrino, and up and down quarks. Then, for some reason, that pattern is repeated again at a higher mass. And then repeated once more at an even higher mass. So we have twelve fundamental fermionic particles in total. These particles are described by quantum states, and like all quantum states one can create superpositions of them. So we can describe a quantum state which is partly an electron and partly a muon, from the next family. To properly distinguish the particles, we need a preferred basis, and this basis is provided by the eigenstates of the operators representing the photon, W, Z and Higgs bosons in the case of the electro-weak theory, and the gluons in the case of the strong nuclear force. But there is no reason to suppose that the mass eigenstates in the electro-weak theory should precisely coincide with those of the strong nuclear force. And they don't. What this means is that when we discuss up quarks in the nuclear force calculations and up quarks in electro-weak calculations, we are not actually referring to quite the same object. The nuclear force down-quark is a superposition of electro-weak down, strange and bottom quarks. The difference is small, but measurable, and this slight difference between the strong and electro-weak theories violates CP symmetry, and leads us to expect a slight imbalance between the matter and anti-matter in the universe. (As far as we know, CPT symmetry is not violated anywhere.)
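
To make the standard-model piece concrete, the size of CP violation from quark mixing is conventionally captured by a single number, the Jarlskog invariant. Below is a rough sketch computing it from approximate mixing angles; the numerical values are illustrative round numbers of the right order (not precision fits), and the variable names are my own:

```python
import math

# Approximate quark mixing angles and CP phase (illustrative values,
# close to measured central values but not a precision fit).
theta12 = math.asin(0.225)    # Cabibbo angle
theta23 = math.asin(0.0405)
theta13 = math.asin(0.00368)
delta = 1.20                  # CP-violating phase, in radians

s12, c12 = math.sin(theta12), math.cos(theta12)
s23, c23 = math.sin(theta23), math.cos(theta23)
s13, c13 = math.sin(theta13), math.cos(theta13)

# The Jarlskog invariant J measures CP violation from quark mixing.
# J = 0 would mean no CP violation from this mechanism at all.
J = c12 * c23 * c13**2 * s12 * s23 * s13 * math.sin(delta)
print(f"Jarlskog invariant J = {J:.2e}")
```

The point is the smallness of the result (of order 10^-5), which is why this mechanism falls so far short of explaining the observed matter/anti-matter imbalance.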

However, the standard model CP violation is not enough to explain the observed matter/anti-matter imbalance. It is assumed that there is some additional beyond the standard model physics which will account for the missing CP violation. This is quite plausible. But it remains one of the biggest unsolved problems in particle physics.

Krauss states that if the universe began sensibly, with an equal amount of matter and anti-matter, then we wouldn't be around to ask the wherefores of it, since all the matter and anti-matter would annihilate each other, and all there would be would be a vast sea of photons. Nothing is obviously symmetric between matter and anti-matter, so for a universe from nothing, we might expect this to be the case. One would think that this is a problem for Krauss' thesis. However, CP violation allows one to begin with equal amounts of matter and anti-matter, and end up with a matter dominated universe. All we need is a small imbalance between matter and anti-matter to arise, and then all the anti-matter will annihilate with some matter, leaving just the excess matter and a fistful of photons (the microwave background). Krauss dates the origin of this asymmetry to the moment of creation. This can't be the case: CP violation is an ongoing process, observed to this day, and therefore can't be considered to have happened in a moment. The asymmetry needs to correspond to approximately one part in a billion more matter than anti-matter. Known CP violations don't allow for this, but Krauss isn't overtly worried by this because the standard model is almost certainly incomplete (and here I agree with him).

The next question is why a matter dominated universe rather than an anti-matter dominated universe? If we start with an equal amount of matter and anti-matter, then it could go either way. Imagine that you manage to balance a pencil on its point. It might be stable, but the smallest perturbation will cause it to fall down. But we can't predict which way it will fall down. Presumably, Krauss muses, even if the laws of physics are fixed, the ultimate direction of the asymmetry between matter and anti-matter was driven by some random initial condition. (Recall that randomness isn't a thing and therefore can't be a cause; when we describe something as random we are either being lazy or saying that we don't know the causes.)

This idea that an initially symmetric universe could collapse into one which is either matter or anti-matter dominated is Krauss' motivation for the title of the chapter: nothing is unstable. He assumes that the universe starts in a featureless state, where no matter existed: the universe is a vacuum. A second possible state is one where matter exists, with slightly less symmetry but lower energy. There is a quantum tunnelling to this second state, and the energy released emerges in the form of particles. Then, due to baryon number violating and CP-violating processes, we get the matter/anti-matter imbalance.

This doesn't seem very plausible to me. A featureless state would have energy zero, so it can't tunnel into a lower energy state. Indeed, (the quantum mechanical definition of) energy only makes sense in the context of matter: it is a quantum number of a particular particle. It doesn't make sense to discuss the (quantum mechanical) energy of nothing or empty space, because nothing is not a particle in an eigenstate of the Hamiltonian and therefore can't have an energy associated with it. So this means that to have the initial symmetric system in a higher energy state than the one where the symmetry is broken, Krauss needs something there to carry the energy. Thus he is not starting from a state of nothingness.

Plus, of course, the mechanisms of CP violation involve particle interactions, such as kaon decay. To get from that to baryogenesis, you also need to make use of electroweak anomalies. Both of these interactions assume that you have some starting state; baryogenesis can't start from nothing.

Mechanisms for CP violation and baryogenesis are certainly known. They can explain matter/anti-matter asymmetry. But they don't allow one to conclude that nothing is unstable, because all known mechanisms for this assume that you have to start with something. The best that they can say is that certain somethings are unstable. But then we already knew that.

## Quantum gravity

Not much is known about the quantum theory of gravity, or how to quantise general relativity. I personally believe that we should be focussing more attention on quantum field theory in curved space time. The more conventional approach, however, is to introduce a new quantum particle, the graviton, as the conveyor of the gravitational force, just as the photon carries the electromagnetic force. The naive way of doing this is not mathematically self-consistent (not renormalisable), but there are various more sophisticated models which might work.

The naive way of performing quantum gravity is to expand the space time metric as the Minkowski metric plus a perturbation. This perturbation is then quantised and called the graviton, the mediator of gravity. The process of quantisation involves upgrading the graviton to a creation operator, which (when placed into a time evolution operator) will be used in the description of how gravitons are emitted from other matter, or decay into other matter.

String theory supposes that our universe is embedded in a higher (10 or 11 dimensional) space, and that the fundamental objects are one dimensional strings. The particles we know about are vibrational modes of those strings. In addition to the strings, there are higher dimensional surfaces called branes. Among the string vibrations there emerge massless spin 2 modes, which are associated with the graviton. These are in turn quantised.

Loop quantum gravity sticks with our four dimensional universe. It treats the affine connection (which is the object used to transform the differential operator so that it can be expressed in a different reference frame) as the thing to quantise. So, again, it is upgraded into a creation or annihilation operator.

I list these to make a point: in none of these approaches, nor any others that I know of (although I admit that I am not an expert on quantum gravity) is space time itself replaced by a creation or annihilation operator. I am not even sure what doing so would imply. One's location in space is a property of matter; as such it is something which subsists in a material substance, not a substance itself. Key to quantum physics are the energy and momentum operators, related to the differential operators in time and space. For a quantum particle to have energy, it presupposes that you can differentiate the field representing that particle with respect to time. Yet if time itself were a quantum field (capable of undergoing creation and annihilation), then you couldn't differentiate it with respect to itself. That quantum field would not be able to carry energy, leaving us with something entirely different from the sort of quantum field theory that describes matter. But if it is very different, the arguments from analogy which Krauss uses (since he doesn't have a theory of quantum gravity, he can only use analogy) break down. An analogy relies on similarity, not difference. And if something other than space and time are the quantum field, then how can you use an analogy with quantum field theory to say that space and time emerge from nothing?

Krauss, however, disagrees:

> General relativity, as a theory of gravity, is, at its heart a theory of space and time. As I discussed in the very beginning of this book, this means that it was the first theory that could address the dynamics not merely of objects moving through space, but also how space itself evolves.

> Having a quantum theory of gravity would therefore mean that the rules of quantum mechanics would apply to the properties of space and not just to the properties of objects existing in space, as in conventional quantum mechanics.

I broadly agree with the first of these paragraphs. But the second one seems to be way off. Firstly, quantum field theory isn't fundamentally concerned with the properties of objects, but how those objects come in and out of existence during interactions with other objects. If we transpose this to space, Krauss' quantum theory of gravity would be mainly about a description of how space itself would come into and out of existence. And this is certainly how he wants to bring the argument forward in subsequent paragraphs.

In quantum field theory, there are two types of operators. Firstly,
there are those (inherited from quantum mechanics) which are used to
describe properties of matter: the location operator *x*, the momentum
operator *-iℏ d/dx*, the spin operator and so on. Secondly, there are the
creation and annihilation operators for the fermions, photons and so on. Things
like location, momentum and spin are related to the indices on the
creation operators.
This distinction is of particular philosophical importance; it matches
the distinction between substances (which exist in themselves) and
properties (which subsist in substances). To create
space, one would have to convert *x* into a creation
operator. I know of no theory of quantum gravity that does this.

Krauss next discusses Feynman's path integral approach to quantum physics.
Here, one sums over all possible trajectories or paths for a particle,
even those which are classically forbidden. I'm not quite sure what
Krauss means by "classically forbidden." The natural interpretation is
that he is referring to those paths
which vary from the path designated by the principle of least action.
Another might be that it includes "off-shell" paths. In classical
special relativity, the energy and momentum of a particle are related by
the dispersion
relation *E ^{2} = m^{2} c^{4} + p^{2}
c^{2}*. In field theory, it is a bit more complicated: one
integrates over all energy and momentum, although the propagator has
poles (or divergences) at the energies implied by the dispersion relation,
meaning that those momenta dominate.
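
For reference, the dispersion relation, and the propagator whose poles encode it, can be written (in natural units with *c = 1*, which is my own simplification of the formulas above) as

$$E^2 = \mathbf{p}^2 + m^2, \qquad \Delta_F(p) = \frac{i}{p^2 - m^2 + i\epsilon} = \frac{i}{E^2 - \mathbf{p}^2 - m^2 + i\epsilon},$$

so the propagator diverges precisely when the energy and momentum satisfy the dispersion relation (the "on-shell" condition), which is why those momenta dominate the integral.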

Feynman's approach has the advantage that the central feature in calculating the weights in the sum, the action, is invariant under coordinate transformations. What this means is that one can use the same basic approach in any reference frame; we don't have to specify how to label space time points before performing the calculation. This formulation is no doubt even more useful in general relativity, where we consider even more possible coordinate transformations and ways to label space time. In a quantum theory of gravity, we would presumably have to sum not only over all the possible particle trajectories, but also over all the ways in which space time can be curved. So far, this seems reasonable to me.

But now Krauss starts to move beyond reasonability. Using the analogy that particles can pop into existence in quantum field theory out of nothing (which I have repeatedly stated is untrue), doesn't this imply that in a quantum theory of gravity, space itself can pop into and out of existence from nothing? No, in a quantum theory of gravity, the graviton, or the particle representing the affine connection, would come into and out of existence (and not from nothing). Krauss asserts that unless one can come up with a good reason for excluding this possibility, then, since everything that can happen must eventually happen, we must expect that small regions of space must pop into and out of existence.

This argument has numerous flaws. To start with, he confuses two different types of possibility. Firstly, there are possible expressions of the laws of physics. Secondly, there are possible events given the truth of a particular expression of the laws of physics. The first of these represents our uncertainty -- one expression of the laws of physics is true, but we don't know which one. The second of these refers to the frequency at which events can occur. It is true that if the laws of physics were such that space could come into existence from nothing, then it would happen and, given an infinite time, it must happen. But it is not true that because such an expression of the laws of physics that might allow for this to occur is possible (in that we currently know no reason opposing it), then it must happen. It must happen only if there are an infinite number of universes, each sampling a different possible expression of the laws of physics. But we are not interested in what happened in some purported alternative universe completely disconnected from our own; we are interested in what is happening in this universe. And when there is only a sample of one object, one can't get from a premise that it's possible, to a conclusion that it must happen.

And I would be very surprised if it were possible. All we have got from Krauss is a suggestion which contradicts every model of quantum gravity I know of, no proposal for an alternative model which might explain it, and a complete misunderstanding of quantum field theory -- particles do not pop into existence from nothing. Krauss' speculations are of no more use than an idle daydream.

Krauss' next problem is that just as his virtual particles of quantum field theory pop into and out of existence for short time periods, the same will be true for his virtual areas of space. These areas of space would invariably immediately collapse -- not much use for explaining the origins of our universe. But he has a proposed solution.

He starts by asserting that a zero energy virtual photon is not subject to the uncertainty principle and its restrictions on how long the particle can pop into and out of existence: it can last forever. Of course, a zero energy photon (if any such object exists) wouldn't interact with anything, since every interaction involves the transferral of either energy or momentum, and most usually both. This is also a misuse of the uncertainty principle (which is not about the length of time that particles can exist, but the spread of the wavefunctions of existent particles); although in this case his conclusion is correct: a zero energy particle would endure forever.
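
The heuristic Krauss relies on here is the energy-time uncertainty relation:

$$\Delta E \, \Delta t \gtrsim \frac{\hbar}{2} \quad\Longrightarrow\quad \Delta t \sim \frac{\hbar}{2\,\Delta E} \longrightarrow \infty \ \text{ as } \ \Delta E \to 0.$$

Read this way, a fluctuation of vanishing energy can persist indefinitely; but as noted above, this is a loose reading of what is really a statement about the spread of wavefunctions, not about particle lifetimes.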

His analogy is that a zero energy universe might emerge without collapsing immediately. Such a universe would be closed (i.e. eventually it will collapse in on itself). This is inconsistent with our flat universe, but if it suddenly shifts to a period of inflation, then that is enough to make anything flat. So closed universes popping in and out of existence plus inflation might give us the universe we observe.

There are, of course, various problems with this. Firstly, and most obviously, he requires that the universe would be both closed (zero total energy) and flat (non-zero total energy). Secondly, his argument relies on his interpretation of the uncertainty principle. The uncertainty principle is applicable to single particle wavefunctions. The universe is not a single particle. Thirdly, given that space and time are interconnected, it is not just space that he requires to pop into existence from nothing, but time. But the uncertainty principle is derived from the relationship between the energy and temporal wavefunctions. It is only applicable to particles in time, not to time itself, as Krauss demands (if his universes are to obey Lorentz symmetry). Finally, he once again confuses energy as it is used in cosmology (the stress energy tensor) with energy as it is used in quantum physics (eigenvalue of the Hamiltonian operator). A closed universe is one with a zero cosmological energy. But he jumps from that to discussing quantum physics, i.e. he has transposed to a zero quantum mechanical energy. But energy is always positive in quantum physics. The only way for a universe to have total quantum mechanical energy of zero is if everything in it has energy of zero, which is clearly not the case.

So, by misusing quantum field theory, inventing a new theory of quantum gravity, and despite claiming that the uncertainty principle allows something of zero energy to last for ever, he proposes that a zero-energy closed universe can pop into existence for short times. However, they would invariably collapse within a Planck time of emerging (what does he use as a scale for measuring time here, since time would come into existence with the universe?). But add in inflation (where did the inflaton, the hypothetical particle that drives inflation, come from?) and he gets a universe which lasts a long time and has the appearance of being infinitely big and flat. So having spent the first part of the book laying out his case (and with good physics) that the universe is flat, he now demands that it isn't.

If inflation doesn't cut it, then Krauss has a couple of alternative
proposals. The first is a proposal by Vilenkin, the second the
Hartle-Hawking no boundary universe. These two approaches are similar,
so I'll just discuss Vilenkin's earlier paper. It starts from the
standard first approximation model for the universe: the
Friedmann-Robertson-Walker metric describes a perfectly
homogeneous and isotropic universe, i.e. one which is exactly the same in
every place and direction. Obviously the universe isn't like this, but
this is actually a good approximation, and usually used as the starting
point of cosmological calculations. One key part of the solution
is the scale factor, denoted as *a(t)*, related to the distance
between two objects over time. Vilenkin noticed that a solution to
this universe led to a behaviour of *a(t)* which resembled that of a
classical particle approaching a potential barrier. This solution did not
permit values of *a* smaller than the inverse of Hubble's constant,
as long as time is treated as a real number.
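
To sketch the point (this is my own simplified reconstruction, using a closed universe dominated by vacuum energy, not Vilenkin's full treatment): the Friedmann equation in that case reads

$$\dot a^2 + 1 = H^2 a^2 \quad\Longleftrightarrow\quad \dot a^2 + U(a) = 0, \qquad U(a) = 1 - H^2 a^2,$$

which is formally the equation of a zero-energy particle in the potential *U(a)*. Since the left-hand side requires $\dot a^2 \geq 0$, the classically allowed region is $a \geq 1/H$: values of the scale factor below the inverse of Hubble's constant are forbidden, exactly as for a particle outside a potential barrier.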

One trick used by physicists is to rotate time to the imaginary axis,
replacing *t* with *it*, where *i ^{2} = -1*.
You might wonder what the point of doing this is. What it does is
convert the hyperbolic space time of special relativity to a Euclidean
space time, which tends to be an easier place to perform certain
calculations. So we rotate to Euclidean space, do the calculations, and
then analytically continue them back to Minkowski space. What Vilenkin noticed was that
in this rotated time, the scale factor *a* also resembled a classical particle that hit a potential barrier, only this time only values smaller than the inverse of Hubble's constant were permitted. Instead of a universe where space expanded in time, this universe would describe a system where space and time are described by a four dimensional sphere (recall that time is now equivalent to just another space dimension).
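
In the same simplified model as above (again my own sketch), the rotation *t → -iτ* flips the sign of the kinetic term, so

$$\left(\frac{da}{d\tau}\right)^2 = 1 - H^2 a^2, \qquad a(\tau) = \frac{1}{H}\cos(H\tau),$$

and now only $a \leq 1/H$ is allowed. The resulting Euclidean geometry $ds^2 = d\tau^2 + a(\tau)^2 d\Omega_3^2$ closes up into a four dimensional sphere of radius *1/H*.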

Vilenkin's proposal, as far as I can make out, is that we treat *a*
as a quantum mechanical wave function. There are two stable states,
separated by a potential barrier. But quantum mechanical particles can
tunnel through potential barriers. Thus if we started with this static,
four space-dimensional spherical universe, there would be a small chance
that it
would suddenly switch to the three space and one time dimensional
expanding universe that we are all familiar with. And this, Vilenkin
claimed, signified the emergence of the universe from nothing.

Of course, there are issues with this approach. Firstly, there is little
motivation to treat *a*, one part of the metric, as a
quantum mechanical particle. There are further problems when we try to
upgrade it to a quantum field theory, particularly concerning the
renormalisability (the avoidance of infinite cross sections). There are further
philosophical problems with switching to an imaginary time coordinate:
fine as a mathematical trick to ease computation, much harder to
accept as a feature of the universe. Time is, after all, an observable
coordinate of the universe, flowing in one direction. We would need a
tunnelling of time itself from imaginary to real, but time is not
a quantum mechanical particle and therefore cannot exhibit
quantum mechanical tunnelling. All observables are represented by
real numbers. Time is something we observe. And then there is
the major problem that the starting spherical universe in Euclidean space
time is not nothing. It is another universe with a different geometry.
So instead of a universe from nothing, we have a universe from another
universe.

So this picture manifestly fails to generate space time from nothing. Even if we grant its assumptions, it doesn't begin with nothing.

Krauss asserts that this mechanism will allow universes to be generated from nothing, which he now defines as an absence of space and time. But Vilenkin's spherical Euclidean universe, which is the starting point of his tunnelling event, is not the absence of space and time; space and time are still there, just with a different geometry.

Krauss admits that this proposal does not prove his thesis. But he thinks that it is a possibility. The problem is that there are too many assumptions, too much bad philosophy, and too much bad physics for me to take his belief that nothing is unstable seriously.

## Conclusion

I find Krauss' argument in this chapter to be really poor. He continues
to compound his error of redefining nothing, but goes further than this.
He builds on his errors in the interpretation of particle creation and
the uncertainty principle from chapter four. He relies on applying an
analogy from his interpretation of quantum physics to quantum gravity,
but his argument only makes sense if space and time are quantum fields
themselves, and no prototype quantum gravity theory I know of asserts
that. He then adopts the speculative theories of Vilenkin, which also rely
on an analogy and the idea that some specific parameter should be
quantised. But that parameter is not quantised in any quantum gravity
approach I know of, and even if it were, it would not show that nothing is
unstable, but that something which is most definitely a *something*
is unstable. And he tries to tie all this together with various arguments
from analogy and other logical fallacies. In short, the arguments of this
chapter are pretty much worthless.

Next time, Krauss tries to get the laws of physics from nothing.

**Reader Comments:**

**Reification of space and time**

Dear Scott,

thanks for your comment. I agree with it. Theoretical physicists work entirely with these abstractions. It is easy to forget that the abstractions are only (partial) representations of reality rather than reality itself.

The space time metric is a classic example. It plays a key role in geometry and thus general relativity. One can't make precise calculations in cosmology, or even of the orbits of planets, without it (OK, there is Newton's theory of gravity, and Aristotle's theory of quintessence, but neither of them is as precise as GR). What we observe is the path of the planet. We can calculate that "If space and time can be represented by a geometry with this particular metric and a particular coordinate system, then the path of the particle will follow a geodesic of the metric." And "If the distribution of matter is represented by this stress energy tensor in a geometrical system, then the metric of that geometry will be related to the stress energy tensor by this equation." And then we map back our predictions to physical space time. The metric plays a key role in the representation. As does the coordinate system (indeed the metric is dependent on the choice of coordinate system) -- which doesn't correspond to anything physical. The question is, is there something corresponding to the metric in physical space time? The first role of the metric in the geometry is to present a natural definition of distance, which is physical enough. But distance is a relationship between two different substances, not a substance itself.
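The two conditional statements above correspond to the geodesic equation and the Einstein field equations, which I add here in standard GR notation for concreteness (the Christoffel symbols $\Gamma^\mu_{\nu\rho}$ are built from derivatives of the metric):

```latex
\frac{d^2 x^\mu}{d\lambda^2}
  + \Gamma^\mu_{\nu\rho}\,\frac{dx^\nu}{d\lambda}\frac{dx^\rho}{d\lambda} = 0,
\qquad
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} .
```

The first determines the path of a particle given the metric; the second relates the metric (through the Einstein tensor $G_{\mu\nu}$) to the stress energy tensor $T_{\mu\nu}$.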

But then many physicists try to turn it into something which works as a substance does. Because we have one symbol to represent the electromagnetic field, and another symbol to represent the gravitational field, then let's forget what those symbols are meant to physically represent, and treat them as though they are the same sort of thing. Once the symbols become the only reality you are interested in, you can do whatever you like with them. Ultimately, it will lead you down a blind alley, but will nonetheless be a wonderful source of interesting mathematical papers. Certainly reification plays a central role in this line of thought.

**New Study Claims Measurements are Subjective**

Hello Dr. Cundy,

I recently saw an article appear in my news feed claiming this experiment proves that quantum facts are subjective. I thought you might be interested.

https://advances.sciencemag.org/content/5/9/eaaw9832

**Subjective quantum facts**

I had a glance over that paper. My reaction was that it is an interesting study, but it doesn't really demonstrate anything new. The article is based around the "Wigner's friend" thought experiment. Here we have two observers of an entangled system that was previously in a superposition. Person A observes the system, and thus has a definite experimental result.

Person B doesn't make a measurement, and thus for him the system remains in superposition, with the property indeterminate. For Person B, Person A's knowledge becomes entangled with the quantum state. In the experiment, Person A and Person B are replaced with various measurement devices, but (it is claimed) this doesn't affect the outcome. A Bell-type inequality is derived that would be satisfied in the classical case, and is shown to be violated, consistent with the predictions of quantum physics.
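The paper's inequality is adapted to the Wigner's-friend setup, but the flavour of a Bell-type violation can be illustrated with the simpler, textbook CHSH inequality for a singlet pair. This is an illustration only, not the paper's observable; the `correlation` function below is the standard quantum prediction for spin measurements on a maximally entangled state.

```python
import math

def correlation(a, b):
    # Quantum expectation value E(a, b) for spin measurements along
    # directions at angles a and b on a singlet (maximally entangled) pair.
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    # CHSH combination: any local hidden-variable theory satisfies |S| <= 2.
    return abs(correlation(a, b) - correlation(a, b2)
               + correlation(a2, b) + correlation(a2, b2))

# Optimal measurement angles: 0 and pi/2 on one side; pi/4 and 3*pi/4 on the other.
S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(S)  # ≈ 2 * sqrt(2) ≈ 2.828, violating the classical bound of 2
```

The quantum value 2√2 exceeds the classical bound of 2, which is the same structural point the experiment makes in its more elaborate setting.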

The problem with saying that this experiment represents anything new is that its results agree with the standard mathematical formulation of quantum physics. It just offers further evidence that quantum physics is correct. But all the standard interpretations of quantum physics are consistent with that mathematical framework. This includes those in which there is an objective reality behind the universe. The authors themselves admit this, when they state,

Modulo the potential loopholes and accepting the photons’ status as observers, the violation of inequality (2) implies that at least one of the three assumptions of free choice, locality, and observer-independent facts must fail. The related no-go theorem by Frauchiger and Renner (5) rests on different assumptions, which do not explicitly include locality. While the precise interpretation of (5) within nonlocal theories is under debate (21), it seems that abandoning free choice and locality might not resolve the contradiction (5). A compelling way to accommodate our result is then to proclaim that facts of the world can only be established by a privileged observer—e.g., one that would have access to the “global wavefunction” in the many worlds interpretation (22) or Bohmian mechanics (23). Another option is to give up observer independence completely by considering facts only relative to observers (24), or by adopting an interpretation such as QBism, where quantum mechanics is just a tool that captures an agent’s subjective prediction of future measurement outcomes (25). This choice, however, requires us to embrace the possibility that different observers irreconcilably disagree about what happened in an experiment.

Quantum Bayesianism (QBism) is closest to my own interpretation. Here the wavefunction represents knowledge of the underlying state of the system (albeit one that parametrises uncertainty as an amplitude rather than a classical probability, and acknowledges that properties can be undetermined in certain eigenstates). I am not sure, however, that this position implies subjectivity. Amplitudes are always conditional (if these circumstances occur, then those outcomes might result). Any observer can calculate the same amplitude given the same set of premises. The apparent subjectivity arises only because person B has less knowledge than person A: he works from a different set of premises, and so is less sure about the result.

This is no different from any description of a measurement, whether quantum or classical. It is always necessary to describe the circumstances in which the observable was measured alongside the observable. One person sees something as blue; the other sees it as red. But when we include the relative velocities of the observers to the object, we see that the measurements are consistent with each other. The situation is similar here. The statement is that the amplitude for the system calculated based on these premises (which include the result of person A's measurement) is this, while the amplitude calculated from those premises (which exclude the result of the measurement) is that. There is nothing subjective about this statement. Nor does it mean that the underlying objective reality is undefined: that still remains whatever it is. We can only narrow it down, and make the amplitudes describing it more definite, by performing the experiments.
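The blue/red example can be made concrete with the relativistic Doppler formula: both observers' measurements are consistent once the relative velocity is included among the premises. A minimal sketch (the numbers are illustrative, not from the source):

```python
import math

def doppler_wavelength(lam_emitted, beta):
    # Relativistic Doppler shift along the line of sight for a source
    # receding (beta > 0) or approaching (beta < 0) at speed beta = v/c.
    return lam_emitted * math.sqrt((1 + beta) / (1 - beta))

# An observer receding at 0.3c sees 450 nm (blue) light shifted toward red:
observed = doppler_wavelength(450.0, 0.3)
print(observed)  # ≈ 613 nm
```

The two observers report different colours, but given the shared premise of the relative velocity, each can compute what the other sees: the disagreement is in knowledge of circumstances, not in the underlying reality.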

I should add that the same is true for other interpretations of QM that hold to an objective reality, such as Bohm's or the many worlds interpretation. They also lead to the standard mathematical formulation, and thus predict this experimental result. Thus one can't say that the result shows the subjectivity of quantum facts; only the subjectivity of the knowledge of experimental measurements.


**Reification of Space and Time**

Dr. Cundy,

Do you think one of the roots of Dr. Krauss’ flawed argumentation is his seeming reification of space and time (or, more generally, his continual committing of the Fallacy of Misplaced Concreteness)? It seems like he is treating space and time as substances in and of themselves that could pop in and out of existence like particles. He even seems to reify energy in this way (as if pure energy could exist without an underlying substance).

On the other hand, your brief discussion of trying to come up with a QFT in curved space sounds more Aristotelian. Space is not itself a thing independent of the particles that exist in it. To study a QFT in curved space would simply be to acknowledge that particles do not interact in a way that is accurately represented by a purely Euclidean geometry. It seems that you are suggesting (correct me if I am wrong) that if we could abstract a curved space metric from particle interactions, we could come up with a gravitational QFT without necessarily positing new substances (such as the graviton).

Dr. Feser talks a lot about this general fallacy of misplaced concreteness in *Aristotle’s Revenge*. I’d be interested to hear your thoughts on its application here.