## Introduction

I am having a look at different philosophical interpretations of quantum physics. This is the fourth post in the series. The first post gave a general introduction to quantum wave mechanics, and presented the Copenhagen interpretations. I have subsequently looked at spontaneous collapse models and the Everett interpretation. Today it is the turn of the pilot wave interpretation.

The pilot wave model is often discussed by philosophers of physics, although it has considerably less support among physicists themselves. It was first proposed by de Broglie in the 1920s, revived by David Bohm in the 1950s, and taken up by Bell in the 1960s. Since then it has received various refinements, attracted criticism, and seen attempts to address that criticism; it remains an active area of research with its own proponents. I cannot claim to be an expert on it, but I will present the model here as best I can, along with the main reasons why people both like it and reject it.

The pilot wave model, like the Everett model, is deterministic, and thus allows us to continue the philosophy of classical physics into quantum physics with only minor modifications. The underlying idea is that the beables in quantum physics can be split into two different classes of objects. Firstly there are the particles, and these are the things we observe: electrons, protons, and so on. Then we have an underlying pilot wave. This is not directly observed, but its effects can be seen through its interactions with the particles. Because the pilot wave cannot be directly observed, its initial state in any experiment is unknown. This leads to the apparent indeterminacy we observe in experiment. It is not an indeterminacy fundamental to physics, whether through wave function collapse (or something similar) or an indeterminate dynamics; it merely arises from our lack of knowledge of the full physical situation. This is similar to how uncertainty enters into classical physics. When we toss a Newtonian coin, the underlying physics is deterministic. But because the system is so finely balanced, and we don't know the details of the force applied to the coin, or the air movements, or maybe even the initial state, we can only assign probabilities for each possible result.

The main criticism of the pilot wave model, and the reason it is rejected out of hand by most physicists, is that it is apparently inconsistent with relativistic quantum physics. This is a big deal. A viable philosophy of quantum physics needs to fulfil two roles: firstly, to reproduce the best physical theories to within experimental tolerance (which, at the moment, means the standard model of particle physics, a relativistic quantum field theory); and secondly, to be philosophically complete and coherent, leaving no gaping gaps in its explanations. I don't think anyone doubts that the pilot wave model satisfies the second criterion, which puts it, on that count, ahead of most suggested interpretations. But if it is inconsistent with the physics then it is a dead end. Bohm's formulation of the theory is fairly obviously inconsistent with the physics, as it is based on the non-relativistic Schroedinger equation, which is not the correct model of nature. There have been attempts to modify the pilot wave approach to bring it back into agreement with the physics, but so far (to my knowledge) none of these have gone far enough to actually reproduce the standard model. Most of these attempts call back to de Broglie's original pilot wave model.

## The non-relativistic formulation

The pilot wave interpretation is, at its heart, a reformulation of the Schroedinger equation to introduce a second dynamical variable. I here outline Bohm's approach, which shows both the advantages and disadvantages of the interpretation. We start with the non-relativistic wave mechanics Schroedinger equation.

The starting point of the interpretation is to parametrise the wavefunction in terms of two real fields, *R* and *S*,

$$\psi = R\, e^{iS/\hbar}.$$

In terms of this decomposition, the left hand side of the Schroedinger equation then becomes

$$i\hbar \frac{\partial \psi}{\partial t} = \left(i\hbar \frac{\partial R}{\partial t} - R\frac{\partial S}{\partial t}\right) e^{iS/\hbar},$$

while the right hand side becomes

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + V\right)\psi = \left(-\frac{\hbar^2}{2m}\left[\nabla^2 R + \frac{2i}{\hbar}\nabla R\cdot\nabla S + \frac{i}{\hbar} R\, \nabla^2 S - \frac{R}{\hbar^2}(\nabla S)^2\right] + VR\right) e^{iS/\hbar}.$$

Comparing real and imaginary parts of the equations then gives two equations, one describing the evolution of *R* in time,

$$\frac{\partial R}{\partial t} = -\frac{1}{2m}\left(R\,\nabla^2 S + 2\nabla R\cdot\nabla S\right).$$

The other gives the evolution of *S* in time,

$$\frac{\partial S}{\partial t} = -\frac{(\nabla S)^2}{2m} - V + \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}.$$

Now we need to interpret these equations, and if we manipulate them further a natural picture emerges. Firstly, we write a new variable *P* as the square of *R*,

$$P = R^2.$$

In the Copenhagen interpretation of the Schroedinger equation, this would correspond to the probability density. Here, of course, it is just an underlying real field, or potential. The time evolution equation for *R* can be re-written in a simple form in terms of *P*,

$$\frac{\partial P}{\partial t} + \nabla\cdot\left(P\,\frac{\nabla S}{m}\right) = 0.$$

This takes the form of a conservation equation, with *P* representing a charge density and *P∇S/m* as some sort of probability current.
For the equation describing the motion of *S*, we note that the term dependent on *R* can be expressed as a potential *Q*,

$$Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}.$$

This leads to an equation of motion for *S* which is starting to look like a classical equation of motion,

$$\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0.$$

If we identify *∇S* as a momentum,

$$\mathbf{p} = \nabla S,$$

then, using the convective derivative,

$$\frac{d}{dt} = \frac{\partial}{\partial t} + \frac{\mathbf{p}}{m}\cdot\nabla,$$

the equation describing the motion of *S* is then written as

$$\frac{d\mathbf{p}}{dt} = -\nabla\left(V + Q\right).$$

And this is basically Newton's equation of motion, with the particle momentum changing deterministically under a potential. The potential is split into two parts: one arising from the external potential input into Schroedinger's equation at the very start, and the other arising from the second field, *P*.

So far, I haven't actually done anything. This is just a reformulation of
the non-relativistic Schroedinger equation. It describes exactly the same
physics. The mathematics is entirely equivalent; it is just a different way of
expressing the same theory. One can still interpret *P* as a probability density
associated with the wavefunction, introduce wavefunction collapse, and
get back to the Copenhagen interpretation.

However, this reformulation opens up another possibility. What if we interpret *R* or *P* as a classical field rather than relating it to a probability density? What if we say that in addition to this field, there is also a particle -- the thing we observe -- with a momentum *p* and a location conjugate to it (following the approach used in Hamilton's reformulation of classical mechanics)? Then we have an entirely deterministic physics, which obeys equations of motion similar to those of Newtonian mechanics. The *R* field carries waves which, unusually, do not convey energy or momentum -- that is transported solely by the particle. It is a little bit different from Newtonian mechanics, obviously, but sufficiently similar that the same philosophy would carry over directly. All we have done is introduce a new field which in part directs the motion of the particle, and whose changes are influenced by the motion of the particle; but classical mechanics has electric and magnetic fields without any problems, so there is nothing troublesome about this.

We need one more thing to create a full interpretation of quantum physics: an explanation of the apparent probabilistic nature of quantum measurements. The interpretation is the same as for apparent randomness in classical physics, for example when we toss a coin. The result is unpredictable because we do not know all the underlying details of the coin toss: variables such as the initial configuration of the coin, the precise force applied to it, and so on. What we have to do is create a realistic model of these hidden variables, assign a probability to each of them, and solve the equations of motion for each possibility. That gives us a probability distribution for the final result.
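This procedure can be sketched in a few lines of Python. The coin model below is deliberately crude, and all the numbers (spin rates, flight times) are made up for illustration; the point is only that a fully deterministic rule plus a probability distribution over unknown initial conditions produces probabilistic-looking outcomes.

```python
import random

def coin_result(omega, t):
    # Deterministic rule: the coin lands heads if it completes an
    # even number of half-turns during its flight.
    half_turns = int(omega * t / 3.141592653589793)
    return "heads" if half_turns % 2 == 0 else "tails"

def simulate(n_tosses, seed=0):
    # Our ignorance of the exact spin rate and flight time is modelled
    # by sampling them from broad distributions (hypothetical numbers).
    rng = random.Random(seed)
    heads = 0
    for _ in range(n_tosses):
        omega = rng.uniform(50.0, 150.0)   # spin rate, rad/s
        t = rng.uniform(0.3, 0.7)          # flight time, s
        if coin_result(omega, t) == "heads":
            heads += 1
    return heads / n_tosses

print(simulate(100_000))  # close to 0.5
```

Each individual toss is fully determined; only our coarse knowledge of the initial conditions makes the aggregate look like a fair 50/50 chance.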

Bohm's pilot wave model interprets the apparent probabilistic nature of experiments in a similar way. There are hidden variables which we do not know -- the precise functional form of the pilot wave and the initial conditions of the particle -- which means that we cannot predict with certainty the precise final result of the measurement. Instead, we assume the equilibrium condition: that these hidden initial conditions can be parametrised according to the probability distribution *P*. If this is so, then the final results of the experiment will be distributed according to the updated probability distribution. This follows because *P* obeys the conservation equation derived from Schroedinger's equation, and so remains in step with the underlying pilot wave. The equilibrium condition is not arbitrary: it can be shown that over time a Bohmian system would tend towards this equilibrium state. There might be some differences from standard quantum physics in the early universe as the system tends towards equilibrium, but today everything is in equilibrium, and we would never be able to measure any difference between a world that matches the pilot wave interpretation and one where Copenhagen was in fact the correct interpretation.
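The equilibrium property can be checked numerically in the one case where the Bohmian trajectories are known in closed form: the freely spreading Gaussian packet, whose trajectories simply scale with the packet width (a standard textbook result). The sketch below samples initial positions from the initial *|ψ|²*, follows the trajectories, and confirms that the final positions are spread exactly as the evolved *|ψ|²* predicts. Units and parameter values are my own choices.

```python
import math, random

HBAR, M = 1.0, 1.0      # natural units
SIGMA0 = 1.0            # initial position spread of |psi|^2

def width(t):
    # Analytic spreading of a free Gaussian packet's position spread.
    return SIGMA0 * math.sqrt(1.0 + (HBAR * t / (2.0 * M * SIGMA0**2))**2)

def bohm_trajectory(x0, t):
    # For this packet the guidance equation integrates exactly:
    # each trajectory scales with the packet width.
    return x0 * width(t) / width(0.0)

def final_spread(n, t, seed=1):
    # Sample x0 from the initial |psi|^2 (a Gaussian of spread SIGMA0),
    # push each sample along its deterministic trajectory, and measure
    # the spread of the resulting ensemble.
    rng = random.Random(seed)
    xs = [bohm_trajectory(rng.gauss(0.0, SIGMA0), t) for _ in range(n)]
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean)**2 for x in xs) / n)

t = 3.0
print(final_spread(50_000, t), width(t))  # the two numbers agree
```

An ensemble that starts in equilibrium stays in equilibrium: the particle statistics track the evolving wave, which is exactly the claim made above.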

So it is easy to see why this interpretation is very appealing. It avoids all the philosophical problems of the Copenhagen, spontaneous collapse, and many-worlds interpretations. It allows us to carry over our interpretation and philosophy of classical physics practically unchanged. And it reproduces precisely the physics of non-relativistic quantum wave mechanics.

And I quite agree that, if non-relativistic quantum wave mechanics were the correct theory of nature, the pilot wave model would be a very serious contender for the correct philosophy of physics. That is not as strong a statement as saying that it is true -- one would have to show that there are no other contenders which duplicate the physics and lack any obvious philosophical problems or gaps -- but it would be difficult to rule it out.

However, the model described above, while it has some support among philosophers of physics, is rejected by almost all physicists, myself included. The reason for this is straight-forward: the non-relativistic Schroedinger equation is not the correct theory of quantum physics. It is not even an approximation to the correct theory (in the way that Newtonian mechanics is an approximation to relativistic mechanics, valid in the limit that the speed of light becomes infinite). Nor is it the correct non-relativistic limit of relativistic quantum field theory -- that is non-relativistic field theory, which is widely used in condensed matter physics. Quantum wave mechanics is just a theory that was a useful stepping stone to help us come to an understanding of relativistic quantum field theory, and it replicates some but not all of the ways in which quantum field theory differs from classical physics.

So the question is: can the equations of relativistic quantum field theory be recast in a similar way to how Bohm treated non-relativistic wave mechanics, to make the equations of motion for the various elements of a pilot wave model obvious? If it can, then great. It is back on the table. But if not, then it has to be abandoned.

## The problem with relativity

Obviously, the approach described above is not going to be compatible with relativity, because it is based on the non-relativistic Schroedinger equation. There is an alternative, which I ought to mention. The guiding equation for the pilot wave is derived from the appropriate Schroedinger equation for the system. The change in the particle position **q** is then described by the equation

$$\frac{d\mathbf{q}}{dt} = \frac{\mathbf{j}}{\psi^{\dagger}\psi}.$$

This approach is not derived from the non-relativistic Schroedinger equation, so it might be adaptable to relativistic theories. The problem for me is that its derivation is not as transparent as Bohm's, and thus it is harder to be sure that it is just a reformulation of the standard theory. The equation is derived from the standard quantum mechanical conservation of probability,

$$\frac{\partial\, (\psi^{\dagger}\psi)}{\partial t} + \nabla\cdot\mathbf{j} = 0,$$

where the probability current is given by

$$\mathbf{j} = \frac{\hbar}{2mi}\left(\psi^{\dagger}\nabla\psi - (\nabla\psi^{\dagger})\psi\right).$$

This is a standard result which follows directly from the Schroedinger equation. The relation to the guiding equation for the particle is obvious: the rate of change of the particle location is just the probability current divided by the probability density. The claim is then that if we understand the particles as being initially distributed according to a probability *ψ^{†}ψ*, then they will continue to be so distributed under this evolution equation. The difficulty I have with this is that it switches the interpretation of *ψ* from a probability amplitude to a pilot wave wavefunction. How is probability even meant to be understood for a single particle? Presumably in Bayesian terms. In this case, the probability would either have to be subjective (Bayesian proper) or conditional (the logical interpretation of probability), and in both cases there seems to be an issue in jumping between using *ψ* to represent an objective and unconditional pilot wave wavefunction and using it to represent a subjective or conditional probability.
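The guidance equation itself is easy to play with numerically. Here is a minimal sketch (ℏ = m = 1 by my own convention) of the velocity field it assigns, evaluated for two simple one-dimensional wavefunctions whose derivatives are supplied analytically.

```python
import cmath, math

HBAR, M = 1.0, 1.0  # natural units, my own convention

def velocity(psi, dpsi, x):
    # de Broglie guidance: v = j / (psi† psi)
    #                        = (hbar/m) Im(psi* dpsi) / |psi|^2
    p, dp = psi(x), dpsi(x)
    return (HBAR / M) * (p.conjugate() * dp).imag / abs(p) ** 2

k = 2.0
plane = lambda x: cmath.exp(1j * k * x)          # momentum eigenstate
dplane = lambda x: 1j * k * plane(x)
print(velocity(plane, dplane, 0.7))              # hbar k / m = 2.0

standing = lambda x: complex(math.cos(k * x))    # real wavefunction
dstanding = lambda x: complex(-k * math.sin(k * x))
print(velocity(standing, dstanding, 0.4))        # 0.0: a real psi carries no current
```

A momentum eigenstate guides the particle at the classical velocity *ℏk/m*, while any real wavefunction has zero current and so guides a stationary particle.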

Edit: In view of the debate in the comments below, I think it useful to show how de Broglie's equations preserve the frequency distribution of the particles. Here I use *|Ψ|^{2}* to represent the total probability distribution, and *ψ_{i}* to represent the wavefunction for each individual particle. The expected number of particles in a given region *Ω* is given by

$$N(\Omega) = \int_\Omega |\Psi|^2\, d^3x.$$

The rate of change of this over time is then given by

$$\frac{dN}{dt} = \int_\Omega \frac{\partial\, |\Psi|^2}{\partial t}\, d^3x.$$

Substituting in the non-relativistic Schroedinger equation then gives

$$\frac{dN}{dt} = \frac{i\hbar}{2m}\int_\Omega \left(\Psi^{\dagger}\nabla^2\Psi - \Psi\,\nabla^2\Psi^{\dagger}\right) d^3x.$$

This simplifies to

$$\frac{dN}{dt} = -\int_\Omega \nabla\cdot\mathbf{j}\, d^3x, \qquad \mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^{\dagger}\nabla\Psi - (\nabla\Psi^{\dagger})\Psi\right).$$

Substituting in the equation of motion for the particle then gives

$$\frac{dN}{dt} = -\int_\Omega \nabla\cdot\left(|\Psi|^2\,\frac{d\mathbf{q}}{dt}\right) d^3x,$$

which, via Stokes' theorem, where *A* represents the area bounding the region, leads to

$$\frac{dN}{dt} = -\oint_A |\Psi|^2\,\frac{d\mathbf{q}}{dt}\cdot d\mathbf{A}.$$

The left hand side of this equation can then be interpreted as the rate of change of the total number of particles in the region. The right hand side is interpreted as the flux of particles entering or leaving the region. The equality between the two means that the number of particles entering or leaving the region in a given moment of time is equal to the change in the number of particles in the region, i.e. that the frequency distribution of the particles remains proportional to the wavefunction over time.

The key equation in all this is equation (26). To derive equation (26), we needed to use the non-relativistic Schroedinger equation, so this form of the pilot wave theory is clearly only valid for that equation. There are, however, equivalent expressions for relativistic quantum mechanics, albeit somewhat more complicated ones (for example, the expressions for the probability density and probability current change). The equations of motion will be different, but you can still go through the same procedure and reach the same conclusion.

It becomes a bit trickier in quantum field theory, because the Schroedinger equation works differently. In all forms of quantum wave mechanics, the Schroedinger equation is a linear equation where some power of the time differential of the wavefunction is proportional to some power of the space differential of the wavefunction plus some stuff. The proof above relies on the equation being in that form. In quantum field theory, that is not the case. The space differential does not act directly on the Fock state, but on the creation and annihilation operators. For that reason, we generally do not discuss probability density or probability currents. So it is not clear to me that there is some equivalent to equation (26) in quantum field theory, with the right properties to allow you to derive the appropriate conclusion.

One needs to formally show the equivalence (in terms of experimental predictions) between the pilot wave model and the instrumentalist Copenhagen interpretation of quantum physics. Bohm achieved that for the non-relativistic Schroedinger equation, as described above. What is needed is to create an analogous model for relativistic quantum field theory.

The problem with combining the Bohm model with relativistic quantum theory is probably easiest to describe if we start by slightly reformulating the Schroedinger equation,

$$\psi(t + \delta t) = e^{-i\hat{H}\,\delta t/\hbar}\,\psi(t).$$

In the limit that the small time difference *δt* goes to zero, this reduces to the standard formulation of the equation listed above. One can still perform the same decomposition as above, work through the same calculation, neglect terms of order *(δt)^2* and above, and end up in the same place as the standard Bohm decomposition.

So why re-write the equation in this way? There are two reasons. Firstly, it is a natural way of formulating the Taylor series expansion in time,

$$\psi(t+\delta t) = \sum_{n=0}^{\infty} \frac{(\delta t)^n}{n!} \frac{\partial^n \psi}{\partial t^n} = e^{\delta t\, \partial/\partial t}\,\psi(t).$$

This is equivalent to the standard differential form of the Schroedinger
equation as long as the
wave-function is continuous between time *t* and time
*t + δ t*.

Secondly, the time evolution operator in quantum field theory cannot be expressed in terms of a straight-forward differential equation, as in the standard formulation of the Schroedinger equation. But we can write it in this exponential form.

So the time evolution operator becomes

$$U(t+\delta t, t) = e^{-i\hat{H}\,\delta t/\hbar}.$$
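The relationship between the exponential form and the differential form of the time evolution can be checked numerically on a toy system: repeatedly applying the first-order short-time step converges to the exponential operator as the step shrinks. The two-level Hamiltonian below is an arbitrary example of my own.

```python
import numpy as np

HBAR = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])   # a made-up 2-level Hamiltonian

def U_exact(t):
    # exp(-i H t / hbar) via the eigendecomposition of the Hermitian H
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t / HBAR)) @ evecs.conj().T

def U_steps(t, n):
    # n applications of the first-order short-time step (1 - i H dt / hbar)
    dt = t / n
    step = np.eye(2) - 1j * H * dt / HBAR
    return np.linalg.matrix_power(step, n)

t = 1.0
err = np.linalg.norm(U_steps(t, 10_000) - U_exact(t))
print(err)  # small, and it shrinks as the number of steps grows
```

With a time-independent Hamiltonian the two forms agree; the point made below is that in quantum field theory this naive expansion of the exponential is no longer available.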

For the simplest quantum field theory that contributes to the standard model, quantum electrodynamics, the Hamiltonian operator is given (schematically) by

$$\hat{H} = \int d^3x\left[\hat{\bar{\psi}}\left(-i\boldsymbol{\gamma}\cdot\nabla + m\right)\hat{\psi} + e\,\hat{\bar{\psi}}\,\gamma^{\mu}\hat{\psi}\,\hat{A}_{\mu} + \tfrac{1}{2}\left(\mathbf{E}^2 + \mathbf{B}^2\right)\right].$$

This needs some explanation. Firstly, *ψ* is no longer a single-particle wavefunction, but a Fock state, which basically counts how many particles are in each possible state. *A* represents the photon. It is a four vector, with its time component, *A_{0}*, representing the electric potential, and the three spatial components the magnetic potential. The operator with a hat over it, *Â*, combines the photon creation and annihilation operators, which represent the creation or destruction of a photon. Finally, we have the electron creation operator, *ψ̂^{†}*, and the electron annihilation operator, *ψ̂*. These represent the creation or annihilation of an electron. So if you apply a creation operator for an electron at a particular location and spin state to a Fock state without that electron, then it will output a Fock state with all the particles in the initial state plus that electron. If you apply an annihilation operator to a Fock state with that electron, then the output will be an otherwise identical Fock state but without that electron. Applying the annihilation operator to a Fock state without that electron gives zero, as does applying a Fermion creation operator to a state with that electron.
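These bookkeeping rules are mechanical enough to sketch in a few lines of Python. This is only a toy: anti-symmetrisation sign factors, which matter for real Fermions, are deliberately omitted, and the mode label is hypothetical.

```python
# A Fock state is represented here as a frozenset of occupied modes;
# ZERO stands for the zero vector, which is not the same as the vacuum.
ZERO = None

def create(mode, state):
    if state is ZERO or mode in state:
        return ZERO                     # Pauli exclusion: a† on an occupied mode
    return state | {mode}

def annihilate(mode, state):
    if state is ZERO or mode not in state:
        return ZERO                     # nothing there to destroy
    return state - {mode}

vacuum = frozenset()
e = ("electron", "x1", "up")            # hypothetical mode label
one = create(e, vacuum)
print(one)                               # Fock state containing one electron
print(annihilate(e, one) == vacuum)      # True
print(annihilate(e, vacuum) is ZERO)     # True: annihilating an empty mode
print(create(e, one) is ZERO)            # True: double occupation forbidden
```

The four printed cases correspond exactly to the four rules stated in the paragraph above.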

Finally, the *γ* in the equation above are matrices which describe interactions between different spin states, *m* is the electron mass, and *e* is the electron charge.

This is used to calculate amplitudes, from which we can calculate probabilities for particular outcomes given a particular initial state. For example, we might start with a state with an electron at a given location, and we want to calculate the amplitude that we will measure, after a set period of time, two electrons and a positron at their own locations. Usually we create the initial state by applying creation operators to a vacuum state with no particles (so, in this example, we would just apply a single electron creation operator). We then apply the time evolution operator to give the Fock state at the next instant of time. Then again to get the next instant of time. And so on until we reach the time we need. This will give a Fock state which is a superposition of various different possible outcomes. Then, to calculate the amplitude for the particular outcome we are interested in, we apply annihilation operators for the particles in the end state (which will exclude any terms in the superposition which don't contain those particles), and contract against the vacuum state (which will exclude any terms in the superposition which contain additional particles), and this, after normalisation, gives us an amplitude for the outcome. This amplitude can then be converted into a probability, which is compared against experiment.
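The create-evolve-project pipeline can be mimicked with a deliberately tiny toy model: two basis configurations standing in for Fock states, a made-up Hamiltonian coupling them, and amplitudes extracted by taking inner products with the outcome states. Nothing here is real QED; it only shows the shape of the calculation.

```python
import numpy as np

# Toy two-configuration "Fock space": basis state 0 = the initial single
# electron, basis state 1 = a hypothetical multi-particle configuration.
H = np.array([[0.0, 0.3], [0.3, 1.0]])   # made-up couplings

def evolve(state, t):
    # Unitary evolution exp(-i H t) via eigendecomposition
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ state

initial = np.array([1.0 + 0j, 0.0])      # "created from the vacuum"
final_state = evolve(initial, 2.0)

# Projecting onto each outcome plays the role of "apply annihilation
# operators and contract against the vacuum" in this toy.
amps = [np.vdot(np.eye(2)[k], final_state) for k in range(2)]
probs = [abs(a) ** 2 for a in amps]
print(probs, sum(probs))  # outcome probabilities; they sum to 1
```

Because the evolution is unitary, the probabilities over a complete set of outcomes always sum to one, which is what licenses comparing them against measured frequencies.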

This discussion is a bit too simplistic (which I am sure will cause many readers to scream in terror at what the non-simple discussion might look like), as I have excluded a discussion of renormalisation, or a detailed discussion of spin and polarisation. I have also excluded the non-Abelian and Higgs fields, so this is just quantum electrodynamics rather than the full standard model. But it is good enough for my purposes here.

Why don't we simply expand the exponentials, as is usual in wave mechanics? Because such an expansion is only valid if the Fock state is continuous over the time interval. That is not the case, because each creation or annihilation event represents a discontinuous change to the Fock state. We also consider each time slice individually because the Fermion creation and annihilation operators are non-commutative, which makes combining exponentials, as one would do for exponentials of normal commutative functions, a non-trivial task.
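The non-commutativity point is easy to demonstrate concretely: for matrices, *e^A e^B* equals *e^{A+B}* only when *A* and *B* commute. A quick check with the Pauli matrices (a standard non-commuting pair):

```python
import numpy as np

def expm_h(theta, P):
    # exp(i * theta * P) for a Hermitian matrix P, via eigendecomposition
    evals, evecs = np.linalg.eigh(P)
    return evecs @ np.diag(np.exp(1j * theta * evals)) @ evecs.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

lhs = expm_h(1.0, sx) @ expm_h(1.0, sz)   # e^{i sx} e^{i sz}
rhs = expm_h(1.0, sx + sz)                # e^{i (sx + sz)}
print(np.linalg.norm(lhs - rhs))          # clearly non-zero: [sx, sz] != 0

# For commuting exponents the naive combination does work:
lhs_c = expm_h(1.0, sx) @ expm_h(2.0, sx)
rhs_c = expm_h(3.0, sx)
print(np.linalg.norm(lhs_c - rhs_c))      # essentially zero
```

This is the same obstruction, in miniature, that makes combining the time-slice exponentials of Fermion operators a non-trivial task.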

So the challenge to adapt the Bohm interpretation to quantum electrodynamics can be simply expressed. How does one perform a similar decomposition that will reduce the time evolution operator for QED into a form that looks a bit like classical mechanics? To my knowledge (which admittedly is not complete) this has not been successfully achieved.

So the first question is: do we need to do this? In my view, yes. Obviously the solution won't be Bohm's decomposition described above. There are other options for pilot wave theories. For example, de Broglie's original approach was somewhat simpler in its construction and might be more directly applicable to a QFT. But ultimately, to establish the interpretation as a viable theory, you need to come up with a set of equations describing the dynamics of the particles and the underlying pilot waves which both resemble the sort of equations you get in Newtonian mechanics and are equivalent to the QFT evolution (the equation above would be a starting point; in practice one would also need to incorporate non-Abelian gauge theories and renormalisation). As in the case of the Bohm theory, this would just be a reformulation of the mathematics, and wouldn't actually change the theory. But it would be a reformulation that suggests the philosophical interpretation. Until this is done, it is not proven that the pilot wave interpretation can be applied to the standard model. When there are alternative interpretations which are clearly applicable to the standard model (even if they have philosophical problems), the pilot wave interpretation will be the least appealing option for physicists. Remember, a viable philosophy of quantum physics has to both make sense philosophically and actually reduce to our current best model of physics. Physicists care more about that second condition than the first, and a failure to satisfy it is a big red flag.

Of course, that no solution to the problem is currently known does not mean that there is no solution. Maybe it just needs more research. But there are far more challenges to overcome than in the case of the non-relativistic Schroedinger equation.

The first problem is that both the non-relativistic Schroedinger equation and Newton's laws (whether applied to particles or wave equations) are described by linear differential equations. Quantum Field Theory cannot be expressed in that mathematical form. This is why I personally regard Quantum Field Theory as a different theory to non-relativistic quantum wave mechanics, rather than simply an adaptation of it to relativistic fields. Yes, Quantum Field Theory contains differential operators. But they are applied to creation and annihilation operators, not the Fock state itself, which is the generalisation of the wavefunction. Clearly, if the theory is not reducible to a linear differential equation, then it is not reducible to Newtonian mechanics.

The problem comes in that the creation and annihilation operators are not expressible as differential operators in general. I should make a caveat here. One of the toy models used to introduce QFT is the simple harmonic oscillator in quantum wave mechanics. In the one dimensional case, the potential is *ω^{2}x^{2}*, with *ω* some constant value. The solution is a system with an infinite number of energy levels, starting from *ℏω/2* and increasing in steps of *ℏω*. The energy levels can be found by solving the non-relativistic Schroedinger equation, but there is an alternative approach, which is to note that there is a pair of operators, *a* and *a^{†}*, such that *a^{†}* acting on an eigenstate of energy *E_{n}* gives an eigenstate of energy *E_{n+1}*, while *a* does the reverse. These operators are effectively the sum of the location operator and its differential (barring a few constants scattered around here and there). The analogy with creation and annihilation operators in quantum field theory is obvious. Can creation and annihilation operators be expressed in terms of differential operators in a similar way? If so, it would allow us to reduce the time evolution operator for quantum field theory to a straight-forward differential equation. However, this is not as easy as it would seem. Firstly, the raising and lowering operators correspond to a spin zero Boson field; Fermion fields and gauge fields would have to be represented in a different way. Secondly, the creation and annihilation operators in QFT represent a particle localised to a particular location or momentum (depending on the basis), while the wavefunctions for the various excited states of the harmonic oscillator are spread out over a region of space.
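The raising and lowering operators can be written down explicitly as matrices in a truncated energy basis, which makes their action, and the analogy with creation and annihilation operators, concrete. A small sketch (the basis truncation at *N* levels is an artefact of the finite representation):

```python
import numpy as np

N = 8  # truncated harmonic-oscillator basis |0>, ..., |N-1>
a = np.zeros((N, N))
for n in range(N - 1):
    a[n, n + 1] = np.sqrt(n + 1)   # lowering: a |n+1> = sqrt(n+1) |n>
adag = a.T                          # raising operator is the transpose

# adag raises |n> to sqrt(n+1) |n+1>
n = 3
ket = np.eye(N)[n]
raised = adag @ ket
print(raised[n + 1])               # sqrt(n+1) = 2.0 for n = 3

# The canonical commutator [a, adag] = 1 holds away from the truncation edge
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True
```

The commutator fails only in the last row and column, where the truncation cuts the ladder off; in the full infinite-dimensional space it holds exactly.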

And how should we interpret the creation and annihilation operators in a pilot wave model? In the standard interpretations, they are associated with the creation and annihilation of a particle (or, at least, something along those lines, differing in details according to the interpretation). However, in the pilot wave model, the Schroedinger evolution describes the evolution of the underlying pilot wave. In this case, presumably, the theory would be interpreted with a field rather than a particle ontology, and one might consider them as creating and annihilating field excitations rather than particles. But these creation and annihilation events would act on the pilot wave, not the particle trajectories, because in de Broglie's formulation of the pilot wave model the evolution equation is for the pilot wave and not the particle. Yet we experimentally observe the creation and annihilation of particles. So the question is how the creation or annihilation of an excitation in the pilot wave is related to the creation or annihilation of a particle, whose motion is described by an equation derived from the classical Hamilton-Jacobi equations.

Secondly, there is the issue of how we might decompose the theory into a part that describes the particles and one that describes the pilot waves. Is that decomposition applied to the creation and annihilation operators, or to the Fock state itself? In both cases, there are problems. If we apply the decomposition to the Fock state (so write it as a product of a pilot wavefunction and a Fock state describing the particles) and leave the operators as they are, then the end result will not be a linear differential equation. Furthermore, the initial state in the prescription is usually constructed by applying creation operators to the Fock state. If we decompose the operators into (say) the product of an operator that acts on the particles and another operator that acts on the pilot waves (while expanding the Fock state to count both particles and pilot wave excitations), then that is very different to what was done by Bohm and would not be mapped to a form similar to classical mechanics.

Next we have the problem of spontaneous creation and annihilation of particles. This is not something which can happen in classical mechanics, where you have a set of simultaneous differential equations describing a fixed number of particles. The non-relativistic Schroedinger equation likewise does not permit the particle number to change. In quantum field theory, it obviously does. The solution to this problem would be to regard quantum field theory as a theory of fields rather than a theory of particles, so you have a single differential equation describing each field. (I personally prefer a particle ontology for QFT, as that is more easily mapped to what we observe, but this is just a personal preference and might not mean anything in practice.) Particles would then correspond to localised excitations of the fields. But then you encounter a different problem. A particle creation or annihilation event would be a discontinuous change in the field. So, for example, when an electron emits a photon, before the emission you would have no excitations in the photon field, and afterwards you would have one. It is an instantaneous change (unless we want to change the predictions of standard model physics), and thus a discontinuous change to the field. But this cannot be the case if the field dynamics are described by a linear differential equation, as would be required if the pilot wave model is to be mapped to a version of classical physics.

There are a few proposals to overcome this problem which I am aware of. For example, this work (expanded on here) proposed introducing spontaneous jumps into Bohmian mechanics to correspond to each creation and annihilation event. The problem with this is that it loses its philosophical attractiveness. You no longer have a classical dynamics, but a classical dynamics plus spontaneous events which modify the pilot wave, which seems no better to me than a spontaneous wavefunction collapse model. The Dirac sea approach assumes a constant Fermion number (particles minus anti-particles), which is inconsistent with various weak decays. Field ontology approaches to Bohmian mechanics have difficulties defining the equilibrium measure for Fermion fields, and so far have not quantised the gauge fields. Another approach hides the problem of particle creation and destruction in some hand-waving over entanglement with the measuring device. This, in my view, just hides the problem rather than removing it.

So while there have been attempts to solve the problem of particle creation and annihilation, thus far none of them has proved satisfactory. Work has been done here, and continues to be done, but all of the approaches being tried have gaps in their physics, and many of them require a modification of the philosophy, undermining the main advantage of the pilot wave interpretation.

Then there is another problem related to the interpretation of the apparent indeterminism of the experimental results. The quantum mechanical formulation of the pilot wave model depends on an equivalence between the square of the wavefunction and an epistemic probability for the original location of the particle. But there is no wavefunction in quantum field theory. What we have instead is a Fock State, but this is not directly translatable to the amplitude that a particle is in a particular location, or even the amplitude that there is a particular set of particles. To construct an amplitude, we need to take the inner product between the Fock state and another state (and then normalise). The Fock state is not a simple function of location. The initial Fock state cannot therefore be directly translated into an epistemic probability that the given particle is at a particular location. One can construct such an location dependent amplitude: perhaps by taking the inner product of the Fock state with a state representing the location of the desired particle, or by constructing a theory with a superposition of Fock states each multiplied by an appropriate amplitude. These amplitudes can then be converted to an epistemic probability for the initial location of the particle. But it is not these amplitudes which undergo Schroedinger evolution: it is the Fock state. Presumably, then, the Fock state would have to in some way represent the pilot wave. The amplitudes derived from taking the product of the Fock state with a target would correspond to the epistemic probability. But these are not the same thing, and they evolve according to different equations. Therefore it seems unlikely that we can make a mathematical equivalence between the modulus square of the pilot wave and the epistemic probability for the location of the particle, and without this the pilot wave interpretation breaks down. 
It is also not clear what it means to take the derivative of the Fock state, as required to construct the probability current, as, unlike the wavefunction of quantum mechanics, the Fock state is not a direct function of location.
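To make the contrast concrete, one standard way of extracting location-dependent amplitudes from a Fock-space state (a sketch in my own notation, for a single scalar field) is to project onto states built from the field operators:

$$ \psi_n(x_1,\dots,x_n;t) \;\sim\; \langle 0 |\, \hat\phi(x_1)\cdots\hat\phi(x_n)\, |\Psi(t)\rangle. $$

Each *ψₙ* looks like an *n*-particle wavefunction, but it is the state *|Ψ(t)⟩*, not any individual *ψₙ*, which undergoes the Schroedinger evolution.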

Furthermore, amplitudes in quantum mechanics are absolute: *ψ(x)* represents the amplitude that the particle can be found at location *x*. This need not be conditional on anything, and so can be mapped to the absolute value of a pilot wave. Amplitudes in quantum field theory are conditional. We calculate the amplitude for a final state conditional on a particular initial state (and the correct formulation of the Hamiltonian). Because these amplitudes are conditional on an initial state, they cannot be mapped to a non-conditional pilot wave. In the pilot wave interpretation, the pilot wave takes on a particular value at any moment of time, and this is objective and not conditional on anything. Possibly this can be avoided by using the initial state of the universe as the initial state for the Fock state evolution. This would work, but only if the dynamics is deterministic, which would rule out some of the models referenced above that deal with the creation and annihilation of particles.

Possibly the most cited problem with the pilot wave interpretation, with
regards to its consistency with special relativity, is its non-locality.
The equation of motion for *S* is divided by the square of the
value of *R*. Dividing by something which could (in principle) be zero is
one problem, but the main one is that for systems with multiple particles, the divisor
in this equation depends on the wavefunction at the locations of all the particles in the
system. The equation of motion for one particle depends on the value of the wavefunction
at other locations in space. This is a feature of the pilot wave model which cannot be avoided,
even if we replaced the non-relativistic formulation of Bohm with a relativistic
Schroedinger equation.
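For reference, the decomposition being described here is the standard textbook one: substituting *ψ = R e^{iS/ħ}* into the many-particle Schroedinger equation gives

$$ \frac{\partial S}{\partial t} + \sum_k \frac{|\nabla_k S|^2}{2m_k} + V - \sum_k \frac{\hbar^2}{2m_k}\frac{\nabla_k^2 R}{R} = 0, \qquad \frac{\partial R^2}{\partial t} + \sum_k \nabla_k \cdot \left( R^2\, \frac{\nabla_k S}{m_k} \right) = 0, $$

with the particle velocities given by the guidance equation *dQₖ/dt = ∇ₖS/mₖ*, evaluated at the actual positions of all the particles. The division by *R* in the quantum potential term, and the evaluation of *S* and *R* on the whole configuration at once, are where the difficulties described here enter.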
The response to this is that we know that quantum physics is not local in
any case, due to entanglement and EPR type experiments. And, I think,
that is correct -- there is non-locality in quantum physics which can't
be eliminated (at least without introducing something like backwards time
causation, or other ideas which are equally problematic). Does this
leave the pilot wave interpretation off the hook with this objection?

The first question is why non-locality, or simultaneous action at a distance in any valid reference frame, might be a problem. The issue is that it is inconsistent with Lorentz symmetry, which is the driving force behind special relativity, and embedded in the standard model of particle physics. Lorentz symmetry, among other things, forbids non-local interactions, so if there were non-local interactions, the Lagrangian would fail to satisfy Lorentz symmetry in one way or another. As far as we can tell, and it has been measured to incredible precision, Lorentz symmetry is exact. It is also one of the assumptions behind the construction of the standard model, and prevents a large number of additional terms from entering the Lagrangian. One can either say that there is Lorentz symmetry, or one needs to introduce those extra terms into the Lagrangian, multiplied by suitable constants, and then come up with an ad-hoc explanation of why those constants are too small to be measured. One of the implications of Lorentz symmetry is that interactions between particles should be local. So a non-local interaction between two particles would break a lot of established physics. And if the equation of motion for the particle is non-local, then that implies a non-local interaction.

The problem with this argument, of course, is, as noted above, that in quantum physics we know that in EPR and related experiments there are non-local correlations between certain measurement events. The natural "classical" explanation -- that these events report the results of classical parameters describing the entangled particles, established when they were in contact -- leads to predictions different from what is observed (albeit that this calculation rests on certain other assumptions which would cause even greater problems if broken). And this is also a direct consequence of the standard model of particle physics. So the standard model enforces locality in some contexts, and enforces its violation in others. The question then becomes whether the non-locality in the guiding equation of the pilot wave model is the sort of non-locality which is allowed in the standard model, or the sort which is forbidden.

There is an important distinction to be made between what I call substance causality and event causality. Substance causality asks which particle state this particle state emerged from. Event causality asks why this particular event with this particular outcome occurred. In terms of the Copenhagen interpretation applied to relativistic quantum field theory, substance causality describes particle creation and annihilation, and particle propagation. These are described by the Hamiltonian or Lagrangian of the theory, and Lorentz symmetry is conserved in the Lagrangian. Event causality is in play when there is a wavefunction collapse, for example caused by a measurement event. The process of wavefunction collapse is described by Born's rule (albeit something of a mystery philosophically), and unrelated to the Hamiltonian evolution of the wavefunction. And, of course, wavefunction collapse is indeterminate. There is no known physical cause (in the Copenhagen interpretation, and others which admit indeterminacy) why a particular outcome should happen. All we can do is assign a probability to each outcome, and use that to predict a frequency distribution. The non-local correlations occur in the outcomes of these events. In Bell's theorem, it is not even individual events which provide evidence for non-local correlations, but a comparison between two frequency distributions. If the detectors in a spin experiment are perfectly aligned, then the spins measured for an individual pair of entangled particles would be correlated -- but this can be explained using a classical hidden variables theory. What the classical hidden variables theory cannot explain are the frequency distributions that arise when the two detectors measure spin or polarisation in different bases.
This suggests that the correlations between the measurement events for individual entangled particles (as opposed to comparing frequency distributions of numerous independent events) are also non-local. But in any case, the non-locality arises from wavefunction collapse rather than any particle propagation or creation or annihilation, and as such does not conflict with the standard model's use of Lorentz symmetry, or special relativity in general.

The pilot wave interpretation, however, lacks this clear distinction between measurement outcomes and particle propagation. The non-locality arises in the equations of motion for the particle. When converted to a quantum field theory, this would imply Lorentz symmetry breaking terms in the Hamiltonian and Lagrangian of the theory. There would be creation and annihilation events at a distance (i.e. a change in the pilot wave here would lead to a particle creation event over there), and the Lagrangian would not be Lorentz invariant. This matters because the assumption of Lorentz symmetry is crucial in the construction of the Lagrangian, and explains why numerous possible terms describing potentially observable interactions are not present in reality. Even if one could construct a pilot wave theory which preserved apparent Lorentz symmetry for the observed particles, this would be somewhat ad-hoc and artificially imposed. It lacks the naturalness we see in the conventional construction of the Lagrangian, where the symmetry is exact from the start. One would need a Lagrangian which is not Lorentz invariant, but fine tuned so that apparent Lorentz symmetry emerges from it. If you break a symmetry somewhere in the Lagrangian, there is no reason for it not to be broken everywhere, including in the emergent theory. Even if a version of the pilot wave model were constructed from the standard model Hamiltonian, it would just create this new fine-tuning problem.

The technical term for having a privileged reference frame is foliation. Non-relativistic physics creates a natural foliation (due to its underlying Galilean relativity), so the problem of a preferred reference frame is not an issue. In special relativity it becomes unnatural, and as stated, in quantum field theory it would destroy one of the primary motivations for restricting the Lagrangian to its standard form. All attempts to create a relativistic pilot wave theory that I am aware of introduce foliation. This work by Detlef Duerr and collaborators discusses that in detail. There are two models discussed: first, where the foliation is introduced a priori; and second, where it emerges from the wavefunction. The conclusions of that paper are that while pilot wave theory involves a foliation which naively breaks the symmetries of special relativity, the ability to construct a foliation dynamically from the wavefunction implies that this would be true for any formulation of quantum physics. So the pilot wave model is no worse than any other interpretation. I am not, however, convinced by this argument. In the pilot wave model, the foliation is required for the underlying ontology. In other models, it is merely an artefact of one way of looking at the system, with no fundamental dynamical or ontological relevance. The success of the standard model, which is consistent with relativity, and which does not require any foliation to calculate the Lagrangian or an amplitude, shows that a manifestly relativistic quantum theory is possible. But it would not be possible to create a manifestly relativistic pilot wave model, as the construction of the theory requires a privileged frame. In my view, the necessity of foliation remains a major problem for any attempt to create a pilot wave model.
Even if a pilot wave model consistent with the empirical Lorentz invariance of the standard model could be produced, it would leave open the question of why there should be that symmetry in observation when it is not present in the underlying ontology. While the pilot wave model might solve the problem of wavefunction collapse, it would open up this (in my view even worse) problem. This undermines the argument for the pilot wave model that it is a clean understanding of quantum physics, without any philosophical loose ends.

I am outlining difficulties which stand in the way of constructing a viable Pilot wave model from the standard model Hamiltonian. The next one in my list concerns gauge symmetry. The gauge transformation can be thought of as a local rotation of the phase of the particle wavefunction (and it also modifies the gauge fields). Local here means that the variable parametrising the transformation varies from one place to another; in a global symmetry the variable is constant across all space. The gauge transformation also simultaneously modifies the gauge field -- again this can be expressed in terms of a rotation. The standard model Lagrangian is invariant under gauge transformations, and, once again, in the standard construction and motivation for the standard model, this is crucial in explaining why various gauge-symmetry breaking terms are not present in the Lagrangian.

However, when we look at the decomposition used to construct Bohm's pilot wave model, we see that the particle variable *S* is just a phase of the wavefunction. This means that it can be eliminated through a gauge transformation. This is obviously troublesome, because this parameter represents the observed particles, and parameters which can be absorbed into gauge transformations are not observable (this explains why we only see one Higgs boson rather than four). This again does not rule out the pilot wave interpretation -- maybe one can use a different decomposition -- but it shows that Bohm's method does not have a direct analogue in QFT. One needs a different approach.
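To spell this out (a textbook sketch, not tied to any particular pilot wave proposal): for a particle of charge *q*, a gauge transformation with parameter *χ(x, t)* acts as

$$ \psi \to e^{iq\chi/\hbar}\,\psi, \qquad \mathbf{A} \to \mathbf{A} + \nabla\chi, \qquad \phi \to \phi - \frac{\partial \chi}{\partial t}. $$

Writing *ψ = R e^{iS/ħ}*, this shifts *S → S + qχ* while leaving *R* untouched, so choosing *χ = −S/q* removes the phase entirely.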

A similar difficulty applies to representing gauge fields in a pilot wave model. In the standard model, the representation of the gauge fields contains redundant variables due to the gauge symmetry. There are various configurations of gauge fields which are physically equivalent. This makes some calculations difficult. Integrating over these configurations in an incomplete series expansion of the theory leads to an infinite result, which is why in perturbation theory we tend to choose one particular gauge before proceeding. In practice, whichever gauge you choose, you will get the same results if you apply it consistently. The process of gauge fixing in a non-Abelian theory (the weak interaction and QCD) does add an additional term into the path integral, which is usually interpreted in terms of pseudo-particles known as ghosts. These are just artefacts of the method of calculation, but contribute to the Feynman diagrams.

A pilot wave re-interpretation of the standard model would have to accommodate these additional redundant degrees of freedom. In classical mechanics, one also has gauge symmetry in the electric and magnetic potentials, but there the observed quantities are the electric and magnetic fields. The potentials are not physical -- just an artefact of the calculation -- and so the redundancy caused by the gauge symmetry can be hidden away. In quantum physics, the potentials take centre stage, and the electric and magnetic fields a more secondary role. In the pilot wave decomposition, there would (presumably) be a "particle" representing the photon, with an underlying unobservable pilot wave for the photon. This particle is in a determinate state, which would include being in a particular gauge. The only way I can see that this would work is for the pilot wave interpretation to privilege a particular gauge. In effect, there would be some natural gauge fixing before we start constructing the theory. The gauge symmetry of the apparent Lagrangian would then once again have to emerge from the theory. That would lead to another fine-tuning problem. If gauge symmetry is not built into reality, then why should the theory be tuned precisely so that it appears to emerge when we reconstruct the full standard model Lagrangian? And how does this natural gauge fixing avoid ghost fields?

Next, we have the problem of renormalisation. This is an essential part of the quantum field theory formalism. So far I have been discussing the bare fields, as they appear in the simplest way of constructing the Lagrangian. The problem is that one can perform changes of basis, mixing the electron, quark, photon and gluon fields in the Lagrangian. The creation and annihilation operators that give us the initial and final states need to correspond to whatever basis represents the particles in reality. We have to change the basis used in the Lagrangian to match this. Failing to do so leads to inconsistencies in the calculation, which manifest themselves as infinities. In practice (in perturbation theory), the way this is done is to regulate the theory, and then modify the electron mass, normalisation, and electron-photon interaction at each order in perturbation theory so that the infinities disappear when the regulator is removed. The standard regulator used is dimensional regularisation (i.e. represent the theory in 4-ε dimensions, which is sufficient to remove the infinities), as this is consistent with gauge symmetry.

I don't see this as a fundamental bar to deriving a pilot wave model from quantum field theory, but it adds an additional complication, which I have not seen addressed by those trying to reconstruct a field theory version of a pilot wave model. The starting action for the decomposition would not be the bare action represented above, but the far more complicated action for the renormalised fields. And, indeed, the renormalised action is unknown -- one does not need to know it in the standard methods of computation, as one can renormalise after computing the Feynman diagrams in perturbation theory or after calculating the amplitudes for the bare fields in a non-perturbative lattice calculation. But, I think, when constructing a pilot wave model one would have to start with the renormalised fields, as there is no correspondence to the basis transformations needed for renormalisation in a classical system. One would have to construct something similar to Newton's equations of motion for pilot wave "particles" that correspond to the renormalised fields. It is not obvious that doing so for the bare fields would allow you to construct a similar structure once you have applied the renormalisation procedure.

## Other objections to the Pilot wave model

There have been some attempts to mimic particular pilot wave models in laboratory experiments, for example using surface waves on an oil bath to represent the pilot waves and a silicone droplet to represent the particle. These models have had some success in producing results similar to those seen in some quantum systems. A study by Andersen, Bohr and others is significant because it shows that the double-slit result of quantum physics cannot be replicated by a pilot wave model where the particle interferes with the wave. This is because the presence of the particle on one path or another breaks the symmetry of the system and consequently of the pilot wave. I am not fully convinced that these studies rule out pilot wave models in general: there are differences between their setup and the actual quantum mechanical models. It is not clear to me that this applies to all pilot wave models (the paper states that it applies to a de Broglie style model, but not Bohm's, which I illustrated above), but it would have to be borne in mind when constructing a pilot wave model for quantum field theory. The studies need to be mentioned, but I am not sure that they say anything conclusive.

## Conclusions

There are a lot of philosophers of physics with a fondness for the pilot wave model. And it is easy to see why: it promises to reproduce the results of quantum physics without having to make major changes to the philosophy behind Newtonian mechanics. However, it has not received wide acceptance among physicists. There were historical objections raised against it, but today the primary objection is the one I have given here: Bohm's approach was based on a reformulation of the non-relativistic Schroedinger equation. While it is possible to extend this argument to the Dirac equation, the structure of relativistic quantum field theory is very different from that of wave mechanics: it is not based around a simple linear differential equation. It seems fairly clear that a reformulation along the lines of that done by Bohm is not possible. Without such a reformulation, it has not been proved that the theory is experimentally equivalent to the standard picture.

Having done the research for this post, however, I find that summary dismissal to be a little too quick. There is work being done on combining the pilot wave model with relativistic quantum field theory. However, this work has not progressed sufficiently to demonstrate that it is possible; and much of it undermines the philosophical simplicity which is the main appeal of the pilot wave interpretation.

And this is a big deal for the physicist. Any interpretation of quantum physics has to be shown to be mathematically equivalent to the instrumental Copenhagen interpretation of the standard model, as that is what has been experimentally tested. Unlike most interpretations of quantum physics, this has not been proved for the pilot wave model. Now "not proved" is not the same as saying it is false; but there are strong reasons to suspect that it cannot be shown to be consistent with the standard model, at least not without (for example) violating Lorentz invariance at the ontological level (even if an apparent Lorentz invariance might emerge when we consider the experimental results), or introducing spontaneous jumps from one particle number sector to another, which would mean that the dynamics is not modified Newtonian mechanics, and we need to come up with a new philosophy after all. When there are other interpretations of quantum physics which lack these issues, it is difficult for a physicist to get enthusiastic about this interpretation.

**Reader Comments:**

**Matt's response (part 2)**

**QFT is not fundamentally different from QM**

My responses to your main objections to PWT will depend in part on a claim that you evidently disagree with. I don't believe there is the kind of fundamental divide between quantum mechanics and quantum field theory that you describe in this post and in various other places on your blog; as a consequence, QFT is at least in some ways more amenable to a pilot-wave interpretation than it first seems.

The first point to this claim is that QFT can be done in the Schrodinger picture, with a quantum state (e.g., describing a superposition of Fock states) evolving linearly and continuously via a Schrodinger equation (with the relevant Hamiltonian operator, e.g., the QED Hamiltonian). QFT ladder operators are not relevantly different from QM operators in this regard; the fact that they "discontinuously" change the state only means that the rate of change of the state points in a different direction from the state itself in the Hilbert space, which is entirely par for the course in QM (in fact, it has to be the case for unitarity). The fact that QFT is usually done in the Heisenberg or interaction pictures, or using propagators or path-integrals to calculate amplitudes between specific incoming and outgoing states, is entirely a matter of convenience, not because the Schrodinger picture is impossible.

(At this point I'm not addressing how the quantum state and the ladder operators are to be represented in a way amenable to PWT - I'll come back around to that - just that the ladder operators are no obstruction to having a quantum state continuously evolving according to a Schrodinger equation.)

The second point is that there really is no such thing as quantum field theory in the first place: instead you have a family of quantizations of discretized approximations to field theory. You can do calculations with this family of theories, and take the limit of the results of those calculations as the discretization gets finer and finer... but the continuum limit for the theory itself appears not to exist; it is not mathematically well-defined.

Now, I recognize that I have to back that claim up, so I'm calling to the stand David Wallace's "In defence of naivete: the conceptual status of Lagrangian QFT" [2] and Gerard 't Hooft's "The Conceptual Basis of Quantum Field Theory" [3] as my primary witnesses. Wallace outright states that QFT can only be put on firm conceptual footing if we take the momentum cutoff seriously (and it is also clear in his discussion that the usual Schrodinger evolution is valid in QFT). 't Hooft emphasizes the importance of restricting to a finite number of degrees of freedom in order to make the principles of quantum mechanics applicable, and he states that the continuum limit of most QFTs (specifically, non-asymptotically-free ones) is not well-defined - including the Standard Model. 't Hooft even says that dimensional regularization (which doesn't prima facie look like working with a finite momentum cutoff) is basically just a mathematical trick designed to separate out the ultraviolet behaviour, so conceptually, it is doing the same thing.

Both of them argue that despite this situation, QFT is internally consistent (at the very least if considered as an effective field theory, approximating some unknown, more fundamental theory) and completely workable as a physical theory. Which I would agree with - from an instrumentalist perspective. From a scientific realist perspective, trying to answer questions like "what is the world really made of?" and "what is really happening at atomic and subatomic scales?", it still leaves something to be desired.

Here is the relevance of all of this to pilot-wave theory: as a basis for a theory which is trying to posit a clear ontology and precise dynamics for that ontology, QFT as it is usually practiced is far from solid ground - it is a shifting sand of theories at different scales with different values of the fundamental constants. In lieu of a more fundamental theory (in relation to which the Standard Model stands as an effective field theory), what one could do instead is consider the easiest possible well-defined theory which reproduces the behaviour of the Standard Model - which is just a QFT with a specific finite momentum cutoff - and build a pilot-wave theory based on that. This is inelegant, to say the least. But the inelegance is an artifact of the mathematical shiftiness of QFT, related to the fact that we ultimately have to view the Standard Model as an approximation to a deeper theory. Given the state of things, I think this is a legitimate goal. And if looking at QFT from a pilot-wave perspective helps to find that deeper theory, or helps to uncover new ways of dealing with the mathematical shiftiness, even better.

It is also, conceivably, an achievable goal. A regularized QFT has a finite number of degrees of freedom and can be cast in the Schrodinger formulation. If one can find an appropriate configuration space representation of the quantum state, the Schrodinger evolution of the state can be used to derive a guidance law for the configuration (a velocity field in configuration space, and/or probabilistic jump rates between discrete sectors of the configuration space, if it has them), and you have a de Broglie pilot-wave theory. This is basically what Bell suggested for extending PWT to cover QFT, and is the path Duerr et al followed in formulating Bell-type QFTs. (Also, Duerr et al's work has the potential to considerably alleviate the inelegance of this approach, especially if their "interior boundary condition" method works out. Duerr passed away in 2021, I believe; I hope others take up his work.)

**Matt's response (part 3)**

In your section headed "The problem with relativity" you basically put forward seven different objections or problems that PWT faces in achieving compatibility with QFT (only one of which has to do with relativity specifically; the rest are indifferent between relativistic and non-relativistic QFTs, but no matter). I'll respond to them in turn.

**(0) - the challenge**

Before you begin laying out specific difficulties, you state the challenge of decomposing the QED time evolution operator into "something that looks like classical mechanics". But this isn't necessary, as de Broglie's 1st-order formulation of PWT demonstrates: it simply adds the guidance equation to the evolution of the quantum state (and discards the now unnecessary measurement axioms and Born rule). Empirical equivalence with the orthodox theory can be demonstrated (and I think is better demonstrated) in other ways; ultimately by showing that the "beables" of the theory form an image of the macroscopic world in a way that corresponds with the Born rule. Granted, this is going to be more difficult for QFT than for non-relativistic QM; in general, it will probably be necessary to closely analyse the theory to determine which beable states typically correspond to quantum states we would normally interpret as our everyday subatomic particles, for example. But this should be possible.

**(1) - representation of the quantum state**

The first problem you put forward is that the non-relativistic Schrodinger equation and classical mechanics are described by linear differential equations, and QFT can't be expressed this way. This seems to conflate a couple of things. Classical mechanics equations are not always linear, though they are differential equations. If only particles are involved, they are ODEs involving time derivatives; if fields are involved, they are PDEs involving derivatives in both time and physical 3-d space. On the other hand, as I've argued, every quantum theory can be expressed in the Schrodinger picture, seen as an ODE for the evolution of the quantum state over time. The real difference between NRQM and QFT is how the quantum state is represented, and concomitantly how the Hamiltonian operator that occurs in the Schrodinger equation is represented.

The ordinary pilot-wave theory (i.e., de Broglie-Bohm theory) relies on the fact that the quantum state is represented as a complex-valued (or complex-vector-valued) function of the positions of the particles, and that the Hamiltonian is represented in terms of derivatives of this function (which makes the Schrodinger equation into a PDE instead of an ODE). But care must be taken not to fall into confusion: the derivatives in the Hamiltonian operator are not derivatives in physical 3-d space. Rather, they are derivatives in configuration space, the space of possible coordinate values for the configuration of the system.

So what does this look like in QFT? One possibility is that we can represent the quantum state as a function on the space of possible field configurations (choosing a field ontology). The coordinates of the configuration space could either be the field values at each spatial point, or the values of its Fourier transform at each wavenumber (these are just two different coordinate systems on the configuration space). In this representation, creation and annihilation operators are represented with derivatives, analogously to the quantum harmonic oscillator; and quantum states are spread out in configuration space (not physical space). For obvious reasons, this works for bosonic fields but doesn't immediately seem possible for fermionic fields. Peter Holland suggested a way to do something similar for fermionic fields; however, it has problems.

A possibility that does work for fermions is the one suggested by Bell: discretize physical 3-d space and have particles jumping around in the lattice. The creation and annihilation operators have a discrete representation, but the configuration space, and motion within it, is discrete as well, so it works. (The guidance equation in this case gives the probabilities for jumps to occur between different configurations.) Bell-type QFTs smooth out Bell's suggestion by combining jumps between sectors of the configuration space (with different numbers of particles) with continuous change of the particle positions within a sector.
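For concreteness, the jump rate in these Bell-type QFTs takes (to the best of my understanding, sketching the notation of Duerr et al) the form

$$ \sigma_t(q' \mid q) = \frac{\left[ \tfrac{2}{\hbar}\, \mathrm{Im}\, \langle \Psi_t | P(q')\, H\, P(q) | \Psi_t \rangle \right]^+}{\langle \Psi_t | P(q) | \Psi_t \rangle}, $$

where *P(q)* projects onto the configuration *q* and *[x]⁺ = max(x, 0)*. The quantum state drives the jumps, while the actual configuration makes definite stochastic transitions between particle-number sectors.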

You question how creation and annihilation operators are to be interpreted in PWT, and the answer is simple: they are operators acting on the quantum state, same as in the orthodox theory. The difference is that, related to these operators, PWT can have actual particle creation and annihilation represented by the "hidden variables", the beables in addition to the quantum state. Whereas there really is nothing like this in the orthodox theory.

For example, in PWT with a particle ontology we could have a particle with a definite location decay at a definite time to other particles which shoot off on definite trajectories. In the orthodox theory, all you have is a gradual transition between superpositions of the state of the particle and the state of its decay products; there is no definite fact about when the decay occurs or how many particles are there at any point in time (or where they are, for that matter). The PWT has this gradually transitioning quantum state as well, and the quantum state drives the increasing probability of decay as the decayed state becomes more highly weighted in the superposition - which in turn makes the role of the quantum state, and the ladder operators acting on it, crystal clear. But in the orthodox theory, when all you have is the quantum state, no definite particle positions, not even a definite number of particles... what is the theory actually saying is there?

**(2) - wave/particle decomposition**

Your second objection asks how we can decompose the QFT evolution into a part that describes the pilot wave and a part that describes particles. But there is no such decomposition (not even in Bohm's version: the equation for the S-field must still be solved over the whole configuration space in order to solve for the R-field, so together they are equivalent to solving the Schrodinger equation, while the identification of the particle momentum with the gradient of S acts just like the guidance equation in de Broglie's version). The Schrodinger evolution of the quantum state is unchanged, and the guidance law is postulated complementary to that. As for how to find the guidance law, the basic form is probability current divided by probability density (both in configuration space, not in physical 3-d space). There are methods for working out the probability current, and the probability density is usually immediately apparent from the form of the quantum state.
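To make the basic form concrete (this is a standard presentation of the non-relativistic guidance law, added here for clarity rather than taken from the exchange itself): for an N-particle system with actual configuration Q = (Q_1, ..., Q_N),

```latex
% Guidance law: velocity = probability current / probability density,
% evaluated at the actual configuration Q(t).
\frac{dQ_k}{dt} = \left.\frac{\mathbf{j}_k}{|\psi|^2}\right|_{Q(t)},
\qquad
\mathbf{j}_k = \frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\psi^* \nabla_k \psi\right)
```

Writing the wavefunction in polar form, psi = R exp(iS/hbar), this velocity is just grad S / m, which is the identification of particle momentum with the gradient of S mentioned above.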

**(3) - particle creation and annihilation**

There is some overlap between this and the first objection, so hopefully I can be brief here. You make some erroneous assertions in your objection. In a field ontology, it is not true that a creation or annihilation event corresponds to a discontinuous change in the field. It is important to recognise that a PWT with a field ontology means that, in addition to the quantum state (the analogue of the wavefunction in NRQM), there is a definite field configuration (the analogue of the particle positions in NRQM). States that we observe as particles would really be regions of the field having a certain configuration (e.g., a smooth "bump" in the field value). And the field can clearly evolve continuously between states corresponding to different numbers of particles (e.g., from one "bump" to two, or from none to one).

You assert that QFT predicts that particle creation and annihilation are instantaneous - I think this is false, and would ask you to provide evidence. By your own description, it seems that what QFT can predict is the probability that some incoming state will (at a later time!) be observed to have evolved to some outgoing state; even for a state at a given time, the measurement process always takes a finite amount of time, and you cannot rule out that changes occurred over that time interval. Moreover, the quantum state itself does not change instantaneously, but evolves continuously through superpositions of different states, as I've already argued.

Your criticism of the Bell-type QFT models with stochastic creation and annihilation events (that it "loses its philosophical attractiveness") is weak, and predicated on a misunderstanding - the pilot wave, i.e., the quantum state, is not affected by these events at all; this is not comparable to spontaneous collapse theory. The "discontinuities" are only in the particle motions, but understood properly as physical events, they are hardly objectionable! They are literally just the kind of processes you say QFT predicts: pair creation and annihilation, electrons emitting or absorbing photons, etc - only in PWT they are definitely and actually occurring in certain places at certain times, whereas for the orthodox theory any realistic description of this kind is of questionable legitimacy, given its paltry ontological resources.

**(4) - indeterminism and calculation of amplitudes**

As I understand things, the Fock state (or maybe this would be an arbitrary superposition of Fock states; I'm afraid I'm not clear on the terminology) can be represented as a function that assigns an amplitude to there being certain numbers of particles in certain regions. (That would be for a scalar particle without any internal symmetries; things are more complicated when there is higher spin or multiple components involved.) Basically, an amplitude for there to be zero particles, and then a 1-particle wavefunction weighted by the overall amplitude for there to be one particle, and then a 2-particle wavefunction, and so on. So I don't understand this objection: it looks to me like there is a perfectly good notion of a probability density in Fock space. (Perhaps you are talking about the probability amplitude that there is a particle in a given region of space regardless of however many other particles there are elsewhere? But that isn't the probability that PWT is concerned about, even in the non-relativistic theory - the relevant probability is the probability that the entire system is configured in a certain way - the probability of all the particles being where they are.)
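As a sketch of what this looks like (my notation, for a single scalar field, so treat it as illustrative rather than authoritative): a Fock-space state can be written as a sequence of n-particle amplitudes, which supplies exactly the probability density just described:

```latex
% A Fock-space state as a sequence of n-particle wavefunctions
% (each \psi^{(n)} normalised to 1):
|\Psi\rangle \;\leftrightarrow\;
\left( c_0,\; c_1\,\psi^{(1)}(x),\; c_2\,\psi^{(2)}(x_1,x_2),\; \dots \right),
\qquad \sum_{n=0}^{\infty} |c_n|^2 = 1
% The probability density for the system to be in the n-particle sector,
% configured at (x_1, ..., x_n), is then
\rho_n(x_1,\dots,x_n) = |c_n|^2\, \left|\psi^{(n)}(x_1,\dots,x_n)\right|^2
```

The probability of being in the n-particle sector at all is the integral of rho_n over that sector, i.e., |c_n|^2.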

Moreover, within each n-particle sector, it is perfectly possible to take derivatives of the quantum state in the directions of all the different particle coordinates, and so you can have probability currents derived from the quantum state; and between discrete sectors of the configuration space, the analogue of probability current is the stochastic jump rate. (The jumps are also associated with sources and sinks of probability within each sector, e.g., there could be a sink of probability at locations in the configuration space where an antiparticle is in the same location as a corresponding particle.)
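For reference, the stochastic jump rate takes the following form as I understand it from the Dürr-Goldstein-Tumulka-Zanghì papers on Bell-type QFTs (hedged accordingly; consult those papers for the precise measure-theoretic statement):

```latex
% Jump rate from configuration q to q' at time t, where x^+ = max(x, 0),
% P(q) projects onto the configuration q, and H_int is the interaction
% Hamiltonian; the positive part ensures the rates are non-negative.
\sigma(q' \mid q, t) =
\frac{\left[\tfrac{2}{\hbar}\,\mathrm{Im}\,
  \langle \Psi_t |\, P(q')\, H_{\mathrm{int}}\, P(q)\, | \Psi_t \rangle\right]^+}
{\langle \Psi_t |\, P(q)\, | \Psi_t \rangle}
```

Note how this parallels the guidance law: a current-like quantity (the numerator) divided by the probability density of the present configuration (the denominator).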

Your final sub-point here is that amplitudes are conditional in QFT but need to be objective for PWT. But it is not true that amplitudes are necessarily conditional in QFT: that is just an artifact of the way QFT is normally done, for convenience. And yes, you could just take the input state to be the initial state of the universe and get the objective amplitudes: this will get the same answers for PWT, because yes, the quantum state evolution in PWT is deterministic.

**Matt's response (part 4)**

**(5) - locality**

Now we get to the locality objection. I appreciate that you've anticipated part of my response: orthodox quantum theory itself (QFT included) is non-local. (I encourage the interested reader to take a look at Travis Norsen's article "J. S. Bell's Concept of Local Causality" [4] for more on that.) Since you grant that, I'm a little puzzled by your paragraph about adding terms that violate Lorentz symmetry to the Lagrangian. Orthodox QFT is non-local without having any such terms; the non-locality is a quantum effect. Ultimately it comes from the fact that the ontology of QFT (such as it is) is non-local; the quantum state, the only reality unambiguously postulated by orthodox QM, cannot be located in space. In fact the quantum state is defined on a spacelike hypersurface, which is clear from the Schrodinger picture, so in a way orthodox QFT has a privileged foliation just as much as PWT does (at least if it treats the quantum state as ontic). This is even more clear if you consider wavefunction collapse as something real (where does the collapse happen? the quantum state does not have spatial parts); which is part of why most physicists don't consider wavefunction collapse as something real.

You can change to a different frame in QFT, but as far as I can tell, if the quantum state is ontic, this is actually a change in the ontology and not just a re-description. In the real world, one of the states would be the correct one. The only reason this isn't obvious is that QFT only gives probabilistic predictions rather than saying that anything definitely happens. You can change to a different frame in PWT and get all the statistics correct as well, but again, only one of the states would be correct, and again, this is more obvious in PWT because it has local beables and not just the quantum state.

Your attempt to get QFT off the non-locality hook is unconvincing. Whether it is substance or event causation, the causes and effects in question are at spacelike separation: from one frame you have A causing B, and from another frame you have B causing A. (Where, e.g., A and B are the measurement outcomes in an EPR-like experiment.) It cannot logically be both, which means you either have to deny causation happens between them at all (leaving the observed correlations unexplained), or only one of them is correct and thus QFT violates Lorentz symmetry.

So QFT is the quantization of a classical field theory which is local (i.e., a theory with a Lorentz-invariant Lagrangian), but it turns out that it is not a local theory. Specifically, the non-locality is made apparent when you try to describe what actually happens in space and time: when you look at not just the evolution of the quantum state, but observable events in the physical world. (And the fact that the non-locality is only revealed in the statistics of many events rather than in individual events is irrelevant; the statistics are aggregated from the individual events and they demonstrate that the individual events must have a certain character, as the EPR argument and Bell's theorem prove.)

Similarly for pilot-wave theory: it has the exact same Lagrangian, Hamiltonian operator, and quantum state as the orthodox theory, and that part of the theory is just as Lorentz-invariant as the orthodox theory. Specifically, it is completely false to say that PWT must include terms which violate Lorentz symmetry in the Hamiltonian and Lagrangian. Rather, the non-locality arises in the dynamics of what is happening in space and time: PWT gives the appearance of being "more non-local" than orthodox theory only because it actually has such dynamics, while the orthodox theory hides behind a veil of ontological obfuscation. (I realise I am starting to harp on this, my apologies!)

When you say "even if a pilot-wave model were constructed from the Standard Model Hamiltonian, it would just have this new fine-tuning problem", it isn't clear what supposed problem you are referring to. (Maybe it refers to the next paragraph about foliations?) Empirical Lorentz symmetry would be given without any special tuning as long as the PWT reproduces the predictions of QFT; in NRQM, there is no special tuning required for PWT to reproduce the predictions of quantum theory (it is basically a direct consequence of how PWT works), so it seems fair to me not to expect there to be any fine-tuning required to produce empirical Lorentz symmetry in a PWT that covers QFT.

And this partially answers the question of why the PWT produces empirical Lorentz symmetry when that symmetry is not present in the underlying ontology: Lorentz symmetry is present in part of the theory (the Hamiltonian) and a natural consequence of this turns out to be the reproduction of that symmetry at the empirical level. The remaining question is why that symmetry is present in the Hamiltonian (and one can equally ask this question about orthodox QM!). From one side, the answer to this question is that the evidence tells us to put it there. From another side, I would say it is because that is a good way to make a universe. God said, "Let there be light." (And it is hard to have light without Lorentz symmetry.)

Even if that isn't a satisfactory answer, the notion that this problem is even worse than the problems that afflict orthodox QM is at best a subjective opinion. From my perspective, since quantum theory is already demonstrably non-local, making the way that non-locality manifests clearer and less mysterious is an advantage of pilot-wave theory, not a liability.

**(6) - gauge symmetry**

Part of your objection from gauge symmetry rests on a conceptually problematic notion of gauge transformations: David Wallace argues convincingly that gauge transformations must be understood as acting on the field values, and not on the phase of the quantum state (see his "QFT, Antimatter, and Symmetry" [5]). So even if one were pursuing a Bohm-type theory, there would be no worry about the phase of the wavefunction (defined on the very high-dimensional configuration space) being absorbed by a gauge transformation (which changes the phase of the field values in physical space).

If the gauge fields are among the local beables, I agree that there seems to be no way for PWT to proceed without choosing a particular gauge - one could hope that the results could be shown to be independent of which gauge you choose (at least for all gauge-independent quantities), but it might be very hard to construct such a theory. Another response parallels the response to non-locality - choose a gauge (analogous to a privileged foliation) and argue that the gauge symmetry is emergent because of the way the Hamiltonian is constructed - though the parallel is not exact, and this response seems much less compelling in the gauge symmetry case.

Another possibility would be to try to construct a PWT where the local beables only include gauge-invariant quantities. One option in this vein would be to not include values for the gauge fields among the hidden variables at all - Bell's suggestion was to just use the fermion numbers. (This is somewhat akin to viewing the electromagnetic field in classical mechanics as always being produced by charges, or even as just a calculation device for keeping track of the action-at-a-distance between charges.) In this case, the degrees of freedom of the gauge fields would only be present in the quantum state, which might make dealing with the gauge symmetry easier and closer to standard QFT. This is similar to how the particle spin degrees of freedom in the usual formulation of non-relativistic PWT are only present in the quantum state, not in the hidden variables.

Personally, I would find it much more aesthetically pleasing if photons and electrons were on equal ontological footing in the theory (especially if a field ontology for fermions ended up being possible after all), but I don't see anything logically or conceptually wrong with an approach that treats them differently in this way. (Note also that determinate values for the gauge field may potentially be derived in this approach from the quantum state and the hidden variables, to flesh out the image of the physical world, but those values would have something of an epiphenomenal character, similar to the local ontology in spontaneous collapse theories.)

**(7) - renormalisation**

Regarding renormalisation, as far as I can tell, the greatest difficulty would be in figuring out the relationship between the fundamental ontology and the behaviour at low energy where we make our observations. I think this is possible, however. Especially since I've argued that starting from a regularized QFT is a legitimate project, the technical details should be surmountable.

-------

And that's it! Obviously pilot-wave theory still has some difficulties, and has its work cut out for it. Ultimately just working with a regularized QFT is not fully satisfactory. Because of how bound up pilot-wave theory is with a scientific realist perspective, it cannot be completed before the deeper theory that underlies the Standard Model is found. (And maybe not even then; depends on what that theory looks like.) But I think it is a worthy perspective which still has the potential to be useful and fruitful.

**Matt's response (links)**

Here are links to the articles I referenced:

[1] Travis Norsen, "On the Explanation of Born-Rule Statistics in the de Broglie-Bohm pilot-wave theory" - https://www.mdpi.com/1099-4300/20/6/422

[2] David Wallace, "In defence of naivete: the conceptual status of Lagrangian QFT" - https://arxiv.org/abs/quant-ph/0112148

[3] Gerard 't Hooft, "The Conceptual Basis of Quantum Field Theory" - https://webspace.science.uu.nl/~hooft101/lectures/basisqft.pdf

[4] Travis Norsen, "J. S. Bell's Concept of Local Causality" - https://arxiv.org/abs/0707.0401

[5] David Wallace, "QFT, Antimatter, and Symmetry" - https://arxiv.org/abs/0903.3018

**Thanks to Matthew**

Thanks Matthew for your detailed reply. I will respond in due course, although as you can appreciate it will take me a while. But thanks anyway -- I recognise that you know more about the pilot wave interpretation than I do, so I appreciate the time and effort you have put in to respond to this. Even if you don't convince me of your position, you will at least force me to phrase my arguments better (and maybe qualify them).

Incidentally, you keep comparing the pilot wave interpretation against the orthodox interpretation. I quite agree that the orthodox interpretation is problematic, and I don't accept it myself. I use the instrumentalist variant of the Copenhagen interpretation as a benchmark. We know that it makes the correct experimental predictions, so it is useful as a judge of whether any alternative interpretation makes the correct predictions. If it is not at that level equivalent to the instrumentalist Copenhagen interpretation, then it is not (in my view) a live option. But clearly the Copenhagen interpretation is not a satisfactory account of what is really going on behind the scenes. We need a viable alternative. As stated, I am not convinced by the spontaneous collapse or many worlds interpretations. I'll discuss a few other approaches in the next posts -- I'm just getting started with consistent histories -- before getting on to my own approach. The pilot wave model would be good, if it can be reconciled with relativistic QFT. I'll respond to you in detail over the next week or two as I get time.

**Response to Matthew Part 1**

Dear Matthew,

First of all I want to really thank you for your continued patience and interaction with me. You obviously are more familiar with the pilot wave interpretation than I am, and I am grateful to you for pointing me in the direction of the extra papers and resources. Sorry for taking time to respond, and I doubt that I will go over all your points in this one post. I will post further replies as I get the chance.

Thanks in particular for your clarifications and corrections in this first part of the response.

I should note that I have edited the post above to give a few more details on de Broglie's version of the pilot wave model, which I think are going to be useful for our discussion. Once again let me know if you are happy with what I have to say (up to the last paragraph of the edit, which I expect you to disagree with).

The reason I stated that de Broglie's formulation is not as transparent as Bohm's is that Bohm's formulation is exactly equivalent to the non-relativistic Schroedinger equation. De Broglie's formulation isn't, and requires further assumptions, such as the equilibrium assumption. These assumptions might be justified, but they are still additional assumptions on top of what is required for quantum physics. Most presentations of it that I have seen have also given the particle equation and stated that it preserves the link between the particle distribution and the wavefunction, but without actually proving that. Hopefully my edit is good enough to provide the proof. Bohm's interpretation is also what I was introduced to in undergraduate physics (and I don't think that is just me -- if you look at Sabine Hossenfelder's presentation on YouTube, she also references Bohm's interpretation). So when I think of pilot wave theory, it is what first comes to mind. Perhaps physicists are taught to focus on this one because it is the weaker of the two interpretations, and it is not in favour with physicists. I also think that Bohm's version of the interpretation has a clearer analogy with classical mechanics. You obviously say that that analogy is not so important, but I will discuss that later.
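For readers following along, the relationship between the two formulations can be sketched as follows (standard textbook material, so hopefully uncontroversial): writing the wavefunction in polar form and substituting into the Schroedinger equation yields two real equations,

```latex
% Polar decomposition of the wavefunction:
\psi = R\, e^{iS/\hbar}
% Continuity equation for the density R^2:
\frac{\partial R^2}{\partial t}
  + \nabla \cdot \left( R^2\, \frac{\nabla S}{m} \right) = 0
% Quantum Hamilton-Jacobi equation; the last term is Bohm's
% "quantum potential":
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R} = 0
```

Bohm reads the second equation as classical mechanics modified by a quantum potential, with particle momentum p = grad S; de Broglie instead simply postulates the velocity v = grad S / m as the guidance law.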

I agree with you that in the Copenhagen interpretation there is a problematic switch between an ontic and epistemic understanding of the wavefunction. This is one reason why I prefer psi-epistemic interpretations of quantum physics.

Thanks for reminding us that the pilot wave exists in configuration space rather than 3-d space. This is obviously true for all psi-ontic models of quantum mechanics.

I agree that the pilot wave is equivalent to the wavefunction in standard QM. This, I think, strengthens my case that it is never directly observed, since we never directly observe the wavefunction. We don't even directly observe the modulus square of the wavefunction. What we observe are individual particle states hitting detectors. It is only when we have a large number of those events that the pattern emerges that allows us to measure the modulus square of the wavefunction. But that is just an average over an ensemble. The actual value of the wavefunction (if we treat it as ontic) for any individual event is unknown (beyond knowing that it is not zero).

So now why I think the link to classical mechanics is important. The reason is that mechanistic philosophy -- not necessarily generated from Newton's equations -- is well known and understood by philosophers, and has been extensively studied. If you have an interpretation of quantum physics that is mechanistic, then essentially every other problem with your interpretation has already been solved. If, on the other hand, you are saying that pilot wave quantum physics is not mechanistic, then there is still work to do to develop the full philosophy and show it is self-consistent. I have no objections to abandoning mechanism -- I do that myself by linking my own interpretation with Aristotelian philosophy. But you need to show that there is a consistent and complete philosophy behind whatever your interpretation is. That might be Aristotelian -- if so, then good for you. But you still need to do the work of formally linking the physics with a particular metaphysics and showing their consistency. (Also, from my theist perspective, this is important because, at least historically, mechanism was key in developing deism and then atheism, and is an important assumption behind at least the traditional forms of those ideologies. It is much easier for the atheist if he can find a mechanistic interpretation of quantum physics. This is why, I think, the many worlds interpretation is so appealing to atheists. Once they dismiss the pilot wave model, it is I think largely the only real option they have on the table.)

Take, for example, spontaneous creation and annihilation of particles, and suppose we adopt a version of the pilot wave model that includes such events as stochastic processes. This obviously undermines determinism, which means that the theory is no longer mechanistic. It is also a significant departure from the philosophy of the quantum mechanics version of pilot wave theory, which is deterministic, and that (to my mind) is one of its selling points to philosophers. So in the deterministic interpretation of the pilot wave model, you have the evolution of the pilot wave and an evolution of the particle, and that provides a complete picture of the dynamics, with everything determined by the pilot wave and particle and their equations of motion. Add in spontaneous particle creation and annihilation events, though, and the evolution of the pilot wave and evolution of the particle are not sufficient to understand the physics. You also need whatever causes these creation and annihilation events, so there is some additional beable beyond the pilot wave and particle. Secondly, if the physics is not deterministic, you can't say this is the pilot wave at time t_0 and so this is what it is going to be at time t_1. You can only say that there is a particular amplitude for the pilot wave being something at time t_1 (you have, of course, to express your uncertainty in terms of amplitudes rather than probabilities because this is quantum physics and using probabilities will give you the wrong answer). This effectively means that you need an epistemic wavefunction to understand the possible future states of the pilot wave. And then I don't understand what advantage the pilot wave model has over a fully psi-epistemic interpretation.
You are right, of course, that this is also a problem for the Copenhagen interpretation and its derivatives: one of my main objections to Copenhagen was its incompleteness, in that it had both Schroedinger evolution of the ontic wavefunction (no problem with that) and then a second mechanism of indeterminate wavefunction collapse which cannot be explained in terms of the physical beables (so what precisely causes that?).

I agree that the pilot wave model avoids the problems of the Copenhagen interpretation. My concern is just that it introduces new problems of its own.

I think that this is an appropriate place for me to stop for tonight. I'll be working on my consistent histories post tomorrow, and am busy on Monday and Tuesday evening. Hopefully part 2 of my reply will come on Wednesday or Thursday (UK time).

**Thanks and brief comment on the edit**

Dear Dr Cundy,

I will wait to respond to your response in full once you have had a chance to complete it. Just wanted to add that I meant to say in my earlier comments, thanks again for these posts and the discussion! It is engaging and helps me sharpen my own ideas. No rush in responding, either; please take all the time you need.

I will comment briefly (I hope) on the edit to your original post regarding the derivation of the "equivariance" property of PWT (that of psi-squared distributed particles remaining psi-squared distributed under the guidance law): something does not seem quite right to me in the way you explain it. The context in which the equivariance property is meaningful is one where we have an ensemble of systems which are all described by the **same** quantum state (and Hamiltonian). So there should only be one psi appearing in the derivation (strictly speaking, each system has its own quantum state, but in this case they are all quantitatively identical). Otherwise, there is no one psi-squared with which to equate the probability distribution in the first place. Also, note that we're either coarse-graining or taking the limit as the number of members in the ensemble goes to infinity in order to get a smooth probability distribution for the configuration of a system randomly selected from the ensemble.

Now, you describe the probability distribution as "the expected number of particles in a given region" (of space?), which is not quite right - what PWT is concerned with here is the probability P that the configuration of the system (e.g., the 3N position coordinates of all N particles in the system) is found in a given region of configuration space (e.g., R^3N). At the end what you should have is not the rate of change of the number of particles in some region equated with the flux of particles through the boundary, but the rate of change of the probability P equated with the flux of probability through the boundary of the region in configuration space, due to the evolution of the ensemble of systems along the velocity field prescribed by PWT. And what is shown is not that the probability density obeys such an equation, but that psi-squared is a solution of that equation, and is the unique solution for which the probability density maintains the same functional relationship to the quantum state over time.

A point that perhaps should be emphasized is that the probability distribution for the configuration of the system, and the squared amplitude of the quantum state, are two conceptually distinct quantities that are contingently equated by the quantum equilibrium hypothesis. The equivariance property is that if the probability distribution is equal to psi-squared, the way it evolves over time (under the PWT guidance law) keeps it equal to psi-squared as psi evolves over time (under the Schrodinger equation).
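In symbols, the equivariance property being described is the following (a standard statement, added here for clarity, using the single-particle case to keep the notation light): both psi-squared and the ensemble distribution P satisfy the same continuity equation under the PWT velocity field, so agreement at one time implies agreement at all times.

```latex
% From the Schrodinger equation, |psi|^2 obeys a continuity equation
% with the PWT velocity field v^psi:
\frac{\partial |\psi|^2}{\partial t}
  + \nabla \cdot \left( |\psi|^2\, v^{\psi} \right) = 0,
\qquad
v^{\psi} = \frac{\hbar}{m}\,\mathrm{Im}\,\frac{\nabla \psi}{\psi}
% Transporting the ensemble distribution P along the same velocity
% field gives the same equation:
\frac{\partial P}{\partial t} + \nabla \cdot \left( P\, v^{\psi} \right) = 0
% Hence P(t_0) = |\psi(t_0)|^2 implies P(t) = |\psi(t)|^2 for all t.
```

The quantum equilibrium hypothesis supplies the initial condition P(t_0) = psi-squared; equivariance then preserves it.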

Anyways, that's all for now - I have more to say in regards to part 1 of your response, but as I said, I will hold on to that until you have gotten a chance to reply in full to my first round of comments.

**Reply to Matthew Part 2**

You start by saying "I don't believe there is the kind of fundamental divide between quantum mechanics and quantum field theory." However, later on you say "The second point is that there really is no such thing as quantum field theory in the first place ... it is not mathematically well-defined." So if you say there is no fundamental divide between QM and QFT, and QFT is not mathematically well-defined, then does that mean that QM is not mathematically well-defined? Sorry, that's a bit of a cheap shot; I recognise that this is not the sense in which you deny a fundamental divide. But you do note that there is a difference between them, as well as various similarities. If so, then the division between us on this point is in the relative importance of the differences and similarities.

Secondly, I need to make a point regarding cut-offs and regularisation. The straight-forward momentum cut-off is not used because it violates gauge invariance. That is a dead end. It can be used as a toy model, e.g. in a phi^4 QFT theory, but not in the standard model. Of course, there are other ways to perform the regularisation, and lattice gauge theory does in effect introduce a momentum cut-off (inversely proportional to the lattice spacing) while remaining gauge invariant. Lattice gauge theory is a perfectly valid QFT (albeit in U(1) electromagnetism it is not possible to define the continuum limit because of the location of the renormalisation group fixed points; one can do so in non-Abelian gauge theories), and is mathematically well-defined. However, it cannot be the actual theory of nature. The symmetry group in a LQFT is that of the hypercube, which allows additional terms in the Lagrangian. When simulating Lattice QCD (for example), we naturally use a Lagrangian which reduces to the standard continuum form, but we could in principle put in extra terms forbidden by Lorentz symmetry but allowed by hypercubic symmetry. Were reality a lattice, we would need some reason why those terms are suppressed, and the simplest answer is that reality is not a lattice. And I don't think a lattice gauge theory would work in the context of a pilot wave theory -- it is not trivial to discretise the derivative operator in a way that does not lead to "fermion doublers" (i.e. additional identical copies of each fermion), especially if we want to maintain chiral symmetry. The way to do so -- the overlap operator -- does not lend itself well to the Schroedinger picture. A discrete theory does not lend itself to a theory expressed in continuum differential equations as in the pilot wave models.
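To illustrate the doubling problem just mentioned (a standard result, stated in one dimension for simplicity): the naive symmetric lattice derivative turns the momentum p into sin(pa)/a, which vanishes at the edge of the Brillouin zone as well as at p = 0, so each lattice fermion field describes extra unwanted species.

```latex
% Naive symmetric difference on a lattice with spacing a:
(\partial \psi)_n \;\to\; \frac{\psi_{n+1} - \psi_{n-1}}{2a}
% In momentum space this replaces the momentum by
p \;\to\; \frac{\sin(p a)}{a}
% which has zeros at p = 0 AND at p = \pm\pi/a; in d dimensions the
% naive action therefore describes 2^d fermion species instead of one.
```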

The two remaining popular regularisation schemes are dimensional regularisation and Pauli-Villars. Both do introduce a momentum scale (indirectly in the case of dimensional regularisation), but this is not the same as a momentum cut-off, and neither reduces the number of degrees of freedom to something finite, so I don't think either would work for your proposal. (I'm not sure that a momentum cut-off would reduce it to a finite number of degrees of freedom; the momentum is still a continuous variable, so there are an infinite number of momentum states even over a finite domain -- but I would have to think about that.)

The second general point I ought to make is that QFT is not a "theory" in itself, but a framework within which you can construct various theories. phi^4 theory is one such theory. Lattice QCD another. The standard model another. Some of these particular QFTs are mathematically ill-defined. Others (such as lattice gauge theory, and, presumably, the QFT which is the true representation of nature) are not. Many of these theories are equivalent at the level of the Lagrangian, but not once you insert initial and final states (which have to be in some particular basis). One can call the renormalised standard model a theory which is an example of a QFT; but QFT itself is not a theory (despite the name). So when you say "QFT is a shifting sand of theories at different scales with different values of the fundamental constants" -- yes, but that doesn't mean that each one of those individual theories is a shifting sand in this sense.

You suggest, for example, that the standard model might just be an effective theory of something more fundamental. Maybe, although I don't consider that proven (it might, for example, be the flat space time limit of the more fundamental quantum gravity theory, in which case it is not an effective theory but the leading term in a series expansion for the full quantum gravity theory). However, the effective theories we know about and use are created by taking a higher order quantum field theory and integrating out various degrees of freedom. In other words, if the standard model is an effective theory, then the fundamental theory behind it would also be a quantum field theory. A different QFT, but still a QFT. That more fundamental QFT would, presumably, have to be internally consistent. The issue then isn't with the internal consistency of QFT in itself, but of those particular QFTs which require renormalisation and a careful definition of the measure, such as the standard model constructed from bare fields. [Wallace suggests string theory; to my knowledge that has been pretty much ruled out now by the failure of the LHC to find supersymmetry. Some people are just a bit slow to accept that.]

Moving on to the Wallace paper (sorry, I lack the time to review the 't Hooft paper tonight). He starts by stating,

"For better or for worse, most canonical quantum field theories are found by starting with a classical field theory and then ‘quantizing’ it. To be sure, there is something intellectually unsatisfactory about this: given that quantum theory is the more fundamental theory, we would prefer to work in the other direction, that is, to recover classical field theories from quantum starting points (see Deutsch (1984) for a development of this criticism).

Nonetheless, the classical starting point has proven a powerful method for finding QFTs,

and we adopt it here."

I agree with Wallace that a classical starting point is not the natural way of doing things. I accept that historically it was important in developing the canonical quantisation method. But it is not the approach I would take today, and to my mind it just complicates and confuses things. One can start with the premises of indeterminacy parametrised in terms of amplitudes, cluster decomposition, interactions through the creation and annihilation of particles, and a list of symmetries and the underlying space-time geometry and topology, and that is a decent starting point to define a QFT (albeit that I might have missed off some of the needed premises; I give a fuller description of this in my book). In other words, one can start from the more fundamental principles. One can then extract the classical limit and uncover a classical field theory. Starting from a classical theory and "quantising" it is not necessary and not the best way of doing it.

Wallace then discusses two main problems with QFT. The first one he skims over -- how to define the measure in an infinite configuration space. I recognise and agree that this is a problem -- as far as I know still outstanding (but my knowledge is far from perfect). It is avoided in QFTs with a finite number of degrees of freedom, such as lattice gauge theories in a finite volume, but, as stated, reality is not a lattice gauge theory. So this is a genuine outstanding problem (as far as I know; maybe there has been some development I am not aware of). But it is a problem that will affect all interpretations of QFT and all continuum QFTs -- including quantum gravity. In other words, there is likely to be some trick to get round the issue, and we just haven't been smart enough to figure it out.

The issue of the infinities that are dealt with via regularisation and renormalisation takes up most of Wallace's paper. I am not personally concerned about this, because of work done on the renormalisation group. Renormalisation is, in effect, a change of basis. The transformations from one basis to another are complex functions of momentum and all the fields (i.e. the change of basis mixes different momentum states, and also fermion and gauge field states). To perform such a transformation just modifies the Lagrangian -- it just adds additional terms, and changes the constants in front of the existing terms. We don't need perturbation theory to perform the renormalisation -- non-perturbative renormalisation is used in lattice gauge theory -- but in perturbation theory it is easier to see what is going on. Of course, in perturbation theory, we perform a regularisation, which expresses the results in terms of the cutoff, perform an approximation to the basis change (valid up to the given order in perturbation theory), and choose the parametrisation so that the results are independent of the renormalisation scale. In other words, the renormalised QFT is a valid QFT which lacks infinities and is independent of any cut-off scale. It is a well-defined QFT (if we ignore the measure problem mentioned in the previous paragraph). Thus it is not QFT itself which is the problem, but some particular QFTs, such as the standard model as it is usually expressed in terms of the bare fields. And in particular, we don't need to go to some exotic beyond-the-standard-model theory to eliminate this problem -- we already have the solution in the renormalised standard model. Of course, we regard the standard model (along with a large number of additional theories) as being in the same equivalence class as the renormalised standard model, in that their Lagrangians are equivalent to each other -- just changes in bases.
But they do differ in the sense that we need to insert initial and final states by hand into the calculation. These need to be in some particular basis. Only the basis associated with the renormalised theory is valid up to higher momentum scales and thus free of infinities, and so this is the correct basis to use. Obviously, there is more I could say about renormalisation than this, but I don't think this comment is the place to say it. Maybe I should write a full blog post on it when I get the chance.
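To sketch what I mean by scale independence (schematic only, and not tied to any particular regularisation scheme): once renormalised, physical observables cannot depend on where we put the renormalisation scale, and that is what forces the couplings to run.

```latex
% Scale independence of a renormalised observable O:
\mu \frac{d}{d\mu}\, O\bigl(g(\mu), m(\mu); \mu\bigr) = 0 ,
\qquad
\mu \frac{dg}{d\mu} = \beta(g) ,
\qquad
\mu \frac{dm}{d\mu} = -\gamma_m(g)\, m .
```

The cutoff of the bare theory has dropped out entirely; only the running couplings remain, which is why I say the renormalised theory is well defined even where the bare one is not.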

Regarding the Schroedinger picture, I agree that it is possible to construct a QFT in the Schroedinger picture (albeit for the standard model it is difficult to do so for the Maxwell term for the gauge fields, as you can see from the slightly clumsy way I treated it above -- one could fix to a gauge where the time derivative of the gauge field is zero and that might give you a Schroedinger picture, albeit one where gauge symmetry is hidden and you would have to worry about additional fictitious ghost fields). Indeed, my explanation in this post, if applied to a non-gauge theory or a gauge theory without a Maxwell term, could be expressed in the Schroedinger picture. My point is that this is not sufficient for the pilot wave theory -- you need to be able to express the evolution operator in the Schroedinger picture and in terms of differentials of the wavefunction (with all other terms Hermitian and commuting). It is that second requirement which I claim to be impossible in a QFT.

As to whether the renormalised standard model still leaves something to be desired in terms of "what is the world really made of?" and "what is really happening at atomic and subatomic scales?" -- hopefully I will address that in subsequent posts in this series.

**Reply to Matthew part 3**

First of all, sorry for the time taken to respond to this next part. Various things came up this week which required my immediate attention. I had a respite this evening, but I am not sure when this respite will end.

*(only one of which has to do with relativity specifically; the rest are indifferent between relativistic and non-relativistic QFTs, but no matter)*

I'll concede that one. Although I note in mitigation that the original motivation for QFT was to combine quantum physics with relativity, and that non-relativistic QFT is usually constructed from a relativistic QFT in the infinite-speed-of-light limit (at least, that is how I have seen it done in the few times I have interacted with it).

*But this isn't necessary, as de Broglie's 1st-order formulation of PWT demonstrates: it simply adds the guidance equation to the evolution of the quantum state (and discards the now unnecessary measurement axioms and Born rule).*

But you still have to show that it is possible to construct a guidance equation that is consistent with QFT. In my edit, I showed how that can be derived for non-relativistic QM (you had a few quibbles about my derivation, but the general approach is reasonably standard). That derivation requires substituting in the non-relativistic SE at one point. This can be extended to Dirac QM and Klein Gordon QM (albeit with some differences in the case of Klein-Gordon, as the probability current takes on a different mathematical form). But the structure of QFT is very different and there is no precise analogue to the derivation I gave. If you know of one, I would happily retract this point. If not, then this is a big omission for pilot wave theory, and it cannot be shown to be consistent with the standard model.
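For reference, the non-relativistic derivation I have in mind is the standard one: take the time derivative of |psi|^2, substitute the Schroedinger equation, and read the particle velocity off the resulting probability current.

```latex
i\hbar\, \partial_t \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi + V \psi
\;\;\Longrightarrow\;\;
\partial_t |\psi|^2 + \nabla \cdot \mathbf{j} = 0 ,
\qquad
\mathbf{j} = \frac{\hbar}{m}\, \mathrm{Im}\!\left( \psi^* \nabla \psi \right) ,
```

```latex
\frac{d\mathbf{Q}}{dt}
= \left. \frac{\mathbf{j}}{|\psi|^2} \right|_{\mathbf{x} = \mathbf{Q}(t)}
= \left. \frac{\hbar}{m}\, \mathrm{Im}\, \frac{\nabla \psi}{\psi} \right|_{\mathbf{x} = \mathbf{Q}(t)} .
```

It is the substitution of the SE in the first step that has no precise analogue in QFT, since there the spatial derivatives act on field operators rather than on the state.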

*Classical mechanics equations are not always linear, though they are differential equations.* True -- I will concede that point.

*On the other hand, as I've argued, every quantum theory can be expressed in the Schrodinger picture, seen as an ODE for the evolution of the quantum state over time.*

While I agree that the theory can be expressed in the Schrodinger picture -- albeit that becomes a little more complex when you introduce the Maxwell term -- it is disputable that in QFT it is expressed as a differential equation. At least, it is not the Fock state (which we are trying to solve for) which is differentiated with respect to space. It is more complex than just a straightforward ODE.

*But care must be taken not to fall into confusion: the derivatives in the Hamiltonian operator are not derivatives in physical 3-d space. Rather, they are derivatives in configuration space, the space of possible coordinate values for the configuration of the system.* I haven't made that clear in my post, but I accept that.

*In this representation, creation and annihilation operators are represented with derivatives, analogously to the quantum harmonic oscillator; and quantum states are spread out in configuration space (not physical space).* While this seems plausible (at least for spin 0 Bosons; not sure about gauge fields and not sure about Fermions), it still has the problem that the derivation of the guidance equation requires derivatives in physical space acting on the wavefunction/Fock state. You still need to show how the spatial derivative in the Hamiltonian gets applied to the Fock state.
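For the free spin-0 case, where the representation does seem plausible, the analogy with the harmonic oscillator runs roughly as follows (a sketch, with omega_k = sqrt(k^2 + m^2)):

```latex
a(\mathbf{k}) \sim \frac{1}{\sqrt{2\omega_{\mathbf{k}}}}
\left( \omega_{\mathbf{k}}\, \phi(\mathbf{k}) + \frac{\delta}{\delta \phi(\mathbf{k})} \right) ,
\qquad
a^\dagger(\mathbf{k}) \sim \frac{1}{\sqrt{2\omega_{\mathbf{k}}}}
\left( \omega_{\mathbf{k}}\, \phi(\mathbf{k}) - \frac{\delta}{\delta \phi(\mathbf{k})} \right) ,
```

acting on wavefunctionals Psi[phi]. But note that these are functional derivatives in field-configuration space, not derivatives in physical space -- which is precisely why the wave-mechanics derivation of a current in physical space has no direct analogue here.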

*A possibility that does work for fermions is the one suggested by Bell: discretize physical 3-d space and have particles jumping around in the lattice. The creation and annihilation operators have a discrete representation, but the configuration space, and motion within it, is discrete as well, so it works.* As I stated, though, there are good reasons to suggest that reality is not a lattice gauge theory. The symmetry group for a lattice and in the continuum is different; there are issues with fermion doublers and chiral symmetry, and also taking the continuum limit is not always straightforward. Lattice gauge theory is a perfectly good means of performing non-perturbative calculations in non-Abelian theories, but only because we are simulating the most generic Lagrangian possible in the continuum theory rather than in the lattice theory.

*You question how creation and annihilation operators are to be interpreted in PWT, and the answer is simple: they are operators acting on the quantum state, same as in the orthodox theory. The difference is that, related to these operators, PWT can have actual particle creation and annihilation represented by the "hidden variables", the beables in addition to the quantum state.* I think you miss my point here. The point is that you have two separate evolutions -- the pilot wave and the guidance equation for the particles. Both need to have a creation or annihilation event at the same moment in time and the same point in space. My question was how you justify this seeming coincidence. In the psi-epistemic interpretations which I prefer, there are also clear beables -- the physical particles -- but no need for the pilot wave. A creation or annihilation event acts on the beables. Of course, we don't know what creations and annihilations happen between the initial and final state -- the true quantum history -- so we have to express possibilities as amplitudes, likelihoods, or pre-probabilities (however you want to phrase it). But that doesn't change that one of those possible histories corresponds to what happens in reality.

I would also disagree that the quantum state is gradually transitioning, but more on that later.

*There are methods for working out the probability current, and the probability density is usually immediately apparent from the form of the quantum state.* Again, it would be helpful if you can provide a reference for this statement for the particular example of the standard model.

*You assert that QFT predicts that particle creation and annihilation are instantaneous - I think this is false, and ask you to provide evidence.* This arises from the structure of the theory. Recall that a viable PWT has to be equivalent to that structure, so it would also apply there. The creation and annihilation operators are local in both space and time. They are not smeared out over a region of space or a period in time. If they were, it would lead to a different theory. In particular, the conservation of energy depends on the locality in time and the conservation of momentum on the locality in space. Equally, the path integral formulation, where reality is divided into a number of infinitesimal time slices which are then integrated over one by one, has complete creation and annihilation of particles within a single time slice. (Even the possibility of locomotion of particles from one location to another is constructed using creation and annihilation operators: you destroy the particle in one place, and create it in the next location, and unless you are claiming that the particle is in two places at once that has to be instantaneous.) The creation operator is not smeared over several time slices. Plus, of course, there is not just a single quantum field -- you have fields for electrons, each quark, neutrinos, photons and the non-Abelian gauge Bosons, the Higgs, plus anything else that is out there. Even in a field ontology, a switch from a field with no excitations to one with one excitation (e.g. photon emission from an electron) is a discontinuous change -- you cannot have a field representing half a photon at any moment in time. Or a W^- decay to an electron and anti-neutrino.
Either it is instantaneous -- for both the pilot wave and the fields themselves -- or you are going to have a moment where you have half an electron present as the electron field gradually deforms (how else would you have a continuous evolution between no electrons and one electron?).
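To illustrate the locality I mean, take the QED vertex as an example (schematic; normal ordering and coupling conventions suppressed): all three fields in the interaction term sit at the same spacetime point.

```latex
H_{\mathrm{int}}(t) = e \int d^3x \;
\bar{\psi}(\mathbf{x}, t)\, \gamma^\mu\, \psi(\mathbf{x}, t)\, A_\mu(\mathbf{x}, t) .
```

Smearing this vertex over a region -- coupling fields at different points through some kernel -- would be a different theory, and would spoil the exact momentum conservation that follows from locality in space.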

The reason I believe stochastic creation and annihilation events undermine the attractiveness of pilot wave theory is that the theory is no longer complete. In non-relativistic QM, the pilot wave and particles between them explain everything, with the deterministic SE/guidance equation. Now you have the pilot wave, particles/fields, the deterministic guidance equations, plus whatever is causing these otherwise inexplicable jumps in particle number. The point is that you have two different and unrelated mechanisms acting on the quantum state. Non-relativistic pilot wave theory just had one (the deterministic evolution); consistent histories or other psi-epistemic models just have one (indeterminate events, without any deterministic evolution); pilot wave theory plus spontaneous creation and annihilation joins Copenhagen and spontaneous collapse models in having two independent mechanisms competing in their actions on the same physical state. In other words, it complicates the mechanisms and consequently the philosophy.

*As I understand things, the Fock state (or maybe this would be an arbitrary superposition of Fock states; I'm afraid I'm not clear on the terminology) can be represented as a function that assigns an amplitude to there being certain numbers of particles in certain regions.* My point was that the Fock state isn't a function but a state. There is a difference between them in quantum physics, particularly QFT. A state is a particular vector on the Hilbert space. A function is a map between two different representations. My main point was that to get an amplitude in quantum wave mechanics, one just uses the wavefunction, which happens to be what is evolved by the Schroedinger equation. To get an amplitude from the Fock state |psi>, one needs to apply a particular final state <phi|, and then normalise it (with a factor I will denote as Z). It is the state |psi> which evolves under the Schroedinger evolution, not the amplitude <phi|psi>/Z. The calculation of the probability current, however, at least in wave mechanics, assumes that it is the amplitude that evolves under the Schroedinger equation.
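Schematically, the contrast I am drawing is:

```latex
\text{wave mechanics:} \quad
\psi(x, t) = \langle x | \psi(t) \rangle
\quad \text{(an amplitude, and the object evolved by the SE)} ;
```

```latex
\text{QFT:} \quad
\mathcal{A} = \frac{\langle \phi | \psi(t) \rangle}{Z}
\quad \text{(requires a chosen final state } \langle \phi | \text{; only } | \psi(t) \rangle \text{ evolves)} .
```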

*And yes, you could just take the input state to be the initial state of the universe and get the objective amplitudes: this will get the same answers for PWT, because yes, the quantum state evolution in PWT is deterministic.* Except when you have spontaneous creation and annihilation of particles.

*Since you grant that, I'm a little puzzled by your paragraph about adding terms that violate Lorentz symmetry to the Lagrangian.* I think my point here was that if the evolution of the particle can be described by an equation of motion, then one would also be able to construct a Lagrangian formulation and a principle of least action that would describe the motion of that particle. That Lagrangian would have to include non-local interactions to give rise to a non-local equation of motion.

You then discuss the ontology of quantum physics being non-local. This is only true in some interpretations of quantum physics, and not a result of the theory itself. Again, a psi-epistemic interpretation does not treat the wave-function/Fock state as necessarily representing a real and physical state (this will be clearer when I discuss these interpretations).

*You can change to a different frame in QFT, but as far as I can tell, if the quantum state is ontic, this is actually a change in the ontology and not just a re-description.* Which, to my mind, is a good reason to not treat the quantum state as ontic. As stated, Lorentz symmetry is fundamental to the modern construction of quantum physics. If Lorentz symmetry is to be understood as fundamental, then there cannot be an objectively preferred reference frame. If it is not fundamental, then you are left with the question of why the true theory is one which satisfies it, rather than one of the numerous non-Lorentz-symmetric theories which are consistent with the foliation of time.

A psi-epistemic theory does have local beables, but the quantum state (wavefunction/Fock state) is not among them. The beables are point-like particles (at least point-like when represented in a location basis). There is no non-locality in the ontology, and no reason to not consider Lorentz symmetry as fundamental to space-time and thus informing the various possible movements of the particles.

*From one frame you have A causing B, and from another frame you have B causing A.* A and B represent measurement outcomes here. But in my philosophy, neither A causes B nor B causes A. They are both quantum events which depend on the combination of matter and God's free action. (Obviously, as a theist I am happy to use God here; a non-theist would attribute it to the universe or something that plays the same role as God in theism in upholding the laws of physics.) It is not that one event causes another, but God is the cause of them both. The amplitudes we compute from quantum physics are a reflection of the predictions we make about how God acts in certain circumstances -- stochastic, of course, because God has free will, but nonetheless constrained by various conservation laws which arise from the underlying symmetries that reflect God's relationship with the universe.

And this is, of course, why I make the distinction between substance causality, which is explicable entirely in terms of physical substances and thus is local, and event causality, which depends on both the physical state of the universe and God. God is obviously timeless and omnipresent, so the principle of locality does not apply, and thus we expect non-locality.

*So QFT is the quantization of a classical field theory which is local (i.e., a theory with a Lorentz-invariant Lagrangian), but it turns out that it is not a local theory.* Again, I would disagree with saying that QFT is a quantisation of a classical field theory. QFT is more fundamental; we have to start our explanation with that, and derive the classical field theory as a limiting case of QFT. So the question remains: why would a non-Lorentz-symmetric fundamental theory necessarily have a Lorentz-symmetric classical field theory as its limiting case?

*And the fact that the non-locality is only revealed in the statistics of many events rather than in individual events is irrelevant; the statistics are aggregated from the individual events and they demonstrate that the individual events must have a certain character, as the EPR argument and Bell's theorem prove.)* I disagree here on two counts. Firstly, one does not necessarily need statistics to show the non-local correlations. If you measure one particle as spin up along a particular axis, the other one is going to be spin down along the same axis. This by itself, of course, is not enough to demonstrate non-local action. But equally, when discussing causality, there is a clear distinction between the individual event and the ensemble. Individual events have causes (albeit that that cause is not physical in any interpretation except the pilot wave and perhaps many worlds). An ensemble -- particularly if we discuss, as we must since this is what QFT predicts, the ensemble generated after an infinite number of events -- is an abstraction. As such it does not have a cause in the same sense that an individual event has a cause. It is wrong to talk about a statistical ensemble as though it were a thing. The ensemble does not show that any particular individual event must have a certain character -- aside from that it cannot occur in a state that has probability zero. Other than that caveat, the individual events are entirely indeterminate and unpredictable. Bell's theorem shows that the individual events cannot be driven by local hidden variables within the individual physical particles. That does not tell us what does drive them, or tell us much about their character.
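The single-pair correlation I have in mind is just the spin singlet:

```latex
| \psi \rangle = \frac{1}{\sqrt{2}}
\left( |{\uparrow}\rangle_A |{\downarrow}\rangle_B
     - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \right) .
```

This state has the same form along any measurement axis, so a single spin-up outcome at A guarantees spin-down at B along that axis; no statistics are needed for the correlation itself, only for ruling out local hidden variables via Bell's inequalities.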

*While the orthodox theory hides behind a veil of ontological obfuscation. (I realise I am starting to harp on this, my apologies!)* No need to apologise, I am harping on myself. I agree that Copenhagen lies behind a veil of ontological obfuscation. I disagree that that is the only alternative to pilot wave theory.

The fine tuning I discuss is as alluded to above. If we treat Pilot Wave Theory as fundamental, and pilot wave theory in itself fundamentally violates Lorentz symmetry, then there is no reason to think that the empirical results would satisfy Lorentz symmetry without some massive coincidence. There are plenty of non-Lorentz-symmetric theories one could construct from the underlying principles behind pilot wave theory. The other interpretations of QFT do not have this problem because they regard Lorentz symmetry as one of the fundamental axioms behind the theory (related ultimately to the ontology of physical space-time). This explains why the theory appears to be Lorentz symmetric. But if you don't regard Lorentz symmetry as fundamental, then you have an additional task in explaining from your basic axioms why it appears to emerge. You respond to this by stating that we put it in because the evidence tells us to put it there. But that is not the question I am asking. My question is: how likely is it that the universe is as it is, given the fundamental axioms that pilot wave theory needs to function? Lorentz symmetry is not one of those axioms. Indeed, the fundamental ontology of pilot wave theory breaks Lorentz symmetry, as you have stated, so it cannot be an axiom. I agree that some other interpretations might have the same problem, but surely an interpretation where Lorentz symmetry is a direct consequence of its axioms is to be preferred as an explanation of physics compared to one where it is just an unlikely coincidence put in to fudge the theory?

The problem isn't that pilot wave theory makes the non-locality less mysterious. It is that it does so by making Lorentz symmetry more mysterious, so what it gains in one place it loses in another. Surely an interpretation which naturally explains both non-locality in event causality and Lorentz symmetry in substance causality is to be preferred?

*Part of your objection from gauge symmetry rests on a conceptually problematic notion of gauge transformations. David Wallace argues convincingly that gauge transformations must be understood as acting on the field values, and not on the phase of the quantum state.* But it is still expressed as a unitary rotation of our representation of something -- perhaps it is my lattice gauge theory background coming out here, where gauge fields are expressed as unitary matrices. In pilot wave theory, that something has to be either the representation of the "particles" or the representation of the quantum state, as those are the only things around. As the particles are represented by a set of real numbers, a unitary transformation on them makes little sense, so that just leaves ...

I agree that the only ways that pilot wave theory could proceed are as you suggest. To fix to one particular gauge and assert that that gauge is the correct one to use just leads to the same problem as you have with locality: if the representation of the ontology lacks gauge symmetry, then why should it emerge in the dynamics without some special pleading? An interpretation which does not require that special pleading is surely to be preferred. To treat gauge invariant quantities as the beables (not totally unrealistic, as we only observe gauge invariant quantities) seems to exclude the wavefunction/Fock state/pilot wave as a beable as that is not gauge invariant. I have no problem with that, but I imagine that you might.

With regards to renormalisation, I agree that it is not necessarily an insurmountable problem for pilot wave theory, but it is an additional complication. I see renormalisation in terms of changes of basis to one which mixes the "bare" fermion and gauge fields, as well as fields at different momenta. This is basically a generalisation of Wilson's spin-block transformation approach to renormalisation. As such, the process of renormalisation would be difficult if you restrict yourself to one basis, but the end result is obviously that one particular basis corresponds to the ontology, and you would say that that basis represents the actual physical state. It would, however, complicate writing down the pilot wave theory, as you would need to work with the Hamiltonian in the renormalised basis, which we don't actually know. Not insurmountable, but another complication.

So, anyway, that's my response to your response. Hopefully it wasn't too disorganised, and apologies for referring to interpretations which I am not going to flesh out until later in this series (consistent histories is up next, which is closer to how I view things, and is the comparison you probably should be making with pilot wave theory rather than Copenhagen when debating with me.)

**Matt's response, round 2 (part 1)**

Thank you for your response to my first round of comments; here is my reply.

***Comparison with orthodox QM***

I realise that you don't accept the Copenhagen interpretation; my comparisons between pilot wave theory and orthodox QM are an attempt to illustrate the motivation for considering PWT despite the difficulties it faces (which I admit are there, although I don't believe they are as formidable as you judge them to be). I'm also not only trying to defend PWT to you, but to anyone who might come across this post, and for that purpose it is useful to compare to orthodox QM since it is still something of the default position.

I certainly intend to compare PWT to your preferred interpretations when you lay those out.

***More on Bohm vs de Broglie***

You say, *"The reason I stated that de Broglie's formulation is not as transparent as Bohm's is that Bohm's formulation is exactly equivalent to the non-relativistic Schroedinger equation."* But this seems obviously false: Bohm's PWT has definite particle trajectories; the Schrodinger equation doesn't. How can they be exactly equivalent? In terms of theory content, the following is a better summary of the situation (apologies in advance for multiplying acronyms):

Bohm's theory:

- the non-relativistic Schrodinger equation (for spin-0 particles only), written as a pair of coupled equations for the phase S and magnitude R, specifically:

--- the continuity equation for P=R^2

--- the "quantum Hamilton-Jacobi" (QHJ) equation for S

--- also note that S has to obey the "quantization constraint": integrals of the gradient of S around loops in configuration space have to come out to integer multiples of Planck's constant; this ensures that there are no discontinuities in a wavefunction reconstructed from R and S

- the "quantum Newton's second law" (QN2) for the acceleration of the particles (conceptually this is different from the QHJ equation for S, as the QN2 equation only applies along the particle trajectories while QHJ must be solved throughout the configuration space)

- the constraint on the velocities of the particles (this is the same as the guidance equation in de Broglie's theory, and makes QN2 redundant)

- the quantum equilibrium hypothesis (yes, Bohm's theory requires this as well)
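Written out explicitly with the polar decomposition psi = R e^{iS/hbar} on configuration space (shown here for a single particle of mass m for brevity; for N particles the gradients are 3N-dimensional), the pieces above are:

```latex
\partial_t P + \nabla \cdot \left( P\, \frac{\nabla S}{m} \right) = 0 ,
\quad P = R^2
\qquad \text{(continuity)}

\partial_t S + \frac{(\nabla S)^2}{2m} + V
- \frac{\hbar^2}{2m} \frac{\nabla^2 R}{R} = 0
\qquad \text{(QHJ)}

\oint \nabla S \cdot d\mathbf{q} = n h
\qquad \text{(quantization constraint)}

m\, \frac{d^2 \mathbf{Q}}{dt^2}
= - \nabla \left( V - \frac{\hbar^2}{2m} \frac{\nabla^2 R}{R} \right)
\bigg|_{\mathbf{q} = \mathbf{Q}}
\qquad \text{(QN2)}

\frac{d \mathbf{Q}}{dt} = \frac{\nabla S}{m} \bigg|_{\mathbf{q} = \mathbf{Q}}
\qquad \text{(velocity constraint / guidance equation)}
```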

de Broglie's theory:

- the non-relativistic Schrodinger equation (for particles of any spin)

- the guidance equation for the velocities of the particles

- the quantum equilibrium hypothesis

standard non-relativistic QM, for comparison:

- the non-relativistic Schrodinger equation (for particles of any spin)

- measurement axioms / rules for associating certain operators with certain observables or experimental configurations

- the Born rule

Of course, I would say that the equilibrium hypothesis is not actually needed at the fundamental level, for either Bohm or de Broglie's formulations of PWT. Also, strictly speaking, Bohm's formulation can handle particles with spin, but the way it does so is more complicated than in de Broglie's formulation.

(Edit: I realised after writing this section that you actually can eliminate QHJ from Bohm's theory and just use QN2, if you treat it as an equation on configuration space for the velocity field that enters into the guidance equation, and impose the quantization constraint on that velocity field. If anything this makes it a bit more like de Broglie's formulation, just that instead of constructing the velocity field for the guidance equation from the wavefunction, you solve an equation for the velocity field directly. But this is still a significant increase in complexity over de Broglie's formulation, since instead of a complex-valued scalar equation for the wavefunction you have a 3N-component vector equation for the velocities of all the particles, along with the pesky quantization constraint.)

If there is a focus on Bohm's theory over de Broglie's in the physics community, I don't believe it is due to any attempt to make pilot wave theory seem weaker - it is just an (unfortunate) artefact of the way QM developed over the last century. And in some ways, the difference between the two formulations (at least in the case of non-relativistic spinless particles) is more one of emphasis - one can derive the QHJ and QN2 equations within de Broglie's formulation, in order to draw out the analogy with classical mechanics. But where Bohm's formulation would regard those equations as fundamental, de Broglie's regards the Schrodinger and guidance equations as fundamental instead.
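To make the guidance equation concrete, here is a minimal numerical sketch of my own (not from any of the papers discussed): a free 1-d Gaussian packet in units hbar = m = 1, for which the guidance velocity field has the closed form v(x, t) = x t / (4 sigma0^4 + t^2), so the de Broglie trajectories simply dilate with the spreading packet, x(t) = x(0) sigma(t)/sigma0.

```python
import math

def velocity(x, t, sigma0=1.0):
    """Guidance-equation velocity v = (hbar/m) Im(psi'/psi) for a free 1-d
    Gaussian packet centred at 0 with zero mean momentum (hbar = m = 1)."""
    a = sigma0 ** 2
    return x * t / (4 * a ** 2 + t ** 2)

def trajectory(x0, t_final, dt=1e-4, sigma0=1.0):
    """Integrate the guidance equation dx/dt = v(x, t) with Euler steps."""
    x, t = x0, 0.0
    while t < t_final:
        x += velocity(x, t, sigma0) * dt
        t += dt
    return x

# Trajectories just scale with the packet width:
# x(t) = x0 * sqrt(1 + t^2 / (4 sigma0^4))
x0, T = 1.5, 2.0
print(trajectory(x0, T), x0 * math.sqrt(1 + T ** 2 / 4))
```

Note that a trajectory starting at the packet's centre of symmetry stays there, and no trajectory ever crosses it -- a characteristic feature of de Broglie trajectories.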

***Observability of the wavefunction***

Of course, the wavefunction is never directly observed, and must be inferred in PWT just as it is in ordinary QM. My point is that there isn't any additional uncertainty about the wavefunction in PWT compared to ordinary QM, and the uncertainty about the wavefunction isn't really relevant to PWT's explanation of the prototypical phenomena of quantum indeterminism. E.g., in standard QM we readily attribute the exact same, definite wavefunction to the electrons going through the double slit experiment even though in reality there may be differences for the wavefunction of each (maybe the shape or length of the incoming wavepacket varies slightly with each electron emitted). It just turns out that the differences don't do much to affect the result, and we can get away with a bit of an idealisation. We can do the same thing when applying PWT.

***On underlying philosophies***

It is probably true that the connection to mechanism and determinism is a significant part of the appeal of PWT to philosophers; I just haven't seen that it is much of a concern to physicists who are working on PWT (few and far between as they may be). Rather, the concern of pilot-wave proponents among physicists seems to be one that was expressed by John Bell (when he was asked whether QM had philosophical problems):

"I think there are professional problems. That is to say, I'm a professional theoretical physicist and I would like to make a clean theory. And when I look at quantum mechanics I see that it's a dirty theory. The formulations of quantum mechanics that you find in the books involve dividing the world into an observer and an observed, and you are not told where that division comes..."

It's not just that quantum mechanics has philosophical problems, it's that quantum mechanics isn't even really sensible **as a theory of physics** (and this is as true for QFT as it is for ordinary QM); all you really have is an instrumentalist "quantum recipe". So yes, we need to have a consistent and coherent underlying philosophy, but the even more pressing concern from a scientific perspective is that we need to have a consistent and coherent physical theory in the first place. And that is why a number of the physicists who work on pilot-wave theory are also perfectly happy to work on other models like spontaneous collapse, or even to try to see if anything sensible can be made of many-worlds. From the perspective of most pilot-wave proponents, PWT is not merely an "interpretation" of QM; it is actually a separate scientific theory (or perhaps, a group of theories with some core aspects in common), different from orthodox QM understood either as a purely instrumentalist "recipe" or as interpreted in Copenhagen.

Now, as a layperson interested in both physics and philosophy, I am as concerned about the underlying philosophy as the scientific theory. But it seems to me that having a "clean" theory in place can only help matters, especially when there are so many confusing and outlandish claims being thrown around regarding QM. And PWT does seem like one of the best avenues to arrive at such a theory, from my perspective.

***More on creation and annihilation events***

You write, *"Add in spontaneous particle creation and annihilation events, though, and the evolution of the pilot wave and evolution of the particle are not sufficient to understand the physics. You also need whatever causes these creation and annihilation events, so there is some additional beable beyond the pilot wave and particle."*

I completely disagree - in a pilot-wave model with spontaneous creation and annihilation of particles, these events are just another part of the evolution of the configuration guided by the quantum state; another "direction" to move in configuration space. The jump rates are generated by the quantum state just like the velocity field, and (in theories that have jumps between sectors and continuous motion within sectors) they work together to guarantee the equivariance property. In fact, in Bell-type QFTs, the deterministic particle motions and the jump events together are the continuum limit of what is in a lattice-regularised theory a unified process, the stochastic jumps of the fermion numbers on the lattice.
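For the continuous part of the motion, the equivariance property mentioned here has a compact standard statement (written below in the notation of ordinary non-relativistic PWT; the configuration-space version is the same with the gradient taken in configuration space):

```latex
\rho = |\psi|^2, \qquad v = \frac{j}{\rho}, \qquad
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, v) = 0 .
```

The Schrodinger equation implies exactly this continuity equation for |psi|^2, so an ensemble of configurations moving with the velocity field v stays Born-distributed if it starts Born-distributed; in Bell-type QFTs the jump rates play the same role for the discrete part of the motion.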

So understanding the creation and annihilation events is no different from understanding how the quantum state guides the motions of the particles generally. Certainly, there is work to be done there at the level of the underlying philosophy - though even there I think this objection is somewhat weak - but it is not a problem for PWT as a physics theory.

(As for the underlying philosophy, my own view right now is that if something like PWT is true, we should view the quantum state as a mathematical abstraction representing the causal powers and propensities of actually existing physical things, in so far as they effect the motion/change of some relatively fundamental physical features of the matter that makes up those physical things. I do think that some form of Aristotelianism better describes reality than atomism or mechanism, and I don't think indeterministic causation is much more problematic than deterministic causation in that framework.)

***Stochasticity and uncertainty***

Then you write, *"Secondly, if the physics is not deterministic, you can't say this is the pilot wave at time t_0 and so this is what it is going to be at time t_1."*

That is just not true - the evolution of the quantum state (which is just the Schrodinger evolution!) is deterministic in PWT, even in Bell-type QFTs with stochastic particle creation and annihilation. The stochasticity only occurs in the guidance law for the hidden variables. Even if it were true, however, what you say next:

*"you have, of course, to express your uncertainty in terms of amplitudes rather than probabilities because this is quantum physics and using probabilities will give you the wrong answer"*

... would still be false. By way of counterexample, there is no need to express one's uncertainty about the initial position of the particles in non-relativistic pilot-wave theory using anything other than classical probability theory. Nor are amplitudes needed to represent uncertainty in spontaneous collapse models - there is a probability per unit time per particle that a collapse occurs, and a probability per unit volume for where the collapse occurs if it does. Similarly for Bell-type QFTs with stochastic creation and annihilation of particles - the guidance law generates probabilities per unit time for jump events, and the resulting uncertainty in the evolution can be analysed completely via classical probability theory. Amplitudes represent features of the ontic quantum state in these theories, not uncertainties.
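To illustrate that last point, here is a minimal sketch (my own toy example, with a hypothetical constant jump rate; in Bell-type QFTs the rate is generated by the quantum state and varies in time) showing that a "probability per unit time" jump process is analysed with nothing beyond classical probability theory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constant jump rate, purely for illustration; in a Bell-type
# QFT the rate would be generated by the quantum state and vary in time.
rate = 1.0            # probability per unit time for a jump
T, dt = 1.0, 1e-3
n_steps = int(T / dt)
n_walkers = 50_000

# In each small time step, each walker jumps with probability rate*dt.
# Elementary probability theory predicts the survival fraction exp(-rate*T).
survived = np.ones(n_walkers, dtype=bool)
for _ in range(n_steps):
    survived &= rng.random(n_walkers) >= rate * dt

frac = survived.mean()   # close to exp(-1), about 0.368
```

The survival fraction converges to exp(-rate\*T), exactly as classical probability theory predicts; no amplitudes are needed to describe the uncertainty about when a jump occurs.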

In an earlier post you argued that we have to express uncertainties with amplitudes rather than probabilities by giving an example involving the double-slit experiment. In a comment, I pointed out that you were getting the wrong answers using probabilities not because probabilities don't work here, but because you were using the wrong probabilities in your calculations. I'm afraid I never found your response to that comment satisfactory, certainly not in support of the strong claim that we **must** use amplitudes instead of probabilities to represent uncertainty (which you are making again here). As I skim through our earlier discussions it looks to me like you conceded that PWT is a counterexample to this claim, but argued that since it can't be extended to cover QFT, we should adopt the psi-epistemic viewpoint instead. But then you cannot claim that stochasticity requires a psi-epistemic viewpoint as an argument that PWT can't be extended to cover QFT - that is reasoning in a circle!

The bottom line here is that if a QFT extension of PWT succeeds, it will not require us to represent uncertainties using amplitudes, so the objection you base on this claim (that the theory would be partly psi-epistemic, so why not adopt a fully psi-epistemic view) is invalid.

**Matt's response, round 2 (part 2)**

Meant to bold my section headers in the first part, my bad... doing so in the remaining parts.

**Momentum cut-off and lattices**

I see how the momentum cut-off violates gauge invariance as well as Lorentz invariance, but I don't believe this makes it a complete dead end. Wallace certainly isn't as pessimistic about the possibility, and he argues that we could expect renormalization effects to remove any dependence of the low-energy behaviour on the details of how the cut-off is realised. While it's quite possible that he is not as familiar with the technical details as you are, based on my own (admittedly limited!) understanding of how renormalization works, his argument makes sense. So it seems to me that at worst, one could formulate a PWT with the mode expansions of the fields restricted (with a momentum cut-off and a finite spatial volume, but both large enough to have negligible effect on the predictions) and expect that both Lorentz and gauge invariance would be recovered at the observational level.

In fact, I take back what I said in the 1st round about gauge invariance seeming like a harder problem than Lorentz invariance. If the PWT reproduces the empirical predictions of standard QFT (which it will, since, as in the non-relativistic theory, PWT by construction predicts that the local beables are distributed according to the Born rule), then the issue of gauge invariance is no different from the issue of Lorentz invariance: the theory will obey both gauge invariance and Lorentz invariance at the observational level, to the same extent as standard QFT. (I.e., we would have the same empirical justification for postulating those symmetries.)

Regarding your concern about adding terms to the Lagrangian because of the artificial symmetry of the lattice in such a theory - if we really need any justification for not having such terms, I think the notion that this pilot-wave theory is only an approximation to an unknown deeper theory (which we don't expect to have the artificial symmetry) is perfectly sufficient. I'm not saying a PWT with these kinds of features (e.g., based on a lattice or an arbitrary cut-off) should be seen as anything other than a stepping stone to a deeper theory; though I am saying we should consider the possibility that the deeper theory could look more like PWT than like orthodox QM.

From what I have read, fermion doubling appears to be related to the way that the discrete derivative operators distort the energy-momentum dispersion relation. There is a derivative operator for lattice theories that does not distort the dispersion relation, and it seems a natural one to use (basically, interpolate the field with Fourier components defined from a DFT of the field values at the lattice points, then take the derivative of the interpolation). I don't know for sure whether it resolves fermion doubling, but it seems possible. I'm also aware that this interpolation derivative is "non-local" in a technical sense that the more commonly used operators are not; but by construction this non-locality should become less severe as the momentum cut-off is increased.
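To make the dispersion point concrete, here is a small sketch (my own illustration; this interpolation derivative is, if I recall the terminology correctly, what the lattice literature calls the "SLAC derivative") comparing the DFT-interpolation derivative with the usual central difference on a periodic lattice:

```python
import numpy as np

N = 64
L = 2 * np.pi
dx = L / N
x = np.arange(N) * dx
k_mode = 5                        # a resolvable mode, |k_mode| < N/2

f = np.exp(1j * k_mode * x)       # plane wave on the periodic lattice

# Interpolation ("spectral") derivative: differentiate the DFT interpolant.
# It reproduces the exact dispersion, d/dx -> i*k, for all resolvable modes.
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
df_spectral = np.fft.ifft(1j * k * np.fft.fft(f))

# Standard central difference: dispersion distorted to i*sin(k*dx)/dx.
df_central = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)
```

The central difference yields i\*sin(k\*dx)/dx instead of i\*k; since sin(k dx)/dx has a second zero at the edge of the Brillouin zone, a spurious second low-energy mode appears there, which is the usual heuristic explanation of fermion doubling. The interpolation derivative avoids that at the cost of coupling all lattice sites.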

**Shifting sands, renormalization, and infinite dimensionality**

I didn't state this directly in my original comment, but I consider the problem of defining a measure in infinite-dimensional spaces to be at least as significant as the problem of infinities that are dealt with via renormalization. You can be as optimistic about the possibility of a solution as you like, but without one, from my perspective, we simply do not have a well-defined theory for any quantum system with infinite degrees of freedom. (And "there is probably a solution out there" doesn't follow from "this is a problem for everyone," even when the latter is conjoined with "we've been doing pretty well just ignoring it".) Not saying that optimism here is unreasonable, just that the problem should be recognized.

This ties into my "shifting sand" comment - without a solution to the measure problem, there is no such thing as "the" renormalized Standard Model - at best there is a class of theories with higher and higher (but still finite) momentum cut-offs which all have the same behaviour at accessible energy scales. But the "fundamental" constants in such theories have to be taken as the constants at the cut-off scale, which would be different for each theory. The renormalization group lets us navigate these different theories, and it legitimizes the renormalization process in that way. But it doesn't make the theory well-defined in the infinite limit.

Moreover, I am a bit sceptical of your statement that renormalization is just a change of basis. Again, my understanding is admittedly limited here. But in the literature on the foundations of quantum mechanics, objections to QFT based on the mathematical shiftiness of renormalization are typically met with the response that renormalization has been legitimized by the renormalization group method. The RG, however, is closely linked to the effective field theory perspective, in which renormalization involves integrating out short-distance behaviour. And that is changing the degrees of freedom of the theory, which isn't merely a change in basis - it is changing the Hilbert space as well. A renormalized theory isn't just the same theory from a different perspective; it is a different theory. (By way of comparison: changing from a basis of eigenstates of the free Hamiltonian to a basis of eigenstates of the full Hamiltonian, with interactions turned on, would be merely a change in basis. But AFAIK that isn't the same thing as renormalization.)

A couple more points on this subject from the perspective of PWT. First, compared to standard approaches to QFT, I think PWT has more potential to be viable even in the absence of a solution to the measure problem. And that is because, at a fundamental level, PWT doesn't actually need a normalizable quantum state. It might be (though wouldn't necessarily be) more difficult to find the appropriate guidance equations without that mathematical structure, but if the evolution of the quantum state and the local beables are defined, it doesn't matter if the quantum state isn't normalizable. (By way of analogy, in ordinary PWT, an infinite plane wave isn't a normalizable quantum state, but the guidance equation works with it just fine.)
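To spell out the plane-wave analogy (a textbook fact of ordinary PWT): for a single particle in one dimension,

```latex
\psi(x,t) = e^{i(kx - \omega t)}
\quad\Longrightarrow\quad
v = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\partial_x \psi}{\psi}\right)
  = \frac{\hbar k}{m},
```

a perfectly well-defined constant velocity, even though the integral of |psi|^2 over all space diverges and the state admits no Born-rule normalization.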

Secondly, to the extent that one *can* consider standard QFT to be mathematically well-defined in the continuum limit (or if we did have a solution to the measure problem), I don't see any obstruction in principle against also extending PWT from (say) a lattice-regularised theory to a continuum theory. There simply hasn't been enough work done on it yet (which is merely a historical accident due to the unjustified rejection of pilot-wave theory in the early days of QM). I think it would be a bit of a double standard to be optimistic about the measure problem but not allow optimism here! And I think that in turn takes a lot of the force out of a number of your objections, e.g., those regarding a theory being constructed on a lattice - the objectionable features should go away in the continuum limit.

**Quantization of classical theories**

The way I view things, “quantization” is just a heuristic way of finding quantum theories - once the theory is stated, certainly you can take it as fundamental. But in certain respects the connection to the classical theory is still there in QFT - often you see the Standard Model literally stated by writing down what is essentially the Lagrangian density for a classical field theory. The action integral in the path-integral formulation uses classical quantities (there are no definite values of the field in spacetime to be integrated over in the quantum theory!), funny business with Grassmann variables aside (which I really think need better justification and mathematical definition than I have been able to find for them anywhere, but never mind).

So some of my comments about quantization should be seen as referring to the role that the classical Lagrangian still seems to play in QFT.

**The Schrodinger picture**

You say it is disputable that QFT even in the Schrodinger picture is expressed as a differential equation. But that is literally just what the Schrodinger picture is: you have an ordinary differential equation for the quantum state (a vector in Hilbert space), with the rate of change of a vector given by a linear operator acting on that vector, and the quantum state evolving continuously according to that rate of change. That is all I'm saying when I insist that QFT can be put in the Schrodinger picture. And it can be; for another physicist to back me up on that, see Sean Carroll's blog post "You Should Love (or at least respect) the Schrodinger Equation".

(Yes, I'm aware that merely having the evolution of the quantum state in the Schrodinger picture is not sufficient for a pilot-wave theory, and I never claimed that it was. We'll get there in a moment.)

And specifically, when you do QED in Coulomb gauge - which incidentally is how I've seen the electromagnetic field quantized in introductory QFT books, so I take it that gauge fixing prior to quantization has at least some legitimacy - the Schrodinger picture works just fine. So I'm not sure why you think there is a hang-up there regarding the Maxwell term. (I know there are more complications when it comes to non-Abelian gauge theories; put that in the “more work is required” column.)

**Guidance equations**

So now, again, on the question of whether we can have guidance equations for a pilot-wave QFT. My apologies for not going into more detail, or providing more references, in my original comments.

There are numerous examples of guidance laws for pilot-wave versions of QFT in the pilot-wave literature. I believe one of the first was Bohm's - he formulated a pilot-wave theory for the (free) EM field. Struyve's review article "Pilot-wave theory and quantum fields" has several examples. Now, what you are asking for is not just a guidance equation, but a derivation showing that the guidance equation has the "equivariance" property that features in the explanation for how PWT reproduces the Born rule. Derivations like this don't show up in the literature as often - but (it seems to me) at least part of the reason for that is that the derivation often so closely parallels the derivation of the guidance equation in ordinary PWT that the authors don't bother including it.

Allow me to explain that last point. I think part of the problem in our discussion so far is that you still have not properly understood the basic structure of pilot-wave theories. This is illustrated when on one hand you say you accept my point that the derivative operators appearing in the Hamiltonian are derivatives in configuration space, not physical space, and then in your very next sentence you say *"the derivation of the guidance equation requires derivatives in physical space acting on the wavefunction/Fock state."* No! No, it doesn't!

Here's a way to get to a derivation of a guidance equation (shooting for a high level of generality here; I'll get to specifics in a second):

(1) Start with a way of defining a probability distribution (strictly speaking, the equilibrium probability distribution) for the "hidden variables" from the quantum state.

(2) Find the evolution of that probability distribution over time, due to the evolution of the quantum state.

(3) Postulate a guidance law such that the evolution of the hidden variable configuration produces the same evolution of the probability distribution as the evolution of the quantum state does. (Usually, this is suggested / massaged out of the form of the evolution found in step 2.)

When the configuration space of the hidden variables is a continuous manifold, the probability distribution in step 1 is a probability density "p" in configuration space (not physical space). What you end up with in step 2 is something like dp/dt + stuff = 0, where "stuff" involves derivatives in config-space. Ultimately because the Hamiltonian is Hermitian, "stuff" can always be expressed as div(j), where "div" is the divergence in config-space, and "j" is a vector in (the tangent space to) config-space. Then the guidance law is just v = j/p, expressing the "velocity" of the configuration point in config-space. When I said in my original round of comments that there are methods of working out the probability current, I meant specifically that there are ways to figure out what "j" should be by starting from "stuff". (Struyve and Valentini's "De Broglie-Bohm Guidance Equations for Arbitrary Hamiltonians" is one reference for more detail on that.)
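To make steps 1-3 concrete in the simplest setting, here's a minimal numerical sketch (my own illustration, not taken from the Struyve-Valentini paper) of the resulting guidance law v = j/p for a single non-relativistic particle in 1D. For a Gaussian packet with mean momentum hbar\*k, the velocity field comes out constant, hbar\*k/m:

```python
import numpy as np

def guidance_velocity(psi, x, hbar=1.0, m=1.0):
    """Pilot-wave guidance velocity v = j/p on a 1D grid,
    with current j = (hbar/m) * Im(psi* dpsi/dx) and density p = |psi|^2."""
    dpsi = np.gradient(psi, x)                      # numerical d(psi)/dx
    j = (hbar / m) * np.imag(np.conj(psi) * dpsi)   # probability current
    p = np.abs(psi) ** 2                            # equilibrium density
    return j / p

# Gaussian envelope times plane wave: psi = exp(-x^2/(4 s^2)) * exp(i k x).
# The envelope is real, so Im(psi* dpsi/dx) = k |psi|^2, giving v = hbar*k/m.
x = np.linspace(-5, 5, 2001)
k, s = 2.0, 1.0
psi = np.exp(-x**2 / (4 * s**2)) * np.exp(1j * k * x)
v = guidance_velocity(psi, x)   # approximately 2.0 everywhere
```

In the full theory, x is replaced by a 3N-dimensional configuration and the derivative by a configuration-space gradient, but the structure of the construction is identical.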

Now, there are no physical space derivatives anywhere here. The derivatives in "stuff" usually come from the quantum state being represented as a complex-vector-valued function on config-space and the Hamiltonian being represented in terms of config-space derivatives, and in all relevant cases I know of, such a representation is possible. (But the derivatives could also come about in other ways, e.g., because the probability in step 1 is extracted by a set of projection operators that continuously depend on the config-space location, and those projection operators interact with the Hamiltonian operator in a way that produces config-space derivatives.)

For an example you can consider the quantized Klein-Gordon scalar field - or the quantized photon field. In both of these cases the free Hamiltonian is mathematically isomorphic to a set of quantum harmonic oscillators, so clearly there is a representation of the quantum state as a function (of the "position" coordinates of the oscillators) where the Hamiltonian acts as a differential operator. And in fact it turns out that these "position" coordinates are just the coefficients of the mode expansion of the field, so you can have a pilot-wave theory with a definite configuration for the field (parameterized by those coefficients) and derive a guidance law for the motion of the field in a way that is exactly analogous with ordinary PWT. There's infinitely many oscillators in the continuum case, sure, but that is a different issue. (Moreover, it turns out that because the conjugate momenta to the config-space variables do not appear in typical interaction terms, the form of the guidance law is usually not changed when we move over to the interacting theory - this is the case in phi^4 or QED, for example.)
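Schematically (suppressing normalization conventions, and assuming a box and a momentum cut-off so that the mode sum is finite), the construction just described looks like:

```latex
\phi(\mathbf{x}, t) = \sum_{\mathbf{k}} q_{\mathbf{k}}(t)\, f_{\mathbf{k}}(\mathbf{x}),
\qquad
\frac{\mathrm{d} q_{\mathbf{k}}}{\mathrm{d} t}
\;\propto\;
\mathrm{Im}\!\left( \frac{1}{\Psi} \frac{\partial \Psi}{\partial q_{\mathbf{k}}} \right)
\Bigg|_{q = q(t)},
```

where Psi(q, t) is the wavefunctional obeying the functional Schrodinger equation - exactly parallel to the velocity being proportional to Im(grad psi / psi) in the non-relativistic theory.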

When the configuration space is discrete or has discrete sectors, the derivation of the stochastic jump rates (which are the discrete analogues of the velocity field in configuration space) follows the same pattern, and you can see it worked out in the papers by Duerr et al. on their Bell-type QFTs (look for the keywords "minimal jump rate" or something like that).
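For reference, the minimal jump rate in those papers has (if I am remembering the form correctly; consult Duerr et al. for the precise statement) the shape:

```latex
\sigma_t(\mathrm{d}q' \mid q) =
\frac{\left[ \tfrac{2}{\hbar}\, \mathrm{Im}\,
\langle \Psi_t |\, P(\mathrm{d}q')\, H\, P(\mathrm{d}q) \,| \Psi_t \rangle \right]^{+}}
{\langle \Psi_t |\, P(\mathrm{d}q) \,| \Psi_t \rangle},
```

with [x]^+ = max(x, 0) and P the projection-valued measure relating the quantum state to configuration space - again the quantum state generating the law of motion, just as in the continuous case.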

The probability currents that you mention from Klein-Gordon QM or Dirac QM are basically entirely irrelevant here. The pilot-wave quantum field theory that we are after would not have a wavefunction which is a solution of the K-G or Dirac equation guiding a single particle in physical space. It would have a quantum state guiding an entire configuration in configuration space - just as it does in the non-relativistic theory.

**Matt's response, round 2 (part 3)**

**Even more on creation and annihilation**

You say, *"I think you miss my point here. The point is that you have two separate evolutions -- the pilot wave and the guidance equation for the particles. Both need to have a creation or annihilation event at the same moment in time and the same point in space. My question was how you justify this seeming coincidence."*

I’m not entirely sure what you are saying here. It might be that this objection stems from a failure to understand pilot-wave theory on its own terms; you are instead importing assumptions of your own interpretation that the pilot-wave theory doesn't share. There are no creation and annihilation events in the evolution of the quantum state - that is just false. What you have there is nothing other than a continuously changing vector in Hilbert space; a changing superposition of basis states (in whatever basis you choose). The only creation and annihilation events in the pilot-wave perspective are in the evolution of the local beables. So there is no coincidence to justify.

If that isn’t what you were saying, then it might be that you hadn’t realized that the probabilities for the stochastic jumps are generated by the quantum state in the same way as the velocity field that guides the motions of the particles, so that it is hardly a coincidence that their occurrences are consistent with the evolution of the quantum state (in the sense of matching the statistical predictions that can be obtained from the quantum state, since, as noted in the last paragraph, the quantum state evolution does not literally contain any definite creation/annihilation events for the hidden variable evolution to be consistent with).

*In the psi-epistemic interpretations which I prefer, there are also clear beables -- the physical particles -- but no need for the pilot wave. A creation or annihilation event acts on the beables. Of course, we don't know what creations and annihilations happen between the initial and final state -- the true quantum history -- so we have to express possibilities as amplitudes, likelihoods, or pre-probabilities (however you want to phrase it). But that doesn't change that one of those possible histories corresponds to what happens in reality.*

I have never encountered such an interpretation explicated in a clear and coherent way, so I look forward to future posts in this series.

Regarding whether particle creation and annihilation is instantaneous: *The creation and annihilation operators are local in both space and time. They are not smeared out over a region of space or a period in time. If they were, it would lead to a different theory. In particular, the conservation of energy depends on the locality in time and the conservation of momentum on the locality in space.*

Umm... isn't it symmetry (not locality) with respect to time and space translations that is responsible for conservation of energy and momentum? (And you can have those symmetries even with a finite momentum cut-off, particularly when the cut-off is implemented as a restriction on the mode expansion of the fields rather than as a restriction to a fixed lattice.) Moreover, not all creation and annihilation operators are local - in fact, the first ladder operators for fields that one typically encounters in a QFT textbook are the ladder operators for definite momentum states, which are completely delocalised. But the theory can be completely formulated in terms of those operators.

But this is all irrelevant, because once again, all that the math gives you is a quantum state continuously evolving in time. There are no creation and annihilation events in the evolution of the quantum state, just a relationship between the quantum state and its rate of change in time that can be characterised by a Hamiltonian operator. You need a clear ontology to say whether there actually are any particles to create or annihilate at all.

*Even in a field ontology, a switch from a field with no excitations to one with one excitations (e.g. photon emission from an electron) is a discontinuous change -- you cannot have a field representing half a photon at any moment in time.*

You absolutely can have that. Or more precisely, in a PWT with a field ontology for the electromagnetic field, there just are no such things as photons in the first place. There is a quantum state (which may be described as a superposition of states with different numbers of quantum excitations we call photons, but these don’t exist in the local ontology), and the quantum state guides the motion of the actual, physical electromagnetic field. The quantum state can influence the field to have particle-like behaviour at the observational level, tending to produce discretized results in certain interactions, but this is analogous to the way a Stern-Gerlach device causes electrons to land in roughly two discrete locations on the detection screen. It isn't that the electron can only be in those locations, it is just that (in PWT) the interaction forces it towards one of those two spots. Similarly, it isn't that the field can only have (say) certain discrete amplitudes; rather, the guidance equation tends to bring it to those discrete amplitudes in certain situations - but it can evolve continuously between them.

**Repeating myself on stochastic jumps**

Because it came up more than once and seems to be rather significant in your perspective, I want to emphasize two points (with apologies for the repetition):

The evolution of the quantum state in PWT is deterministic, ***even when the guidance law is stochastic*** (or when it has a stochastic component). It’s just the Schrodinger evolution, identical with orthodox QFT.

The stochastic jumps representing creation and annihilation events, when present in the theory, ***are not a separate process added onto the guidance law***. They are part of the guidance law, derived from and generated by the quantum state in the same way, and they function together with the continuous motion between jumps in the demonstration of the equivariance property. So there’s an important sense in which the jumps and the deterministic motions are the same process, acting in a way that is appropriate to the kind of motions possible in configuration space (discrete motion between discrete sectors, continuous motion within continuous sectors).

I’ve been spending a lot of time defending this feature of Bell-type QFTs because I do think they have the best chance of succeeding (especially combined with the interior boundary condition method if further developments of that prove fruitful) - and at least for the ontology of fermions, it is a fairly natural result of attempting to make a PWT for fermionic fields, in that it is suggested by the math itself. For bosons, a field ontology falls out of the math more easily.

**Amplitudes in QFT vs QM**

You write, *My point was that the Fock state isn't a function but a state. There is a difference between them in quantum physics, particularly QFT. A state is a particular vector on the Hilbert space.*

The wavefunction in ordinary QM is also a vector in Hilbert space. You can think of the function psi(x) = <x|psi> where |x> is a position eigenstate. And the probability current can be written entirely in terms of states and operators, if you want. This isn’t a difference between QFT and ordinary QM.
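For instance, the standard one-particle probability current can be written purely with states and operators:

```latex
j(x) = \frac{1}{2m}\,\langle \psi |\left( \hat{p}\, |x\rangle\langle x| + |x\rangle\langle x|\, \hat{p} \right)| \psi \rangle
     = \frac{\hbar}{m}\,\mathrm{Im}\!\left[ \psi^*(x)\, \partial_x \psi(x) \right].
```

The function form and the state-and-operator form are the same object viewed two ways, in QM and in QFT alike.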

**Locality and Lorentz invariance**

*I think my point here was that if the evolution of the particle can be described by an equation of motion, then it would also be able to create a Lagrangian formulation and a principle of least action that would describe the motion of that particle.*

Depends what you mean. Ordinary PWT can be formulated from a Lagrangian density on configuration space, with the non-relativistic Schrodinger equation and the guidance equation as its resulting equations of motion, essentially treating PWT as a field theory on config-space. But something like that would be completely irrelevant: comparing the config-space Lagrangian to, e.g., the Standard Model Lagrangian would be comparing apples to elephants, and it is unnecessary to think of the theory in that way in the first place.

*If we treat Pilot Wave Theory as fundamental, and pilot wave theory in itself fundamentally violates Lorentz symmetry, then there is no reason to think that the empirical results would satisfy Lorentz symmetry without some massive coincidence. There are plenty of non-Lorentz symmetric theories one could construct from the underlying principles behind pilot wave theory. … My question is based on how likely is it that the universe is as it is given the fundamental axioms that Pilot wave theory needs to function?*

To be clear, there is no coincidence at all that a theory with a Hamiltonian compatible with Lorentz symmetry produces empirical results satisfying Lorentz symmetry - that just follows naturally from the way the theory works - so the only coincidence you can be referring to here is the fact that the theory has such a Hamiltonian.

And honestly, this just seems like a complete non-problem to me. The fundamental axioms that QM needs to function also don’t require Lorentz symmetry, and QFT is just QM applied to fields (there was my whole shtick with there being no fundamental divide between them, after all). Lorentz symmetry is imposed as an additional constraint when formulating the theory, ultimately due to empirical considerations. And we can impose the same constraint when formulating pilot-wave theories (in which context it specifically applies to the Hamiltonian, because that is what ultimately determines the dynamics - the guidance equation follows from it and isn’t independent).

The fundamental ontology of PWT “breaks Lorentz symmetry” (with quotes, because this doesn’t make it incompatible with the Hamiltonian used in the Standard model), but so does the ontology of the orthodox theory if it has one at all. Many-worlds is in the “not even wrong” category when it comes to locality and Lorentz symmetry, and I’m not sure what the current prospects of objective collapse theories are in regards to Lorentz symmetry, but it looks like it is at least debatable. So from my perspective PWT is actually in the same boat as most everyone else, or at least in the same flotilla. (Obviously I have my doubts about psi-epistemic theories; I’ll set those theories aside until your later posts.)

I do think fine-tuning arguments can work (specifically, I think the fine-tuning argument well-known in apologetics circles provides some reason to believe in God), so there’s a chance I could be convinced that there is something to this objection. But as it stands I just don’t see the force of it.

Regarding your event vs. substance causality divide, I’ve made this comment before, but it still seems to me like a step too far towards occasionalism. But if non-locality is not considered problematic when it is effected by a timeless and non-spatial entity, I don’t see much difference from the physics standpoint if it is effected by a merely non-spatial entity (e.g., the quantum state). Though I do recognise that the problem you see here is not non-locality, but Lorentz symmetry.

**Gauge symmetry**

Regarding what the gauge transformation acts on: *In pilot wave theory, that something has to be either the representation of the "particles" or the representation of the quantum state, as those are the only things around. As the particles are represented by a set of real numbers, a unitary transformation on them makes little sense, so that just leaves …*

If you have a field ontology, the gauge transformation clearly acts on the field values. In a particle ontology I’m not exactly sure how it pans out, but since the possibility of gauge transformations does not trivialise the probabilistic predictions for the behaviour of quarks and such in the orthodox theory, and the guidance laws in PWT are constructed to be consistent with those probabilistic predictions, I don’t see any threat that they would trivialise the guidance laws either.

*To fix to one particular gauge and assert that that gauge is the correct one to use just leads to the same problem as you have with locality: if the representation of the ontology lacks gauge symmetry, then why should it emerge in the dynamics without some special pleading?*

And I’d give the same response.

The one difference between Lorentz and gauge symmetries is that I think there are good philosophical reasons to believe in the A-theory of time, and thus a privileged foliation is a natural postulate anyways. Whereas we don’t have something like that with respect to gauge symmetry.

There is one thing that could be said - in QED, it is arguable that the Coulomb gauge is a natural choice because it corresponds to the actual gauge-invariant degrees of freedom of the EM field. (With the gauge choice reflecting the split between the transverse and longitudinal parts of the electric field, and their different behaviour.) I’ve read that more work is needed to identify the gauge-invariant degrees of freedom in non-Abelian gauge theories; doing so could help find a pilot-wave model for those theories.

*To treat gauge invariant quantities as the beables (not totally unrealistic, as we only observe gauge invariant quantities) seems to exclude the wavefunction/Fock state/pilot wave as a beable as that is not gauge invariant.*

The quantum state is a beable in PWT, but it is not a local beable (defined in physical space) - its role is to direct the motion of the local beables. So there’s no problem with the quantum state having gauge dependence as long as that gauge dependence doesn’t affect the motions it produces in the gauge-invariant quantities. But more generally, I am fine with using gauge dependent hidden variables as well. Those can still be seen as auxiliary variables working together with the quantum state to produce the motion of the invariant quantities, which are viewed as the real local beables.

Ideally it should be possible to switch to a different gauge and show that you can still get the same class of possible motions for the invariant quantities, but I see that as less important than the empirical adequacy of the theory.

**Conclusion**

Sorry that this is so long... it is a complicated subject! Maybe to sum up, I believe most of your objections can be significantly mitigated, and I still think PWT has a very good chance of succeeding as the correct "interpretation" of quantum theory.

Forgive me if I'm missing something, but ...

1) The issue with explaining particle creation with differential equations is that particle number just isn't a continuous parameter, and it doesn't even make sense to talk about a derivative of particle number over *anything* - time, space, beauty or truth, it makes no difference. So how can evolving a quantum state continuously in time change a *discrete* quantity?

2) The issue with Lorentz and gauge symmetries can be put this way: all versions of pilot wave theory assume a privileged foliation as a *real* feature of the underlying physics, not a mathematical fiction. Any theory making that assumption breaks Lorentz symmetry. So what constrains the real world to behave according to the Standard Model, which obeys Lorentz symmetry, if not the underlying physics? Is there better reason to believe in that constraining factor than in the mysterious "measurement" process of Copenhagen interpretations or the "flashes" of objective collapse theories?

Hi Michael,

1a) With a particle ontology, the quantum state can change the particle number by generating probabilities for stochastic jumps. (The Bell-type QFTs by Duerr et al. show how the resulting motion retains the equivariance property.)

1b) With a field ontology, there are no particles at the fundamental level, only continuous evolution of the underlying beables (the field values). But the quantum state can guide the field towards particle-like behaviour, e.g., tending towards certain discrete amplitudes (and, due to the rapid evolution upon interaction with a macroscopic system, we never observe the in-between values).

2) The underlying physics of a hypothetical pilot-wave QFT includes the Hamiltonian of the Standard Model, which produces the Lorentz-symmetric (and gauge-symmetric) empirical results without any fine-tuning required. And we have as much reason to believe in that factor of the theory as we have for believing the results of orthodox QFT.

Matthew:

Continuous evolution of probabilities for stochastic jumps is what orthodox QFT says is going on between measurements (the measurements themselves being the jumps.) If that's the best pilot wave theory can do for particle creation and annihilation, it's no better off than the Copenhagen interpretation.

Besides, even in a field ontology where particles aren't real, particle number is still a well-defined property of the field as a whole, and it can't vary continuously - there's no such thing as a continuum of integers. If the field is single-valued at all points, as it must be under PWT's assumptions, there's no *continuous* transformation which can change the field's particle number, for the same reason that there's no deformation of a manifold that changes its genus. The problem is mathematical, not empirical.

So you have to fit discontinuous evolution of quantum states into your theory somehow. But quite aside from the problems that creates with Lorentz symmetry - you can't model a system that changes discontinuously with differential equations. Differentials aren't well-defined without continuity. Orthodox QFT sweeps the discontinuity of QM under the "measurement problem", which is about the same as sweeping dust under the rug; but PWT only manages to trade one pile of dust for another.

Michael, when you say:

*Continuous evolution of probabilities for stochastic jumps is what orthodox QFT says is going on between measurements (the measurements themselves being the jumps.) If that's the best pilot wave theory can do for particle creation and annihilation, it's no better off than the Copenhagen interpretation.*

... what it honestly sounds to me like is "A lion and a seahorse are the same thing, because seahorses are horses, and horses and lions both have manes".

I'm trying to give you the benefit of the doubt here, because maybe you are confused by the terminology of "stochastic jumps" and such, and don't understand what it looks like. (In which case, maybe learn more about it before making criticisms.)

But if you seriously can't see the world of difference between (i) a theory which postulates that particles move about on definite trajectories and sometimes indeterministically are created or annihilated (through being emitted or absorbed by other particles, or by pair creation or annihilation, etc), vs (ii) a theory which says nothing definite about physical reality or is even anti-realist about it, with a vague and observer-centric split between the two domains where the completely different parts of the theory apply - and where the part of the theory that corresponds to what we actually observe looks less like a physical theory than like an instrumentalist procedure for making predictions... then it seems like there is little point in me continuing any discussion with you.

If instead you can see the difference, and recognize that there might just be good reasons for preferring (i) over (ii), then great, we can continue our conversation.

*Besides, even in a field ontology where particles aren't real, particle number is still a well-defined property of the field as a whole, and it can't vary continuously*

Just because the eigenvalues of an operator on the quantum state, bearing a certain relationship to a property of the field that we call the "particle number", are restricted to the integers, doesn't mean the values of that property itself are restricted to the integers. It is trivially easy to show a counterexample to your claim: the particle number of the complex Klein-Gordon field is the integral over space of Re(i conj(phi) dphi/dt); continuously vary the overall magnitude of the field and you can make the particle number any positive real number you want.
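To make that counterexample concrete, here is a little numerical sketch of my own (the box size, wavenumber, and amplitudes are invented for illustration): for a plane-wave snapshot of the complex Klein-Gordon field, the particle-number functional comes out proportional to the squared amplitude, so it sweeps through all positive reals as the amplitude varies.

```python
import numpy as np

# Sketch (my own illustration; box size, wavenumber and amplitudes are
# invented): for a complex Klein-Gordon plane wave phi = A*exp(i(kx - wt)),
# the functional N = integral of Re(i*conj(phi)*dphi/dt) dx evaluates to
# w*A^2*L, which varies continuously with A - it is not pinned to integers.
L_box = 2 * np.pi            # periodic box length
n = 1024                     # grid points
x = np.linspace(0.0, L_box, n, endpoint=False)
dx = L_box / n
k = 3.0
w = 3.0                      # massless dispersion: w = |k|

def particle_number(A):
    phi = A * np.exp(1j * k * x)      # snapshot of the field at t = 0
    dphi_dt = -1j * w * phi           # analytic time derivative
    density = np.real(1j * np.conj(phi) * dphi_dt)
    return np.sum(density) * dx

print(particle_number(1.0))   # w * A^2 * L_box = 6*pi, approx. 18.85
print(particle_number(0.5))   # a quarter of that, approx. 4.71
```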

*So you have to fit discontinuous evolution of quantum states into your theory somehow.*

Now you're just confused; there is no discontinuous evolution of the quantum state. d|psi>/dt = -iH|psi> is the equation for the continuous rate of change of the quantum state, and it is obeyed at all times in PWT.

*you can't model a system that changes discontinuously with differential equations.*

Sure, and in a PWT that does have discontinuous changes (such as creation and annihilation events), there's more than just differential equations - there's also an equation that says that the probability for a discontinuous change from configuration C to configuration C' to occur in an infinitesimal time interval dt is equal to r(C',C)dt, where r is some function of the quantum state.
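For concreteness, here is a toy simulation of that probabilistic rule (a sketch of my own: I use an invented constant rate r, whereas in a Bell-type model r(C',C) would be computed from the quantum state). With jump probability r*dt in each small timestep, the number of creation/annihilation jumps over a time T is Poisson-distributed with mean r*T:

```python
import numpy as np

# Toy simulation (my own sketch - the constant rate r is an invented
# stand-in; in a Bell-type model r(C',C) would depend on the quantum
# state). With jump probability r*dt per step, the number of jumps
# over a total time T is Poisson-distributed with mean r*T.
rng = np.random.default_rng(0)
r = 2.0        # jump rate
dt = 1e-3      # timestep
T = 1000.0     # total simulated time
steps = int(T / dt)
jumps = np.count_nonzero(rng.random(steps) < r * dt)
print(jumps / T)   # empirical jump rate, close to r = 2.0
```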

**Thanks to Matthew And Michael**

Thanks for your replies, Matthew, and your comments, Michael. I'm busy for the next few evenings; I will try to respond towards the end of the week.

Matthew: "But if you seriously can't see the world of difference between (i) a theory which postulates that particles move about on definite trajectories and sometimes indeterministically are created or annihilated (through being emitted or absorbed by other particles, or by pair creation or annihilation, etc), vs (ii) a theory which says nothing definite about physical reality or is even anti-realist about it, with a vague and observer-centric split between the two domains where the completely different parts of the theory apply - and where the part of the theory that corresponds to what we actually observe looks less like a physical theory than like an instrumentalist procedure for making predictions... then it seems like there is little point in me continuing any discussion with you."

Well, of course the *difference* between i) and ii) is manifest. And I did not say Copenhagen and PWT are "the same"; I said PWT is no better off than Copenhagen. The reason is that both theories lie open to the same objection: they both postulate two dynamical processes, only one of which applies at any given moment, with nothing said about which one is correct - except "make the theory agree with experiments". Copenhagen has evolution of the quantum state, and measurements; relativistic PWT has particles moving on fixed trajectories, and random creation/annihilation of particles. (Non-relativistic PWT doesn't have this problem; but it doesn't agree with experiments, making it a non-starter.)

"Sure, and in a PWT that does have discontinuous changes (such as creation and annihilation events), there's more than just differential equations - there's also an equation that says that the probability for a discontinuous change from configuration C to configuration C' to occur in an infinitesimal time interval dt is equal to r(C',C)dt, where r is some function of the quantum state."

As an aside, that "probability of discontinuous change" is an awful lot like the "swerves" in Epicurus's version of atomism. Just as Epicurus postulated random deflections in the otherwise uniform motion of atoms to explain why atoms collide, this postulates random creation/annihilation events disturbing the otherwise fixed trajectories of particles. Epicurus's theory is open to the same objection as PWT (and Copenhagen) too.

Michael, according to you, PWT being open to this **one** objection is sufficient to make it **no better** than Copenhagen. I say that sounds ridiculous, ignoring the differences between them that you admit are manifest.

But I deny that it is even open to the objection. The whole character of even the stochastic part of this kind of PWT is fundamentally different from Copenhagen; in at least three respects.

One is intelligibility. There is a significant sense that the collapses in Copenhagen are "more random" than the jumps in PWT, and that is because while both are indeterministic, the process in PWT is precisely defined within the theory itself, with a definite probability for any given outcome occurring in any given time interval, and intelligible in terms of the postulated ontology, without bringing in anything external to the theory. Whereas the collapses in Copenhagen do not have any defined probability of happening (just probabilities of outcomes when it happens), and invoke the vague notions of an "observer" or a "macroscopic system" - something outside the purview of the theory, to some extent up to the subjective judgement of the physicist as to when it applies, and what set of outcomes is being considered.

The second is continuity. The jumps in PWT are discontinuous because they involve changes in particle numbers, but they still only move between points which are "nearby" in configuration space - e.g., a configuration where all the particle positions are the same except that one has just emitted another particle, or a pair of particles have just annihilated each other, etc. Whereas the collapses in Copenhagen completely and utterly change the wavefunction to something very different.

The third is the unity and compatibility of the process with the rest of the theory. In PWT the stochastic and deterministic processes function together to guarantee the equivariance of the whole motion, which plays a significant role in the theory. I think it is tendentious to say they contradict each other - over any given time interval, both parts of the guidance law (the probability for jumps between sectors and the velocity field that the configuration follows within a sector) apply and work in concert. They are really one process: the motion through configuration space guided by the quantum state. Whereas in Copenhagen, the collapses **break** one of the most important features of the evolution of the quantum state, its unitarity, and there is really no feature in the theory that can help unify them.

(The spontaneous collapse theories suffer in the second and third respects compared with PWT, but at least they are better than Copenhagen in the first respect; in that the collapses are intrinsic to the evolution of the quantum state and occur at precise times in clearly defined ways, in contrast to the vagueness of the orthodox QM. And usually they are better than Copenhagen in postulating a clear physical ontology as well.)

So I think you are quite wrong, but I do thank you for drawing this objection out - I think it helped me explain this feature of (one type of) PWT more clearly.

**Further questions regarding “The Associationist Mindset”**

Dr. Cundy,

I have been contemplating what StardustyPsyche said about consciousness being a hallucination. I still don’t know what he means by this. He says that those that criticize this line of thinking are confusing hallucination with “unreality”. I mean, isn’t that what a hallucination is? The experience of something that’s not real? The only conclusion I’ve come to is that he’s changed the definition of hallucination so much as to be unrecognizable. What do you think?

**Response to Matthew/Michael**

Ok, that's a lot to get through. And I'm only going to have time to respond to part of what is written. At some point we are going to need to agree to disagree, and I am sure that we both have things we need to do.

I should say I generally agree with Michael's points, except where he says that one particular objection makes PWT no better than Copenhagen. I quite agree that Copenhagen has many deep problems. PWT does make philosophical sense (or at least numerous forms of it do); the question is whether it (or one of the forms which makes sense) can be applied to QFT.

I should also say that I am always nervous about appeals to an underlying theory behind QFT. We don't know what that theory is, so it is useless to speculate over it. A "Quantum gravity of the gaps" approach strikes me as being no better than a "God of the gaps" approach. We have to try to understand QFT as it is. (And, perhaps, coming to such an understanding might give us clues about what lies underneath it.)

With regard to Bohm's theory being equivalent to the Schroedinger equation, you write "But this seems obviously false: Bohm's PWT has definite particle trajectories; the Schrodinger equation doesn't. How can they be exactly equivalent?"

I might have been unclear here. I meant to say that it is mathematically equivalent. In the derivation I gave in this post, right up to the final equation all we have done is reformulate the Schroedinger equation. At that point, one can still go with a Copenhagen interpretation, or spontaneous collapse, or many worlds, or whatever. But the reformulation provides us with another natural interpretation. We suppose S and R represent something physical, just as the symbols do in Newtonian mechanics, and S behaves like a Newtonian particle and R a Newtonian wave. The final step of the interpretation is then just as we are used to doing when interpreting classical mechanics. It might just be me, but de Broglie's approach never struck me as quite so intuitive.

Moving on to particle creation and annihilation events, you replied to me saying:

*You write, "Add in spontaneous particle creation and annihilation events, though, and the evolution of the pilot wave and evolution of the particle are not sufficient to understand the physics. You also need whatever causes these creation and annihilation events, so there is some additional beable beyond the pilot wave and particle."*

*I completely disagree - in a pilot-wave model with spontaneous creation and annihilation of particles, these events are just another part of the evolution of the configuration guided by the quantum state; another "direction" to move in configuration space. The jump rates are generated by the quantum state just like the velocity field, and (in theories that have jumps between sectors and continuous motion within sectors) they work together to guarantee the equivariance property. *

But then in response to Michael, you write,

*Sure, and in a PWT that does have discontinuous changes (such as creation and annihilation events), there's more than just differential equations - there's also an equation that says that the probability for a discontinuous change from configuration C to configuration C' to occur in an infinitesimal time interval dt is equal to r(C',C)dt, where r is some function of the quantum state.*

Which seems to be what I was trying to say. If you have spontaneous particle creation and annihilation events, then there is something in addition to the differential equations guiding the pilot wave and the particle. In other words, the pilot wave and particle by themselves are no longer sufficient to describe the physics.

I also agree with Michael that you have a problem in that a QFT PWT with particle creation and annihilation will contain discontinuous jumps, which cannot be described by solving a differential equation.

Many complex spaces are described in terms of various homotopy classes, usually characterised by a winding number, and it is impossible to pass from a configuration with one winding number to another via a continuous transformation. Consider, for example, an elastic band placed around a stick. That would qualify as a winding number of 1 -- it winds once around the stick. You can deform the band in numerous ways, but, without cutting it and gluing it back together, or cheating by using the third dimension, you will never remove it from the stick -- a winding number of 0 -- or wrap it around the stick twice -- a winding number of 2, and so on. So each configuration, or shape of the band, is associated with a particular winding number, and continuous deformations cannot map from one winding number to another.
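The elastic-band picture can be made quantitative with a small numerical sketch of my own (the example curves are invented for illustration): the winding number of a closed curve around the origin is the accumulated change in its angle divided by 2*pi, and it only ever comes out as an integer.

```python
import numpy as np

# Sketch of my own: the winding number of a closed curve around the
# origin is the accumulated change in arg(z) divided by 2*pi. Any
# continuous deformation that avoids the origin leaves it fixed, and
# it only ever takes integer values.
def winding_number(z):
    dtheta = np.diff(np.angle(z))
    # re-centre each step into (-pi, pi] to undo branch-cut jumps
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi
    return round(dtheta.sum() / (2 * np.pi))

theta = np.linspace(0.0, 2 * np.pi, 2001)
print(winding_number(np.exp(2j * theta)))       # winds twice: 2
print(winding_number(2 + np.exp(1j * theta)))   # never encloses 0: 0
```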

This is important in the standard model -- both the QCD and SU(2) Yang-Mills vacua have numerous homotopy classes, and it is most likely that topological objects of one sort or another drive spontaneous chiral symmetry breaking and quark confinement. But that's not so relevant to the subject at hand. What I think is relevant is that I would imagine that in the sort of field theory you are describing, particle number would also be a topological quantity. Admittedly particles would count as 1 and anti-particles as -1, so the creation of an electron-positron pair wouldn't violate particle number, but a W^- decay would. Particle number is a property of the quantum field which is fixed to be an integer, and as such you can't get from a particle number 1 field to a particle number 0 field via a continuous deformation of the fields, i.e. by solving differential equations. And I think it obvious that particle number is an observable of the quantum state, and not just of the pilot wave "particles," since the evolution of the quantum state is the same as in other interpretations, and these lack the pilot wave "particles" but have a definite particle number.

Later on, you write,

*There are no creation and annihilation events in the evolution of the quantum state - that is just false.*

Except the evolution of the quantum state is just the standard Schroedinger evolution -- which contains creation and annihilation operators representing a change in the particle number (at least in the usual ways of interpreting QFT). This change in particle number would correspond to a creation or annihilation event.

And let's think of it in another way. Suppose we have a down quark, which emits a W^- to become an up quark, and then the W^- decays into an electron and anti-neutrino. If the particles are beables, then at the start of the experiment there is a down quark in the particle configuration, and at the end an up quark, an electron, and an anti-neutrino. That change must happen instantaneously. But in non-pilot wave QFT, the various particle numbers are extracted from the quantum state by itself (as there are no separate particle beables in the instrumentalist interpretation), and they always correspond to what is observed. Carrying that "always" over to the pilot wave theory means that there is never a moment where the particle number among the particle beables is out of step with the particle number defined in the quantum state. If the first changes discontinuously, then so must the second.

* "you have, of course, to express your uncertainty in terms of amplitudes rather than probabilities because this is quantum physics and using probabilities will give you the wrong answer"*

*... would still be false.*

Of course, there are places where you use probabilities in quantum physics. But not in the particular place I am discussing.

For example, suppose that we have electron-electron scattering. The standard way of calculating in perturbation theory is to draw various Feynman diagrams, calculate the amplitude from each of them, and add up those amplitudes to get the total amplitude, which you then square to get the cross section. The set of Feynman diagrams thus involves a superposition of states, each involving different particle creation and annihilation events. The point is that nowhere in this calculation, until the very end, do we make use of probabilities. There are interference effects between the different possible creation/annihilation paths. Interference is only possible if you are describing the system using amplitudes, not probabilities. If you treat creation/annihilation stochastically and say they have a certain probability of occurring in a particular timestep, you are not going to get interference between different potential trajectories with different patterns of particle creation/annihilation, and thus you will get the wrong result at the end of the calculation.
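The difference between the two calculational rules is easy to exhibit numerically (illustrative two-path amplitudes of my own choosing, not a real Feynman-diagram calculation): summing amplitudes before squaring produces an interference cross-term that summing probabilities cannot reproduce.

```python
import numpy as np

# Illustrative two-path amplitudes (numbers of my own choosing, not a
# real diagram calculation). Adding amplitudes before squaring gives an
# interference cross-term; adding probabilities cannot.
a1 = 0.6 * np.exp(1j * 0.0)       # path 1 amplitude
a2 = 0.6 * np.exp(1j * np.pi)     # path 2, opposite phase: destructive
amplitude_rule = abs(a1 + a2) ** 2               # approx. 0.0
probability_rule = abs(a1) ** 2 + abs(a2) ** 2   # 0.72
print(amplitude_rule, probability_rule)
```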

You might respond that the superpositions between the amplitudes represented in these diagrams exist in the quantum state rather than the particle state. But, as I argued above, the evolution equation for the quantum state has creation/annihilation events built into it, which have to correspond to what is happening with the particle beables.

Regarding your statement,

*Now you're just confused; there is no discontinuous evolution of the quantum state. d|psi>/dt = -iH|psi> is the equation for the continuous rate of change of the quantum state, and it is obeyed at all times in PWT.*

But the equation that describes the evolution of the Fock state in the standard model, as I stated in my post, cannot be reduced to that form. The starting point is the exponential form I listed in my post. In QM, one would simply expand the exponential, rearrange, and take the limit as the time step is reduced to zero to get the form of the SE as you describe it. But in QFT this can't be done because the Hamiltonian operator describes a discontinuous change of the Fock state, so one can't just discard the higher order terms in the expansion of the exponential, and there are issues with it being made of non-commuting operators (which, in practice, means that we have to add in time-ordering into the evolution equation, which is not reflected in that simple differential equation).

I should also discuss Lorentz symmetry and fermion doublers on the lattice, but that will have to wait for another day.

**Matt's response, round 3**

I realize that there is a lot in my comments for you to respond to, but in my defence, Dr. Cundy: if you didn’t write so much that was in need of correction, I wouldn’t have as much to say.

**Bohm vs. de Broglie, Again**

It seems you didn't actually read all of what I wrote, because it should be clear that Bohm's PWT is not just a reformulation of the Schrodinger equation. (You might as well say that de Broglie's PWT just **is** the Schrodinger equation in that case.) The S part doesn't "behave like a Newtonian particle" any more than the R part does; both are fields in configuration space. To get Bohm's PWT you have to say "there is a particle configuration moving along a trajectory in configuration space with velocity proportional to the gradient of S". The reinterpretation requires that additional equation (which is just the guidance law in de Broglie's PWT). Both formulations of ordinary PWT contain the Schrodinger equation, and add something to it (the guidance law, and the equilibrium hypothesis).

**No Additional Beables Required, Again**

You made the claim that when we have spontaneous creation/annihilation, the PWT is incomplete, and some additional beables are required over and above the quantum state and the particles in order to explain these events. I disagreed; the quantum state generates the propensities for the spontaneous creation/annihilation events just as it generates the motion of the particles between such events, so there is no further element required.

You are now supporting your claim by pointing out that spontaneous jumps require more than just differential equations. Which is completely irrelevant. The quantum state isn't a differential equation; its evolution is described by a differential equation. And the particles aren't differential equations; their motions/changes are described in part by differential equations and in part by the attribution of probabilities for spontaneous creation/annihilation events, and **both parts are generated by the quantum state**.

In other words, equations aren't beables. You ought to be able to understand that, so I was rather astonished to read this:

*"If you have spontaneous particle creation and annihilation events, then there is something in addition to the differential equations guiding the pilot wave and the particle. In other words, the pilot wave and particle by themselves are no longer sufficient to describe the physics."*

There is a term for your move from the first sentence to the second: non sequitur.

And if what you really meant all along was just that something more than differential equations would be required… why, exactly, is that a problem? Your objection has moved from “this kind of PWT theory is incomplete” to “I don’t like the way this kind of PWT works”.

**Evolution of the Quantum State, Again**

Skipping to the end of your comment, you continue to insist that QFT cannot be described with a continuously evolving quantum state, and I continue to reply that this is completely false.

Your justification for the claim is: *But in QFT this can't be done because the Hamiltonian operator describes a discontinuous change of the Fock state, so one can't just discard the higher order terms in the expansion of the exponential, and there are issues with it being made of non-commuting operators (which, in practice, means that we have to add in time-ordering into the evolution equation, which is not reflected in that simple differential equation).*

The Hamiltonian doesn’t change the state discontinuously in QFT any more than it does in QM. In both cases you have a vector in Hilbert space being acted on by a linear operator, producing another vector in Hilbert space which is equated to the rate of change of the original vector. The Hamiltonian in ordinary QM is also made up of operators which don’t commute (position and momentum, or even the ladder operators when you look at QHO). And your comment about time-ordering is irrelevant because the operators don’t evolve over time in the Schrodinger picture.

Starting from your own exponential expression for the time evolution operator, as long as H is independent of time (which, again, it is when we are working in the Schrodinger picture):

psi(t) = exp(-itH) psi(0)  =>  dpsi(t)/dt = -i exp(-itH) H psi(0) = -iH psi(t)

(And there’s no concern here about non-commuting operators messing up the derivative: no matter what operators it is made up with, H commutes with itself, and that is all that is necessary for the above to follow.)
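This is also easy to check numerically. Here is a sketch of my own, using a small random Hermitian matrix as a stand-in Hamiltonian assembled from non-commuting pieces: the finite-difference derivative of expm(-itH) applied to an initial state matches -iH psi(t).

```python
import numpy as np
from scipy.linalg import expm

# Numerical check (my own sketch; the 4x4 random Hermitian matrix is a
# stand-in Hamiltonian built from non-commuting pieces). The state
# psi(t) = expm(-i*t*H) @ psi0 satisfies dpsi/dt = -i*H*psi(t): H need
# only commute with itself for the derivative to come out this way.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
H = (A + A.T) + 1j * (B - B.T)    # symmetric real + i*(antisymmetric real) = Hermitian
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)

def psi(t):
    return expm(-1j * t * H) @ psi0

t, eps = 0.7, 1e-6
lhs = (psi(t + eps) - psi(t - eps)) / (2 * eps)   # central-difference dpsi/dt
rhs = -1j * H @ psi(t)
print(np.max(np.abs(lhs - rhs)))   # tiny: finite-difference error only
```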

I have mentioned Sean Carroll, who states that the Schrodinger equation is completely general and applies to QFT as much as to ordinary QM (just with the appropriate Hamiltonian, of course). Wallace’s discussion in the article you looked at is (albeit somewhat implicitly) mostly in the Schrodinger picture. The introductory QFT texts I have read start off in the Schrodinger picture and quickly move to other formulations for convenience, but nowhere is it claimed that the Schrodinger picture is impossible (and I certainly can see no obstruction to it) - and the move to e.g. the Heisenberg picture, or the path-integral formulation, is even given justification in those texts by the fact that they are interchangeable with the Schrodinger picture.

Any time I’ve seen the question of whether the Schrodinger picture is valid in QFT come up on sites like the physics stackexchange, the answer is in the positive. I specifically recall one such post referencing a paper by R. Jackiw titled “Schrodinger Picture for Boson and Fermion Quantum Field Theories” which described not only a quantum state evolving continuously over time, but even represented that quantum state as a complex-valued function on the configuration space of the field, exactly analogously to the situation in ordinary QM. (Not that this “wavefunctional” representation is necessary for the Schrodinger picture - you can work in the occupation number basis in the Schrodinger picture perfectly well.)

And all the quantum foundations literature I have read discusses QFT as having a continuously evolving quantum state; the question is mostly whether it should be understood as describing superpositions of particle configurations, or instead of field configurations.

I could probably further multiply references to support this claim about the Schrodinger picture in QFT if I had institutional access to academic works - or a higher budget - but alas, I lack those resources. Suffice it to say that every other source I have on the subject contradicts your insistence here that it can’t be done; and the reasons you have given that it can’t be done don’t hold up.

So I can confidently say that you are so wrong about this, and it impacts your critique of PWT at more than one point - and even, it seems, your understanding of how QFT can be interpreted more generally.

**Your Imagination Is Wrong**

Now, regarding whether creation/annihilation events would require discontinuous changes in a PWT with a field ontology, you write:

*"What I think is relevant is that I would imagine that in the sort of field theory you are describing, particle number would also be a topological quantity."*

Key words here, *"I would imagine…"*. And that's all it is. Outside of your imagination - when you actually try to work out what a PWT like this would look like - it's just not true that the particle number is a topological invariant of the field configuration.

Here’s another example, because you ignored the one I provided to Michael. The operator for the number of photons in a QED state is given by summing the number operators for all the different modes of the photon field (indexed by momentum and polarization). Those individual number operators are the products of the creation and annihilation operators for those modes. And those creation and annihilation operators are related to the operators for the field values by what is, essentially, a Fourier transform.

Now consider a PWT state with a definite field configuration - definite values that correspond, conceptually, to the field operators. The values that correspond to the ladder operators, then, are amplitudes for the waves in the field, and summing the appropriate products of the amplitudes gives you the value that corresponds to the "photon number" of the field. And guess what - since the field values can vary continuously, the amplitudes can vary continuously, and thus the "photon number" can vary continuously.
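To make that concrete, here is a deliberately crude toy of my own construction (not taken from any PWT paper): a classical scalar field on a 1D periodic lattice, with the “ladder amplitudes” taken as its Fourier components and the “photon number” analogue as the sum of their squared magnitudes. Mode frequencies, polarizations, and normal ordering are all ignored:

```python
import numpy as np

# A definite classical "field configuration" on a 1D periodic lattice:
# a localised wave packet (hypothetical numbers, purely illustrative).
N = 64
x = np.arange(N)
phi = np.exp(-((x - N / 2) ** 2) / 20.0) * np.cos(0.5 * x)

def mode_number(field):
    # "Ladder amplitudes" as the Fourier components of the field; the
    # analogue of the total photon number sums |a_k|^2 over all modes.
    a = np.fft.fft(field) / np.sqrt(len(field))
    return np.sum(np.abs(a) ** 2)

# Deform the field continuously: the "photon number" analogue varies
# continuously too, with no restriction to integer values.
numbers = [mode_number(s * phi) for s in (1.0, 1.05, 1.1)]
print(numbers)
```

By Parseval’s theorem this “number” is just the field’s squared norm, so scaling the field by s scales it by s squared - manifestly a continuously varying, non-integer quantity.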

It should be obvious, but maybe it is worth stating outright here: in a PWT with a field ontology - i.e., where the "hidden variables" include a field configuration - that field configuration is not operator-valued. Its relationship to the quantum field (the field of operators) is analogous to the relationship between the particle positions and the position operator in the ordinary PWT of non-relativistic QM. The particle number of the quantum field is restricted to integers (more correctly: has eigenvalues restricted to integers) because of the commutation properties of the operators. The corresponding property of the hidden-variable field configuration has no such restriction, because it isn't an operator and doesn't have eigenvalues in the first place.

It should also be obvious that the particle number doesn’t even always take integer values - usually it has no definite value at all, since the quantum state is typically in a superposition of different eigenstates of the particle number operators. Also see the next section for more on that.

Definite particle numbers are extracted from the quantum state, as you say - by the “collapse” part of the formalism, which assumes an interaction (e.g., with a particle detector) to decohere the quantum state. Those definite particle numbers correspond to what we observe **via those interactions** - but between those interactions, when the quantum state is free to evolve to states without any definite particle number, there is nothing constraining PWT field beables to configurations with integer values for the “particle number” property. (Obviously that doesn’t apply to particle beables, but that isn’t what this paragraph is about.)

(To speak more precisely about this would require going into detail about how, in PWT, the quantum state of a subsystem can be defined from the quantum state of the larger system, and the hidden variable configuration in the part of the larger system outside of the subsystem. The keywords to look up for more info on that in pilot-wave literature are “conditional wavefunction”.)

**No Creation/Annihilation Events in the Quantum State**

There is no basis in the standard formalism for your whole conception of creation and annihilation events being part of the evolution of the quantum state. Here is what the quantum state looks like: a superposition of states with different numbers of particles with different positions or momenta, polarizations, spin states, etc. Here is what the Hamiltonian, containing those creation and annihilation operators, does to the quantum state (via the Schrodinger equation): it continuously changes it to a different superposition. So where are the creation/annihilation events in this?

The Hamiltonian acting on the quantum state, at every moment in time, contains creation and annihilation operators at every location in space (or, in a different basis, with every momentum, completely delocalised in space). Are you really claiming that creation/annihilation events are happening literally everywhere, all the time? If not, what distinguishes the operators corresponding to the events that actually happen, from the ones corresponding to the events that don’t actually happen? The answer is clear: there is nothing (in the standard formalism) to distinguish them, and so you cannot say that creation/annihilation events are part of the evolution of the quantum state. You can only have genuine creation/annihilation events in the local beables, which the standard formalism does not have.

The only thing you may be able to say at this point is that the quantum state represents our uncertainty about the actual motions and creation/annihilation events, and we can extract probabilities (or maybe more weakly, “pre-probabilities”) for histories of events, which is after all what the PWT with stochastic creation/annihilation events does as well. But there is a significant difference. The probabilities in PWT are ontic: they are propensities for the system to behave in a certain way, and when the process plays out, you end up with a definite history for the system. Whereas (to the best of my knowledge, but we’ll see in your future posts) it remains to be shown if we can extract probabilities for definite histories in a coherent and consistent way in psi-epistemic theories. Moreover, I think it is highly debatable whether “representing uncertainties using amplitudes” even makes sense.

**Uncertainty and Probability**

And on that topic…

*Of course, there are places where you use probabilities in quantum physics. But not in the particular place I am discussing.*

That doesn’t even remotely address what I was talking about.

*You might respond that the superpositions between the amplitudes represented in these diagrams exist in the quantum state rather than the particle state. But, as I argued above, the evolution equation for the quantum state has creation/annihilation events built into it, which have to correspond to what is happening with the particle beables.*

Of course that’s how I would respond, because that is the PWT perspective. And your objection to it here has no basis, as I’ve already argued.

Until next time...

Matthew: "It should be obvious, but maybe it is worth stating outright here: in a PWT with a field ontology - i.e., where the "hidden variables" include a field configuration - that field configuration is not operator-valued. Its relationship to the quantum field (the field of operators) is analogous to the relationship between the particle positions and the position operator in the ordinary PWT of non-relativistic QM. The particle number of the quantum field is restricted to integers (more correctly: has eigenvalues restricted to integers) because of the commutation properties of the operators. The corresponding property of the hidden-variable field configuration has no such restriction, because it isn't an operator and doesn't have eigenvalues in the first place."

I had thought that the whole conceptual advantage of non-relativistic PWT was that a particle's position and momentum both *had* definite values, to infinite precision, and all the behavior that makes it seem that it doesn't was due to guidance from the pilot wave. Thus I assumed that in a relativistic PWT, the number of particles present is also a definite value at any given moment, even in a field ontology where each "particle" is just an excitation of the overall field. Is that wrong? I don't see that it matters that the "hidden-variable field configuration" can have a continuously varying particle number, because that represents the pilot wave, not the actual matter that constitutes the things we really perceive.

Put another way, if a pilot-wave formulation of QFT doesn't say that a quantum system actually *is* in a Fock state at every moment, with a probability of jumping to another Fock state in the next moment - what beables *does* it claim exist? Only the pilot wave itself?

"The probabilities in PWT are ontic: they are propensities for the system to behave in a certain way, and when the process plays out, you end up with a definite history for the system."

Yes, and that's why the changes from one Fock state to another are an issue. I don't see any way to fit continuous motion within a sector of configuration space and discontinuous jumps between sectors into a single process that operates in reality. You have to split what the pilot wave does into two parts, the continuous and the stochastic, to run any calculations and get testable predictions. And once you've split them conceptually, what reason is there to claim they aren't split in reality?

So you end up with three classes of beable: the actual matter, regarded as either particles or a field; the pilot wave, guiding the matter's movements to produce non-local correlations; and the random "swerves" that jump the system from one sector to another. What causes the swerves? *shrug* Who knows?

Michael, thanks for continuing the discussion!

*Thus I assumed that in a relativistic PWT, the number of particles present is also a definite value at any given moment, even in a field ontology where each "particle" is just an excitation of the overall field. Is that wrong?*

Yes, it is wrong.

Different kinds of pilot-wave theories for QFT are possible. With a “field ontology”, you have a definite field configuration (e.g., an electromagnetic field in space). Particles aren’t real in this case; the particle behaviour is entirely the result of the guiding of the field configuration by the quantum state. And the point of the particles in ordinary PWT isn’t to have particles with infinitely precise positions; it is simply to have a definite representation of what is going on in the world, rather than the entirely indefinite representation that is the quantum state in orthodox QM. (Another advantage of PWT is that the role of the quantum state is entirely clear as well: it directs the motion of what is going on in the world.) In a PWT with a field ontology, the definite picture of the world is provided by the field configuration; the particles aren’t necessary.

PWT with field ontology is where we’re having the discussion about fields having “particle numbers” not restricted to integers.

With a “particle ontology”, you have a definite configuration of particles (some definite number of particles existing in definite locations), and that configuration (like the field configuration in a field ontology) is guided by the quantum state. There is certainly an advantage to such a theory in that it is much easier to explain the reduction to the non-relativistic PWT, since you already have particles moving around.

PWT with particle ontology is where we’re having the discussion about stochastic creation/annihilation events.

It is possible for these to be combined, with some quantum fields (e.g., bosonic ones) being associated with a field ontology, and some quantum fields (e.g., fermionic ones) being associated with a particle ontology. It is also possible for only some of the quantum fields in a theory to be associated with local beables (e.g., fermions with a particle ontology, while the electromagnetic field is merely an aspect of the quantum state - this is still sufficient to give a definite picture of the world).

*I don't see that it matters that the "hidden-variable field configuration" can have a continuously varying particle number, because that represents the pilot wave, not the actual matter that constitutes the things we really perceive.*

No. Once again - this was even stated, in bold, in my first comment on this post! - the “pilot wave” is the quantum state. (Pop-sci explanations of PWT often speak as if the pilot-wave is a field in physical space. It isn’t. It’s the same quantum state that appears in orthodox QM.) In a field ontology, the “hidden variable” field configuration is what represents the matter that constitutes the things we really perceive.

*So you end up with three classes of beable: the actual matter, regarded as either particles or a field; the pilot wave, guiding the matter's movements to produce non-local correlations; and the random "swerves" that jump the system from one sector to another. What causes the swerves? shrug Who knows?*

You know, it's one thing to go in criticising a theory when you only have a pop-sci level of understanding of it; it’s another to ignore something your interlocutor has repeatedly stated. The quantum state is what generates the propensities for the creation/annihilation events; it causes them stochastically, and at the same time it causes the motion of the particles on intervals between such events.

Of course, as I mentioned in an earlier comment, I think one may regard the quantum state as an abstraction from the causal powers of physical things. Perhaps you could say it is something like a representation of the forms of things that exist, while the local beables - particles or fields - are the matter. So in that case what causes the creation/annihilation events is the natures of existing physical things (typically the things existing nearby the event, though entanglement can result in action-at-a-distance). And that seems to me to be a perfectly satisfactory answer, just as you could say that the nature of a carbon-14 atom causes it to move with constant velocity in the absence of forces and at the same time stochastically causes it to decay with a certain probability in a given time period.

I do not regard that metaphysical analysis as necessary for the cogency of PWT **as a scientific theory**, though I do certainly agree with you and Dr. Cundy that a coherent metaphysics is necessary for a complete philosophy.

**Fermion doublers, Lorentz symmetry, Schroedinger picture, and more.**

This is going to have to be my final response on this post; as you can probably tell from my delay in writing, I have got too much else going on at the moment, and I think we have both reached the point where we are just repeating ourselves. On those matters where I do want to say more, I think I need to work things out in more detail, which requires the time I devote to a full blog post rather than a conversation in a comment.

Lattice QCD and Fermion Doublers

*From what I have read, fermion doubling appears to be related to the way that the discrete derivative operators distort the energy-momentum dispersion relation. There is a derivative operator on lattice theories that does not distort the dispersion relation and it seems to be a natural one to use (basically, interpolate the field with Fourier components defined from a DFT of the field values at the lattice points, then take the derivative of the interpolation).*

You seem to have misunderstood what I was saying here. The issue isn't in representing derivatives without fermion doublers on the lattice. This is done by (for example) the Wilson Dirac operator, which adds a *∇^{†}∇* term to the basic difference operator *∇*. The problem is doing so while respecting chiral symmetry (which is involved in the distinction between left and right handed fermions). There is a theorem, the Nielsen–Ninomiya theorem, which shows that it is impossible to satisfy chiral symmetry on the lattice given a few other assumptions, such as locality (which is needed to maintain the correct continuum limit). Locality here is defined as the requirement that psibar(x)Dpsi(x+delta), where D is the representation of the Dirac operator, decreases at least exponentially with delta.
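As a quick illustration of the doubling problem itself (a standard 1D toy with my own hypothetical parameters; it shows the doublers and the Wilson fix, not the chiral-symmetry obstruction that is the real issue here):

```python
import numpy as np

# Toy 1D dispersion relations for massless lattice fermions (a = 1).
# The naive central difference gives E(p) ~ |sin p|, which vanishes both at
# p = 0 (the physical fermion) and at p = +/- pi (the unwanted doubler).
p = np.linspace(-np.pi, np.pi, 2001)
naive = np.abs(np.sin(p))

# The Wilson term r(1 - cos p) acts as a momentum-dependent mass that lifts
# the doubler at p = +/- pi -- at the price of breaking chiral symmetry.
r = 1.0
wilson = np.sqrt(np.sin(p) ** 2 + (r * (1 - np.cos(p))) ** 2)

n_naive = int(np.sum(np.isclose(naive, 0.0, atol=1e-3)))
n_wilson = int(np.sum(np.isclose(wilson, 0.0, atol=1e-3)))
print(n_naive, n_wilson)  # 3 zeros (p = 0 plus the doublers) vs 1 (p = 0 only)
```

The Wilson term removes the spurious zero-energy modes at the edge of the Brillouin zone, which is exactly why it cannot anticommute with γ₅ and hence breaks chiral symmetry.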

That last paragraph ought to be qualified, because there is a lattice variant of chiral symmetry -- the Ginsparg-Wilson symmetry -- and a known family of operators -- the overlap operators -- which satisfy it exactly (and various other solutions which satisfy it approximately). However, the operators aren't symmetric between left and right handed fermions, and it is still difficult to define projectors onto the left and right handed fermion states. Also, the overlap operator is only exponentially local (including in time), so it can't really be adapted to a Schroedinger picture, as it compares states at future and past times.

Why is chiral symmetry significant? For QED and QCD it is only an approximate symmetry of the Lagrangian, so one can argue that breaking it with the Dirac operator is the same as an additive renormalisation of the fermion mass. However, in the electroweak sector, left and right handed fermions behave differently, so you need well-defined projectors onto left and right handed states, and you need a Dirac operator consistent with those projectors, which means that you need a particular notion of chiral symmetry. I am not aware that this has been successfully done in a lattice gauge theory.

Renormalisation as change in basis

You also questioned my treatment of renormalisation as (in effect) a change in basis. My inspiration for saying this is the Wilson spin-block renormalisation program, but in a slightly modified form. The original approach by Wilson changed the dimension of the system: he averaged over a fine-grained lattice to give a coarse-grained lattice. This effectively creates a new action for the averaged variables, with similar terms (the possible terms in the action are still constrained by the same symmetries), but different coefficients. This involves a loss of data, and is not reversible.

But it is also possible to construct analogous transformations which keep the dimensionality of the system the same, and which involve no loss in data and which are reversible. This transformation will lead to an equivalent physical theory; you would just have new creation and annihilation operators which are some function of the old operators. If the transformations are consistent with the symmetries, then the new action would have the same basic terms and interactions as the original action, but with different (and, in a momentum basis, momentum dependent) coefficients. The overall analysis is the same as in Wilson's original transformation: there will be a fixed point of this transformation, and various trajectories in parameter space leading to this fixed point, which (if the transformation changes the underlying correlation length of the theory) corresponds to the renormalised action. So you would have a new mass, Fermion weight, and electric charge which are functions of the original parameters, the momentum, and the various variables which parametrise the transformation. When we perform a traditional renormalisation procedure in perturbation theory, we replace the mass, electric charge, and Fermion weight with new parameters which are functions of the original parameters, the momentum, and the details of the renormalisation scheme. In other words, the change of basis can (if constructed in the right way) reproduce what is done in perturbative renormalisation -- except the change in basis construction doesn't depend on and can be done without perturbation theory.

If the new action is mathematically equivalent to the unrenormalised action, have we changed anything through this transformation? Yes, because we need to insert initial and final states into the calculation of the amplitude. These will have to be expressed in some basis, and the claim is that only using the renormalised basis gives mathematically consistent results. For consistency, we ought to use the same basis in the action as for the initial and final states. This can be done by expressing the action in the renormalised basis. This will be theoretically cleaner -- there will be no need to use a regulator as we perform the calculation -- but it is much harder, as the action is far more complicated (and I am not sure it has been calculated). The alternative is the usual method of working in a regulated unrenormalised basis, then renormalising (i.e. rotating to the correct basis), and last of all removing the regulator. The two approaches give the same result, so naturally we use the easier (if philosophically less clean) approach.

Lorentz symmetry

*Lorentz symmetry is imposed as an additional constraint when formulating the theory, ultimately due to empirical considerations.*

I think this is the point where we differ. In the modern understanding of field theory, symmetry plays a far more important role than you seem to realise. You are looking at things from an empiricist point of view, so "empirical considerations -> physics -> philosophy", while I am thinking about things from the ontological perspective, "philosophy -> physics -> experimental results." In other words, a philosophy which explains why the experimental results are only consistent with physical theories which satisfy certain symmetries is superior to a philosophy in which those symmetries have to be added on in an ad hoc manner. In the case of Lorentz symmetry, we are discussing the philosophy of space and time. A philosophy which states that there is an objective and fundamental foliation in time -- which means that Lorentz symmetry is not a fundamental property of space-time -- cannot explain why the physics satisfies Lorentz symmetry without bringing in additional assumptions. In other words, there are a vast number of possible physical theories consistent with that philosophy, and out of those you need to arbitrarily select the single one which leads to the empirical results implying Lorentz symmetry, without a good *a priori* reason to do so. (And, for that matter, there is a good *a priori* reason not to do so, since your underlying philosophy is consistent with Galilean relativity, and its E^{3} × E^{1} geometry.) A philosophy that has no objective foliation in time, but states that at a fundamental level space and time map to a Minkowskian hyperbolic geometry, has a good *a priori* reason for supposing that the physics will satisfy Lorentz symmetry.

Or, if we think of it in terms of probability: if E represents the empirical data, F the philosophy where there is an objective foliation in time, and L the philosophy that space-time at a fundamental level satisfies Lorentz symmetry, then

P(E|L) = 1

While

P(E|F) = 1/(the total number of possible theories consistent with that understanding of space and time, Lorentz symmetric and non Lorentz-symmetric, i.e. a large number).

We can turn this around using Bayes' theorem, and conclude that

P(L|E)/P(F|E) = large number.
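For completeness, here is the Bayes step written out explicitly; note that it implicitly assumes comparable prior probabilities for the two philosophies, P(L) ≈ P(F):

```latex
\frac{P(L|E)}{P(F|E)}
  = \frac{P(E|L)}{P(E|F)} \cdot \frac{P(L)}{P(F)}
  \approx \frac{1}{1/N} \cdot 1
  = N,
```

where N is the large number of possible theories consistent with the foliation philosophy.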

Obviously, this discussion is a bit simplistic, since I have excluded gauge symmetry and the other symmetries, and there is also freedom in the other parameters (which would, if I were to do this illustration properly, reduce both probabilities by the same factor); plus it isn't Minkowski geometry we should be discussing but the Riemannian geometry of general relativity. But none of these affect the final conclusion. There are, of course, other reasons for preferring Lorentz symmetry/foliation philosophies, which would have to be taken into account. But as long as there is a viable philosophy of physics which proposes a fundamental Lorentz symmetry in its philosophy of space-time, then that ought to be strongly preferred over a viable philosophy of physics whose philosophy of space-time denies Lorentz symmetry at the fundamental level (but possibly allows it to emerge at the empirical level, with no good *a priori* reason to choose that possibility over others which lack the symmetry).

My own philosophy of time, as I have outlined in other posts, is a B-theory with a subjective notion of the present, but objective notions of the direction of time and of which events (within a light cone) are earlier or later. I adopt this philosophy largely because it combines the argument above with avoiding the problems in the more usual "static block" formulation of the B-theory (which denies an objective direction in time and thus has problems defining causality).

Conservation of energy

You question my statement that conservation of energy in QFT is due to locality (rather than symmetry).

It is true that in a classical theory, the conservation of energy arises from the symmetry via Noether's theorem. There are, however, two problems when applying this to quantum physics. The first is that the object conserved in Noether's theorem is the first row of the stress-energy tensor. This is, however, not the same as energy or momentum in quantum physics, which are defined in terms of the eigenvectors of the time evolution/spatial derivative operators. Secondly, the derivation of Noether's theorem assumes the classical equations of motion are satisfied, which is not the case in quantum physics.

However, we do still have energy conservation in quantum physics, but for a different reason. I'll just use the interaction term in the massless Dirac part of the action to illustrate this, but the same principle applies to the whole action. We start in the location basis

*\int d^{4}x psibar(x) γ_{μ} eA^{μ}(x) psi(x)*

and Fourier transform it to give,

*\int d^{4}p_1 d^{4}p_2 d^{4}p_3 \int d^{4}x e^{ix(p_1 - p_2 - p_3)} psibar(p_1) γ_{μ} eA^{μ}(p_2) psi(p_3)*

We then integrate over x to give a delta function

*\int d^{4}p_1 d^{4}p_2 d^{4}p_3 psibar(p_1) γ_{μ} δ^{4}(p_1 - p_2 - p_3) eA^{μ}(p_2) psi(p_3)*

And that delta function is what ensures the conservation of energy and momentum. This only arises because the theory is local, i.e. there is no additional function of x appearing in the action. This ultimately is why the conservation of energy and momentum at each vertex is among the Feynman rules in perturbation theory.
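The same mechanism can be seen in a discrete toy (my own minimal sketch: on a periodic lattice the integral over x becomes a finite sum, and the delta function becomes a Kronecker delta enforcing momentum conservation):

```python
import numpy as np

# Discrete toy of "integrating over x gives a delta function": on a periodic
# lattice of N sites, sum_x exp(i x (p1 - p2 - p3)) equals N when momentum is
# conserved (p1 = p2 + p3 mod 2*pi) and vanishes otherwise.
N = 32
x = np.arange(N)
k = lambda n: 2 * np.pi * n / N  # the allowed lattice momenta

def vertex_sum(n1, n2, n3):
    # Sum over all lattice sites x of exp(i x (p1 - p2 - p3)).
    return np.sum(np.exp(1j * x * (k(n1) - k(n2) - k(n3))))

conserved = abs(vertex_sum(5, 3, 2))   # p1 = p2 + p3: sum gives N
violated = abs(vertex_sum(5, 3, 1))    # p1 != p2 + p3: sum gives 0
print(conserved, violated)
```

If the action contained an extra x-dependent function at the vertex, the phases would no longer cancel and the momentum-violating terms would not vanish, which is the sense in which locality does the work here.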

Schroedinger picture in QFT

You went to some length in your last post defending the Schroedinger picture in QFT. I am not sure why you did this, because I have not denied that it is possible and consistent with field theory. I have stated that there are complications when it comes to a quantum gauge theory with a Maxwell action. But saying there are complications does not mean that it is not possible. And, obviously, for those QFTs which are not gauge theories there aren't these complications and it is straightforward.

Where I disagree with you is not over whether QFT can be expressed in a Schroedinger picture, but over whether the evolution of the quantum state in that picture can be written as a straightforward partial differential equation. This is a second assumption you have been making, separate from the Schroedinger/Heisenberg picture discussion, and it is this assumption I disagree with.

But you have given me food for thought here, and I am going to need to flesh out my response in detail, which I can't do in a comment. (Rather, I realised what I was going to write here was incorrect, so I need to think a bit more about this.) I will need to prepare a full blog post to give me the excuse to work through the details. [Not the next post -- I would like to get to the end of my planned sequence of posts first to avoid getting bogged down in the details here.]

Discontinuities in QFT evolution.

*The operator for the number of photons in a QED state is given by summing the number operators for all the different modes of the photon field (indexed by momentum and polarization). Those individual number operators are the products of the creation and annihilation operators for those modes.*

OK.

*And those creation and annihilation operators are related to the operators for the field values by what is, essentially, a Fourier transform.*

So the operators for the field values are basically what I call the creation and annihilation operators in the location basis.

*Now consider a PWT state with a definite field configuration - definite values that correspond, conceptually, to the field operators.*

Technically it would be the field operators applied to a vacuum state. The eigenstates of those operators don't really correspond to anything physical.

*The values that correspond to the ladder operators, then, are amplitudes for the waves in the field, and summing the appropriate products of the amplitudes gives you the value that corresponds to the "photon number" of the field.*

This is where I struggle to understand what you mean. The "values" here are states rather than numbers: a certain configuration of creation operators applied to a vacuum state. As such, one can't really talk about summing the products of the amplitudes -- I'm not even sure what that means in this context. One can get one particular basis state configuration by a suitable product of creation operators applied to a vacuum state. You could also discuss the quantum state as being a superposition of different states, which you get by adding together different basis state configurations, each multiplied by a particular amplitude. I guess that you define the set of all those amplitudes as a field configuration. We will need one set for each different type of particle (e.g. photons, each quark, each anti-neutrino, and so on). These sets (particularly those for Bosonic states) are also going to be of infinite size, since there are an infinite number of possible basis field configurations. The photon number is defined for each of these basis state configurations, but not for the superposition. One can get an expectation value for the photon number in the usual way, but that just corresponds to an average value over numerous different measurements, not an individual snapshot of reality.

*And guess what - since the field values can vary continuously, the amplitudes can vary continuously, and thus the "photon number" can vary continuously.*

I can understand how your superposition can vary continuously, although I disagree that the photon number is defined for this superposition (you also state this below). In my interpretation, the superposition indicates the possible physical configurations (in a particular basis), one of which will be actual, the likelihood for which is determined by the amplitude in front of the particular term in the superposition. I understand that those amplitudes in front of each term in the superposition could vary smoothly. Take, for instance, the term in the Hamiltonian that corresponds to a W^- decay into an electron and anti-neutrino. Previously, the superposition would have contained terms which contain that W^- at that location, and no electron or anti-neutrino. At the next instant in time, one would have the amplitudes for those terms containing the W^- decreasing a little, and those terms containing the electron or anti-neutrino increasing a little (depending on the previous amplitude for the W^-). This particular change of state is not associated with a spatial derivative, and thus it is not obvious to me that it will contribute to the probability current.

*It should be obvious, but maybe it is worth stating outright here: in a PWT with a field ontology - i.e., where the "hidden variables" include a field configuration - that field configuration is not operator-valued. Its relationship to the quantum field (the field of operators) is analogous to the relationship between the particle positions and the position operator in the ordinary PWT of non-relativistic QM.*

Again, I am not quite with you here in your terminology, but I think I see where you are going.

*The particle number of the quantum field is restricted to integers (more correctly: has eigenvalues restricted to integers) because of the commutation properties of the operators. The corresponding property of the hidden-variable field configuration has no such restriction, because it isn't an operator and doesn't have eigenvalues in the first place. It should also be obvious that it isn't even true that the particle number has integer values - usually it has no definite value at all, any time the quantum state is in a superposition of different eigenstates of the particle number operators (which it typically is).*

*Definite particle numbers are extracted from the quantum state, as you say - by the “collapse” part of the formalism, which assumes an interaction (e.g., with a particle detector) to decohere the quantum state. Those definite particle numbers correspond to what we observe **via those interactions** - but between those interactions, when the quantum state is free to evolve to states without any definite particle number, there is nothing constraining PWT field beables to configurations with integer values for the “particle number” property.*

OK, so I think I (finally) understand what you are saying here. As I said, I will need to spend more time than I can devote now to working through the details. I am a little bit nervous about mentioning a "collapse" part of the formalism, although I think I understand that you mean the usual decoherence processes; albeit these don't (in other interpretations of quantum physics) determine what state you end up with. I am also concerned about the practicalities of this.

Take, for example, the amplitude for there being an electron at location x. In practice, this would in itself be a superposition of states -- the set of basis states contains all those with an electron at that location together with any other combination of other fields at that location and fields at any other location. So, if we consider the standard model Hamiltonian, the change in this amplitude over the next timestep would have contributions from the amplitudes for electrons at neighbouring sites (with the other fields taking the same values), and also a contribution from the amplitudes for all possible basis configurations which have at least one W^- and no anti-neutrino at that site (with everything else being the same, but still an infinite number of amplitudes). So the rate of change (in time) of this amplitude would depend on the spatial differential of the electron amplitude plus an infinite number of amplitudes associated with the W^--present and anti-neutrino-absent basis configurations. My concern is that when you construct the probability current associated with this amplitude (and consequently the "particle" guidance equation), you will get the spatial derivative term, but the terms corresponding to what in other interpretations we call the creation and annihilation of particles will cancel out. So there is no change in particle number for the "particle" part of the pilot wave theory. At least, that's what I get just trying to figure out how this would work in my head. I would need to work through the details in full to be sure.

**Matt's response, round 4**

Dr. Cundy, thank you again for your comments. I really do appreciate your taking valuable time to engage with me on this subject. As you have said that was your last comment on this post, I'm not expecting a further response, but I do want to try to continue to explain my position.

**Fermion doubling**

I admit I don't completely understand the full import of the Nielsen-Ninomiya theorem, though I was aware of it from my reading. I *think* the kind of "lattice" model I have in mind evades the NN theorem by being "non-local" in the relevant sense, though I believe it is capable of recovering the correct continuum limit despite this. (My argument is that the operators at the lattice points obey the discrete analogue of the correct commutation relations; using an interpolation based on a discrete Fourier transform, you get operators for arbitrary points between the lattice points, which don't obey the correct commutation relations; but by averaging these operators over a small region a few times larger than the lattice spacing you can recover field operators obeying commutation relations which approach the correct ones, in the limit as the lattice spacing goes to zero.)

But again, I'm not suggesting that a lattice model should be taken as fundamental in any way, only that it could be a stepping stone to constructing a more appropriate theory. And again, if more work were put into pilot-wave theory (instead of it being ignored by most physicists for almost a century now, often for bad reasons!), I think there's a good chance these kinds of technicalities could be resolved.

**Renormalization**

Thank you for your comments on renormalization; I will need to look into things more deeply. Overall, I think you have said that this is a *difficulty* for pilot-wave theory but *not* a decisive argument against it, and I agree with you on that.

**Lorentz symmetry**

*"In the modern understanding of field theory, symmetry plays a far more important role than you seem to realise. You are looking at things from an empiricist point of view, so "empirical considerations -> physics -> philosophy", while I am thinking about things in the ontological perspective, "philosophy -> physics -> experimental results.""*

No, I am well aware of the importance that symmetry is held to have. My point is that there would be no reason to postulate a "philosophy" that specifically implies Lorentz symmetry in the first place, were it not for empirical considerations. It is postulated to explain what we observe. I agree with you that less ad-hoc postulates are to be preferred, all else being equal - that's just Occam's razor - but from my perspective, all else is *not* equal. Moreover I think you *vastly* overstate the degree to which postulating a neo-Lorentzian theory is ad-hoc, if it can really be called ad-hoc at all when all things are considered.

You compare a philosophy L which states that spacetime is fundamentally Lorentz-symmetric and has no foliation, with a philosophy F which states that spacetime has a foliation, with regards to their probability for predicting E, the evidence for Lorentz-symmetry. Of course P(E|L)=1. But *L is a more specific proposition than F* - it isn't implied by not-F, and you could multiply theories which lack a preferred foliation but also lack Lorentz-symmetry (e.g., spacetime could be Euclidean, or the metric could be diag(-1,-1,+1,+1) instead of Minkowskian, etc). And on the other side of things, Galilean-symmetry isn't implied by F; there's more to Galilean spacetime than a foliation (the first few chapters of Maudlin's book on the philosophy of spacetime illustrate this nicely). A more apples-to-apples comparison would be to match L against, say, N (for neo-Lorentzian), which says that there is an objective foliation but also a light-cone structure which constrains the dynamics. Then P(E|N)=1 as well, and Bayes tells us:

P(N|E)/P(L|E) = P(N)/P(L) = P(N|F)/P(L|not-F) * P(F)/P(not-F)

(The second equality follows since we are construing N and L such that P(N|not-F)=0 and P(L|F)=0.)
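The factorization above can be checked with arbitrary numbers consistent with those constraints (a quick sketch; the prior values below are placeholders of mine for illustration, not claims about what the actual priors should be):

```python
import math

# arbitrary priors, subject only to P(N|not-F) = 0 and P(L|F) = 0
P_F = 0.4
P_not_F = 1 - P_F
P_N_given_F = 0.3        # N posits a foliation plus light-cone structure
P_L_given_not_F = 0.25   # L posits Lorentz symmetry and no foliation

P_N = P_N_given_F * P_F            # law of total probability, P(N|not-F) = 0
P_L = P_L_given_not_F * P_not_F    # law of total probability, P(L|F) = 0

# with P(E|N) = P(E|L) = 1, the posterior ratio equals the prior ratio,
# and the prior ratio factorizes as stated
lhs = P_N / P_L
rhs = (P_N_given_F / P_L_given_not_F) * (P_F / P_not_F)
assert math.isclose(lhs, rhs)
```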

At the level of generality we are talking about here, I don't really think that you can say P(L|not-F) is all that much greater than P(N|F). I'm willing to grant that it may be somewhat greater - the foliation and the light-cone structure have different symmetries, so in some sense a theory that combines them is more complicated, maybe less expected, than a theory which combines a foliation with a Galilean inertial structure rather than a light-cone structure. That is a bit subjective, though - the foliation and the light-cone structure are not incompatible on a fundamental level; they're just different structures which play different roles in a neo-Lorentzian theory. Another route you could go is to say that P(N|F) is lower than P(L|not-F) because there are more options for kinds of structures that could be added given F than given not-F. And I would counter, maybe not - you could add a Galilean inertial structure to a spacetime model initially without a foliation. That structure would itself bring in a foliation, but one could deny that it is to be identified with the metaphysically privileged, A-theoretically relevant foliation of F. Maybe that denial is itself ad-hoc, but again, that judgement is a bit subjective.

You might argue that P(F) is intrinsically lower than P(not-F) because F postulates a structure that not-F doesn't. And I would counter with the argument that P(F) is actually higher than P(not-F) on other considerations; the non-local phenomena of QM being one, and philosophical arguments in favour of the A-theory of time being another (arguments which are not sufficiently addressed merely by allowing for an objective direction of time as in your B-theory).

So overall, I don’t see that a neo-Lorentzian theory is at any disadvantage to the way that relativistic spacetime is traditionally conceived.

Another way I might make my point is to say that a pilot-wave theory, with a foliation, *isn't* denying Lorentz-symmetry only to have it magically re-emerge later without explanation. It says that Lorentz-symmetry is present in one part of the theory - and for that matter, you could stipulate that it *has* to be present in this part of the theory as a matter of "philosophy" or "the modern understanding of field theory" - it is present in the Lagrangian density that goes into constructing the Hamiltonian operator of the theory. It just also says that the Lorentz-symmetry isn't present in another part of the theory, the foliation, which is implicitly needed to make it possible to talk about the quantum state that the Hamiltonian acts upon in the first place.

Your notion of "fundamental Lorentz symmetry" conjoins the assertion that Lorentz symmetry is present in the theory in the appropriate way with the *denial* that a foliation exists. And I would say that this denial doesn't actually do any work in explaining what we observe. It isn't necessary to postulate that all the structures in the theory have to obey Lorentz symmetry, just that some part of it does in a way that explains our observations. And making this unnecessarily restrictive postulate prevents explanation of other aspects of reality that we observe (or at the very least, introduces a significant tension that, in my opinion, you are just dismissing, rather than addressing, by saying that only creation and annihilation events need to be caused locally).

**Conservation of energy**

Okay, so you interpret conservation of energy and momentum to hold in quantum theory if eigenstates of energy and momentum stay within the same eigenspace under the evolution of the quantum state. But the quantum state and its evolution is the same in PWT as it is in the orthodox theory, so PWT has no issue with conservation of energy and momentum. This is even true in the kind of "lattice" model I've been alluding to, where the number of degrees of freedom is made finite by restricting the mode expansion of the field. You get the exact same momentum-conserving delta function arising from the integration, ultimately because of the form that the Lagrangian density has. And energy is conserved as long as the Hamiltonian is independent of time in the Schrodinger picture (because the Schrodinger equation, or the time evolution operator, will always evolve an energy eigenstate simply by rotating its phase).
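Both points in the last sentence can be verified in a small finite-dimensional check (a sketch of mine, with a random Hermitian matrix standing in for the time-independent Hamiltonian): an energy eigenstate evolves only by a phase rotation, staying in its eigenspace, and the energy expectation value of an arbitrary state is conserved.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 6
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
H = (A + A.conj().T) / 2          # random time-independent Hermitian Hamiltonian

evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 2]                # an energy eigenstate
t = 0.7
U = expm(-1j * t * H)             # time evolution operator U(t) = exp(-itH)
psi_t = U @ psi0

# the eigenstate evolves only by a rotating phase: same eigenspace
assert np.allclose(psi_t, np.exp(-1j * t * evals[2]) * psi0)

# <H> is conserved for any state, not just eigenstates
chi0 = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
chi0 /= np.linalg.norm(chi0)
chi_t = U @ chi0
E0 = np.real(chi0.conj() @ H @ chi0)
Et = np.real(chi_t.conj() @ H @ chi_t)
assert np.isclose(E0, Et)
```

Since PWT leaves the quantum state and its unitary evolution untouched, this is exactly the sense in which the conservation laws carry over.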

This thread of the discussion picked up when you made the claim that particle creation and annihilation must be instantaneous and so must be described by some kind of discontinuous change, because the creation and annihilation operators are local in time and space, because otherwise energy and momentum wouldn't be conserved. I'm hoping now you can see how your objection doesn't follow: in a pilot-wave theory with a field ontology - i.e., a definite field configuration evolving in time in addition to the quantum state (instead of definite particle positions moving around in addition to the quantum state), there are no particles at a fundamental level and particle behaviour is emergent (and so "particle creation and annihilation" doesn't have to be instantaneous or involve any discontinuous change), yet energy and momentum are conserved in the same sense as in the orthodox theory.

**Schrodinger picture**

*"You went to a length in your last post defending the Schroedinger picture in QFT. I am not sure why you did this, because I have not denied that this is possible and consistent with field theory. ... Where I disagree with you is not that QFT can be expressed in a Schroedinger picture, but that the evolution of the quantum state in that picture can be written in terms of a straight-forward partial differential equation."*

Great. In the Schrodinger picture, when the Hamiltonian is independent of time, the time evolution operator is U(t)=exp(-itH), where t is the time difference between the final and initial states. And then the Schrodinger equation, dpsi(t)/dt = -iH*psi(t), can be immediately derived from the expression psi(t)=U(t)*psi(0), because H commutes with itself. This is the basis of my claim that the quantum state evolves continuously in time, via an *ordinary* differential equation for a vector in Hilbert space.
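This claim is also easy to check numerically (a finite-dimensional sketch of mine; the random Hermitian matrix is a stand-in for the Hamiltonian): the finite-difference time derivative of psi(t) = U(t)psi(0) matches -iH psi(t).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 5
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
H = (A + A.conj().T) / 2            # time-independent Hermitian Hamiltonian

psi0 = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi0 /= np.linalg.norm(psi0)

def psi(t):
    """psi(t) = U(t) psi(0), with U(t) = exp(-itH)."""
    return expm(-1j * t * H) @ psi0

# central difference approximates d(psi)/dt; compare with the Schrodinger
# equation d(psi)/dt = -iH psi(t), an ordinary differential equation in
# the finite-dimensional Hilbert space
t, dt = 0.3, 1e-5
dpsi_dt = (psi(t + dt) - psi(t - dt)) / (2 * dt)
assert np.allclose(dpsi_dt, -1j * H @ psi(t), atol=1e-6)
```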

(Since you're wondering why I've belaboured this point, just see all the times I've referenced the quantum state evolving continuously in these comments. Though part of me doesn't know anymore why I bothered - I realised a couple weeks ago that even if the quantum state evolved discontinuously, as long as it was by a unitary operator, you could have a commensurate discontinuous jump in the hidden variables and maintain the equivariance property; the derivation is analogous to the derivation of the stochastic jump rates in Duerr et al's Bell-type QFTs. In reality the quantum state evolves continuously, but even if it didn't, a pilot-wave theory would still be possible.)

When the quantum state is expressed as a complex-vector-valued function on configuration space, the Schrodinger equation may also take the form of a *partial* differential equation in configuration space. This is useful for constructing pilot-wave guidance equations, since the evolution of the quantum state can be used to define a vector field in configuration space, which is equated to the rate of change of the definite configuration which the PWT postulates to exist alongside the quantum state. But this isn't necessarily the only way to obtain guidance laws; I described the general process for doing so, and pointed to some references for more info, in an earlier comment (last section of comment 13 on this post, my response round 2, part 2).

I have not claimed that the Schrodinger equation can always be represented as a partial differential equation, though this can be done in more cases than I think you realise; and it isn't necessary for a viable PWT that the Schrodinger equation can be so represented. And what certainly is not necessary for PWT is that the Schrodinger equation be represented as a partial differential equation in physical space.

**Definite field values, etc**

*"This is where I struggle to understand what you mean. The "values" here are states rather than numbers."*

No, they aren't. I've explained this already, multiple times, in comments on this post - probably in comments on older posts on your blog as well. In a pilot-wave theory with a field ontology (rather than a particle ontology), the "hidden variable" - the extra element postulated over and above the quantum state - is a definite field configuration (rather than definite particle positions).

So when I was talking about a "PWT state with a definite field configuration", I wasn't referring to the quantum state at that point. The state of a pilot-wave theory contains more than just the quantum state. And when I was talking about "amplitudes for waves in the field" I wasn't talking about quantum-mechanical amplitudes, but literally just numbers that specify the magnitudes of various modes of the definite field configuration. So yes, you literally can sum (or integrate, in the continuum case) the products/squared magnitudes of those numbers and get a number out at the end. I did try to be pretty specific, but apparently we still had a breakdown in communication.
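To make this concrete, here is a minimal sketch of the kind of thing I mean (my own toy construction, assuming a real scalar field configuration on a one-dimensional periodic lattice; the seed and parameters are arbitrary): decompose a definite field configuration into Fourier modes, form the mode amplitudes, and sum their squared magnitudes to get a continuously varying analogue of "particle number".

```python
import numpy as np

L, m = 32, 1.0                        # lattice sites, field mass
k = 2 * np.pi * np.fft.fftfreq(L)     # lattice momenta
omega = np.sqrt(m**2 + 4 * np.sin(k / 2)**2)  # lattice dispersion relation

# a definite (non-operator-valued) field configuration and its momentum
rng = np.random.default_rng(3)
phi = rng.standard_normal(L)
pi = rng.standard_normal(L)

# mode amplitudes of the configuration (unitary-normalized transform);
# these are just numbers, not quantum-mechanical amplitudes
phi_k = np.fft.fft(phi, norm="ortho")
pi_k = np.fft.fft(pi, norm="ortho")
a_k = (omega * phi_k + 1j * pi_k) / np.sqrt(2 * omega)

# summing the squared magnitudes gives a continuously varying number,
# with no constraint to integer values
N = float(np.sum(np.abs(a_k)**2))
print(N)  # generically not an integer
```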

Maybe I haven't been doing as good a job of explaining things as I could, but… it also feels like you haven't taken the time to properly understand the ideas you are criticising.

I've worked out a concrete example of the kind of regularized pilot-wave theory I've been alluding to, for quantum electrodynamics. (I believe I can accomplish the same thing for the electroweak theory, at least in principle, since the inclusion of the scalar field actually helps draw out the gauge-invariant degrees of freedom of the system. QCD presents a greater difficulty, however.) I can send it to you if you're interested… it might help you to have an actual example of the things I am talking about.

*"I am a little bit nervous about mentioning a "collapse" part of the formalism, although I think I understand that you mean the usual decoherence processes"*

By this I simply meant the part of the formalism where you apply the Born rule, instead of the unitary evolution of the quantum state.

*"I am also concerned about the practicalities of this."*

In the paragraph that follows (last paragraph of your last comment), it looks to me like you are still thinking about things in the wrong way. The probability current in PWT is not a vector in 3D space and it is not constructed for individual particles independent from all the other particles in the system. Rather, it is a vector in configuration space that describes the motion of all the particles, and/or of all the degrees of freedom of the fields, simultaneously. Now, that doesn't immediately answer your question about how the changes in particle number happen, but rather than try to explain it to you badly here, at this point I think it would be better to simply recommend a closer read of Duerr et al's papers on Bell-type QFTs (along with subsequent work in that area).

**Closing thoughts**

Looking back and taking stock of our discussion, I honestly don't think a single one of your objections to PWT has hit its target. You basically had seven objections in your original post. The first four were completely unfounded:

*(1) QFT can’t be expressed by differential equations, and PWT needs it to be expressed that way*

**Objection unfounded**, for several reasons: the representation of QFT is not as different from that of QM as you assert; PWT does not need to be in Bohm's formulation, and the equivalence of the empirical results with orthodox QFT can be proven just as well in de Broglie's formulation; and de Broglie's formulation is more easily generalizable and doesn't have the same requirements as Bohm's

*(2) Can’t decompose QFT evolution into a part describing particles and a part describing the pilot-wave*

**Objection unfounded**, this isn’t how PWT works even in Bohm’s formulation, much less in de Broglie’s

*(3) Can’t reproduce phenomena of creation and annihilation of particles*

**Objection unfounded**, and seems to be based on your own misconceptions about how PWT works; whether it is simple lattice models, or the more-refined Bell-type QFTs, or models with a field ontology, these phenomena can certainly be reproduced

*(4) Amplitudes in QFT don’t work the same way as they do in QM*

**Objection unfounded**; you failed to realise both that QFT can be represented in a way that is more similar to how NRQM is ordinarily done, and that the setting of PWT (and the relevant probability density) is in configuration space, not physical space

The remaining three objections in your original post:

*(5) PWT is non-local and breaks Lorentz-invariance*

**Objection overstated**; I understand the concern here, but as I’ve argued, I simply don’t think it has as much weight as it is generally held to have

*(6) PWT breaks gauge invariance*

Somewhere between **objection overstated** and **real difficulty, but plausibly surmountable**; I think it is extremely likely that there is some way to work with gauge-invariant degrees of freedom for the local beables of the theory; and what ultimately matters is the empirical and explanatory adequacy of the theory, and in that regard some of the same arguments can be made re: gauge symmetry as for Lorentz symmetry

*(7) Renormalization*

**Real difficulty, but plausibly surmountable**; as discussed, it seems like we both agree on that assessment

During our back and forth in the comments, we could say that one further objection arose to my use of a lattice-regularized model:

*(8) Doesn’t have a good continuum limit*

**Real difficulty, but plausibly surmountable**; there is already work being done to resolve this (progress is slow, but it is there); and orthodox QFT has a similar and related problem (the lack of a measure for an infinite-dimensional space)

Now, obviously, we are still going to have a disagreement here; your evaluation of the relative importance of certain aspects of the theory is going to be different than mine, etc. But I really think your overall assessment of pilot-wave theory is seriously flawed, particularly in regards to the first four objections you put forward, which are based on misconceptions.

For that reason, I hope your readers don’t simply take your words on this subject as gospel, but look into the matter themselves. I also hope you are open to continuing to look into this subject and refining your judgments. If you do, though, beware: I have seen more than one physicist erect and knock down a pilot-wave straw-man, without knowing they haven’t interacted with the real thing.


**Matt's response (part 1)**

You knew it was coming... the response from your resident pilot-wave fanatic. I did a word count and my response is... long. So I'm going to break it up into several comments. At a couple points I reference some articles; I'll post links to them in a separate comment. Here we go.

**On Bohm vs de Broglie**

You present what is effectively Bohm's second-order formulation of pilot-wave theory (PWT), where the force on the particle (from the classical potential V), determining its acceleration, is augmented by the force from the "quantum potential" Q. This is inferior to de Broglie's earlier first-order formulation, where the guidance equation directly gives the particle velocity, for a couple reasons. For one, the 2nd-order form is redundant: the velocity must still be constrained by the guidance equation to ensure the theory gives the correct behaviour (though this can be interpreted as a constraint on initial conditions that is preserved by the dynamics). For another, the 2nd-order form does not generalize as well (e.g., to particles with spin). Bohm's formulation is great at showing how PWT reproduces Newtonian mechanics in the classical limit, and it is good as an alternative way to motivate the guidance equation, but almost all presentations of pilot-wave theory that I have seen focus on de Broglie's formulation.

You mention de Broglie's formulation but curiously state that the derivation is not as transparent as Bohm's formulation; I find this odd. If anything, the guidance equation appears even more directly in de Broglie's PWT: as you acknowledge, it simply states that the particle velocity is the probability current divided by the probability density; i.e., it is the velocity field that an ensemble of particles distributed according to the psi-squared probability density must follow in order to stay psi-squared distributed. (You then go on to object that this switches between an ontic and an epistemic interpretation of the wavefunction; more on this in a moment.) This simplicity is part of why the 1st-order form generalizes more easily; whenever the quantum state can be expressed as a complex-vector-valued wavefunction on the configuration space of the system, you can find the probability density and the probability current from the evolution of the quantum state, and form a vector field on configuration space which is the guidance law for a pilot-wave interpretation of that quantum system. (This generalizes in a fairly natural way even further to systems where the configuration can jump between discrete sectors of the configuration space, e.g., particle creation and annihilation in Bell-type QFTs.)
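The first-order guidance equation is easy to exhibit concretely (a one-dimensional sketch of mine, for a free Gaussian packet with mean momentum hbar*k0, in units with hbar = m = 1): compute the probability current from the wavefunction, divide by the density, and you have the velocity field the particle follows.

```python
import numpy as np

hbar = m = 1.0
x = np.linspace(-10.0, 10.0, 2001)
k0, sigma = 2.0, 1.0

# free Gaussian packet with mean momentum hbar*k0 at t = 0 (normalization
# is irrelevant for the velocity field, since it cancels in j / rho)
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)

dpsi = np.gradient(psi, x)                      # numerical d(psi)/dx
rho = np.abs(psi)**2                            # probability density
j = (hbar / m) * np.imag(np.conj(psi) * dpsi)   # probability current
v = j / rho                                     # de Broglie guidance velocity

# for this state the exact velocity field is hbar*k0/m everywhere
assert np.allclose(v[500:-500], hbar * k0 / m, atol=1e-3)
```

For a multi-particle system the same construction runs in configuration space rather than physical space, which is the point made below.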

As for your objection that the 1st-order PWT switches between treating the wavefunction ontologically (as a guiding wave) and epistemically (as a probability distribution), there is no problematic switching going on. The fundamental role is the ontological role: the wavefunction guides the configuration. But as a consequence of this ontological role, the wavefunction can have an epistemic role for us: qualitatively, we can say the wavefunction tends to guide the configuration towards regions where psi-squared is high and away from regions where it is low, so we are more likely to find it where psi-squared is high. And turning things around, as we observe quantum phenomena, the epistemic role the wavefunction has for us can lead us to postulate an ontological role via the guidance law, as an explanation for those phenomena.

A lot of detail and nuance can be added here, related to how PWT explains the Born rule and justifies the quantum equilibrium hypothesis, and how in PWT we can define wavefunctions for subsystems from the wavefunction and configuration of a larger system. This comment is already going to be excessively long without getting into that, so instead I'll point to Travis Norsen's article "On the Explanation of Born-Rule Statistics in the de Broglie-Bohm pilot-wave theory" [1] as a great place to start. The upshot is that your concerns about the propriety of using the wavefunction to express a probability density in PWT can, I believe, be more than sufficiently allayed. And this is in stark contrast to orthodox quantum mechanics, where the quantum state being sometimes treated as ontological (i.e., when under unitary evolution) and sometimes as epistemic (i.e., when applying the Born rule) is genuinely inconsistent, since the two roles cannot be coherently related to each other the way they are in PWT.

You also seem to suggest that it has not been shown that the 1st-order formulation is empirically equivalent to the Copenhagen interpretation, whereas this has been shown for the 2nd-order formulation. This is not the case. You may recall the argument in Maudlin's book (which occurs elsewhere in the literature as well) that all empirical results can be reduced to the positions of particles, and the 1st-order PWT ensures particles are psi-squared distributed just as they are when performing any measurement in the Copenhagen interpretation, so empirical equivalence is guaranteed. (Of course, care must be taken to ensure empirical equivalence still holds when trying to extend PWT to encompass phenomena covered by QFT, but the point is that there is no advantage to Bohm's formulation over de Broglie's in this regard.)

**Some further problems with your presentation of pilot-wave theory**

There are some conceptual errors in your presentation of pilot-wave theory. You focus on the single-particle theory, I assume for the sake of simplicity, but this may give the false impression that the pilot-wave is a field in 3-d physical space rather than configuration space. You also state that changes in the pilot wave are "influenced by the motion of the particle", and that "because the pilot wave cannot be directly observed, its initial state in any experiment is unknown", both of which are false.

The pilot-wave is nothing other than the wavefunction/quantum state of ordinary QM, and obeys exactly the same evolution (sans collapse). It is not influenced by the motion of the configuration point, and this holds true in all attempts to extend PWT to cover QFT that I have seen. (By the way, this demonstrates that the bouncing droplet experiment is nothing more than a toy model, and any objection based on it is spurious.) Furthermore, there is no more uncertainty about the pilot-wave in PWT than there is about the quantum state in ordinary QM. The fact that the exact same quantum state leads to multiple possible outcomes is explained in PWT entirely by uncertainty about the initial configuration of the "hidden variables" (i.e., the variables in the state besides the wavefunction).

**On the motivation for pilot-wave theory**

Before moving on to discussing the main objections to PWT, I want to comment on the supposed motivation for it. In your discussion you put a lot of emphasis on PWT resembling or preserving the philosophy of classical mechanics, and you seem to consider it a mark against attempted extensions of PWT which move away from classical mechanics (e.g., by being stochastic rather than deterministic, or by being based on de Broglie's 1st-order form rather than Bohm's 2nd-order form with its clear resemblance to classical mechanics).

But from everything I have seen, proponents of pilot-wave theory unanimously (with perhaps two exceptions, who are Bohm himself and Peter Holland) care not a whit about these things. Rather, the motivation for PWT is ontological clarity and conceptual coherence; it is about looking for a serious explanation for quantum phenomena without the vagueness and inconsistency present in orthodox QM. Attempted extensions of PWT should be judged on those points (and, of course, empirical adequacy), not on any requirement that they resemble classical physics.

Perhaps related to this, pilot-wave theory is a physics theory, not a philosophy of nature all by itself. I don't see it as being in competition with Aristotelian philosophy, for example. Perhaps you do, and this is why you think that theories with stochastic particle creation and annihilation events necessitate "coming up with a new philosophy after all". But to me, this seems to miss the point - if it turns out that a pilot-wave QFT requires these events, does that mean it is no better than orthodox QM? Of course not! It would still trump Copenhagen by providing a clear ontology and a resolution of the measurement problem.