## Introduction

I am having a look at different philosophical interpretations of quantum physics. This is the fifth post in the series. The first post gave a general introduction to quantum wave mechanics, and presented the Copenhagen interpretations. I have subsequently discussed spontaneous collapse, the Everett interpretation, and Pilot Wave interpretation. Today I intend to discuss the consistent histories approach.

I have not seen philosophers discuss the consistent histories approach that much, which is a pity because in my view it is one of the stronger interpretations of quantum physics (indeed, it shares a lot in common with my own interpretation). Among physicists, it is somewhat more popular. The approach is particularly associated with Robert Griffiths and Roland Omnes. I will mainly base this summary on Robert Griffiths' book Consistent Quantum Theory. There is also a good article in the Stanford Encyclopedia. A similar interpretation was developed by Gell-Mann and Hartle.

My own interpretation, as described in *What is Physics?*, can also be thought of as an
extension to the consistent histories approach. I did not realise that at the time. I only truly understood
Griffiths' interpretation when I first read his book, which was well after I published *What is Physics?*
My approach is not exactly the same as that of Griffiths, and we emphasise different parts of the framework. My thoughts now are that our two works complement each other:
I fill in some of the gaps to his work, and he fills in some of the gaps in mine. I will discuss my own interpretation
and how it differs from Griffiths' approach at the end of this series. (I should say, however, as a caveat, that I had read
some of Omnes' work some time before publishing my own, after I had developed my ideas, but before they had fully crystallised.
It is possible that there was some influence there.)

The advocates of this interpretation claim that it is a natural interpretation of quantum physics. They further claim that it resolves all the standard paradoxes (or at least shows that they are not problematic). It does not involve non-local interactions, or need multiple worlds. It is built from two main premises that go beyond textbook quantum physics. Firstly, that quantum dynamics is indeterminate: not just during decoherence or measurement events, but always. The Schroedinger equation is just an instrument used to assign probabilities to various outcomes. Secondly, new logical principles are required to map between a quantum phase space and a classical probability space. It is this second premise which is the biggest departure from classical thinking. However, the new quantum logic reduces to classical propositional logic in the domain where classical mechanics is appropriate.

## Philosophy of probability

Before proceeding, I need to discuss a little the philosophy of probability theory. First of all, what is a probability?

A probability is essentially a set of numbers which follow a few
standardised rules. We have a set of outcomes, which we can label
*x_{1}, x_{2}, x_{3}*. I will discuss
a discrete distribution (where the outcomes can be mapped to the
integers), as that captures everything I need for this brief discussion,
but it can be extended to a continuous distribution (where the outcomes
can be mapped to the real numbers). I assume that the set of outcomes
are irreducible (can't get any smaller), orthogonal (don't overlap) and
complete (whatever system we are describing can only fall into one of
these outcomes).

The probability is defined as follows. The probability *P(x_{i})* is a map between the outcome *x_{i}*, or a specified subset of outcomes *x_{i} ∪ x_{j} ∪ …*, and a real number such that:

- *∑_{i} P(x_{i}) = 1*
- *∀ i: P(x_{i}) ≥ 0*
- *∀ i ≠ j: P(x_{i} ∪ x_{j}) = P(x_{i}) + P(x_{j})*

In addition, we need the definition of the conditional probability

*P(x_{i}|x_{j}) P(x_{j}) = P(x_{i} ∩ x_{j})*

So what does the probability mean? Well, so far it doesn't mean anything. It is just a map between an abstract set of outcomes and an abstract real number which obeys certain rules. Nothing more. To apply a meaning to it we need to use it in the context of some particular physical system, i.e. interpret the outcomes as various physical states or combinations of states (these need not all occur at the same time). There are then various interpretations which can be used. These include the classical, the Bayesian (which describes the probability as an expression of uncertainty), the frequentist (which interprets the probability in terms of frequency distributions) and the logical. Of these, the most useful for the consistent histories approach are the classical and the logical.
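These axioms can be checked mechanically. A minimal Python sketch, using a made-up four-outcome distribution (the outcome labels and numbers are purely illustrative):

```python
from itertools import combinations

# A hypothetical discrete distribution over outcomes x1..x4.
P = {"x1": 0.1, "x2": 0.2, "x3": 0.3, "x4": 0.4}

# Axiom 1: the probabilities sum to one.
assert abs(sum(P.values()) - 1.0) < 1e-12

# Axiom 2: every probability is non-negative.
assert all(p >= 0 for p in P.values())

def prob_union(outcomes):
    """Probability of a subset of the (mutually exclusive) outcomes."""
    return sum(P[o] for o in outcomes)

# Axiom 3: for distinct outcomes, P(xi ∪ xj) = P(xi) + P(xj).
for a, b in combinations(P, 2):
    assert abs(prob_union([a, b]) - (P[a] + P[b])) < 1e-12

# Conditional probability: P(xi | S) * P(S) = P(xi ∩ S) for xi inside S.
S = ["x2", "x3"]
for o in S:
    cond = P[o] / prob_union(S)
    assert abs(cond * prob_union(S) - P[o]) < 1e-12
```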

The classical interpretation of probability states that a symmetry in the physical system we are modelling (or our knowledge of that system) should be reflected by a symmetry in the probabilities. So, for example, if we have a perfectly balanced six-sided dice, and there is no known bias in how it is thrown, then we can use the symmetry to map each outcome to an equal probability of 1/6. The problem with the classical interpretation is that, in macroscopic life, the symmetries are never perfectly exact. However, in quantum physics there are exact symmetries, and so symmetry plays a key role in constructing the uncertainties in quantum physics. Not precisely as done in the classical interpretation (as in the dice roll), but nonetheless the demand that the way we calculate probabilities reflects the symmetries of the real physical world provides a large part of the underlying basis for why we can be confident that the predictions of the theory will mirror reality.

The logical interpretation of probability views probabilities as an extension of logic, to also include cases where not all of the causes are known. In short, probabilities are statements that connect various axioms with certain conclusions. Those axioms will include a rule for how we parametrise those unknown causes. We will usually use a symmetry rule to do so. Additionally, the axioms include what is known: perhaps an initial state of the system, and some means of parametrising the known causes. All probabilities in this interpretation are thus conditional (despite my notation above). For example, when considering the dice, we assign the probability of 1/6 to each outcome with the explicit assumption that there is an underlying symmetry in the dice. In other words, the assumption of symmetry is explicitly stated in the result we give. If we assume that the dice is unbalanced in a particular way, then we would assign a different probability to each outcome. Both of these probability distributions are correct, in the sense that they follow directly from the premises. To decide which one is applicable to a particular dice, we would need to study that dice, observe its weight distribution and compute how that would affect how it rolls, and determine which set of assumptions best matches the properties of the dice.

The probability is, in this interpretation, a statement of uncertainty; but unlike the Bayesian interpretation which is subjective, in the logical interpretation the probability is objective. Insert the same axioms, apply the various well-defined rules to go from premise to conclusion, and we get a set of numbers which are mapped to each possible outcome. It doesn't matter who performs the calculation or what they personally believe to be true; all they have to do is insert some set of beliefs as axioms and perform the calculations.

What do these numbers mean? They still don't (yet) have any physical meaning. They are just a set of numbers which obey certain rules.

Indeed, there is no fundamental reason why we have to express our uncertainty in terms of probabilities. The quantum mechanical amplitude plays the same basic role. The amplitude is again a map between a set of outcomes and a number. In this case, we map between a set of outcomes that ought to be indexed via either a complex vector space, real numbers or integers (depending on what we are modelling) to a complex number. As with the probability, there are various rules describing how and when we ought to add and multiply amplitudes. As with the probability, all amplitudes are conditional on various axioms, including the initial state, the known causes, and a particular parametrisation of the unknown causes. And, as with the probability, the amplitude by itself doesn't mean anything. There is, in other words, a clear analogy between the amplitude and a probability. In quantum theory, the amplitude is more fundamental, but the probability is more useful.

So what are the uses of probability? The key observation is that a frequency distribution is described by the same mathematical rules as a probability distribution. This means that if we get the axioms correct (including the correct parametrisation of the unknown causes, and the correct laws of motion connecting initial states to final states), then in principle and in the right circumstances a probability distribution can be used to represent a prediction for a frequency distribution. It is a representation and prediction rather than a frequency distribution, because probabilities are calculated from various axioms (which need not correspond to reality) and frequencies are measured. This is a means of comparing theory (built ultimately on symmetry) with experiment (which measures frequency distributions).

So a probability distribution can be used to predict a frequency distribution. But not every probability distribution constitutes such a prediction; nor is every frequency distribution able to be modelled by a theoretical calculation of probability (for example, if there is no known symmetry that enables us to correctly parametrise the unknown causes). The calculation of probability needs to correctly model the unknown causes, and use the correct physical laws in order to deduce the final distribution from the axioms. If either of these fail, the predicted distribution will not mirror reality. Similarly, the frequency distribution would strictly need to be measured over an infinite number of observations. Obviously this is impossible, but we can still compare against experiment by substituting a large finite number of observations while allowing a certain level of statistical imprecision.

We cannot use probabilities (except 1 or 0) to say anything about the outcome of individual events. Their only uses are in comparison with a frequency distribution, or in decision theory where they are combined with a measure of cost or benefit to decide the best action. That last use is very helpful in gambling, but not so much for the scientist when acting as a scientist, so I won't discuss it in this post.
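The way a calculated probability predicts a measured frequency can be illustrated with a simulation. A sketch, assuming a perfectly symmetric dice; the number of throws and the tolerance are arbitrary choices standing in for "a large finite number of observations" and "a certain level of statistical imprecision":

```python
import random

random.seed(0)

# Model assumption: a perfectly symmetric six-sided dice, so the
# classical/logical assignment gives each face probability 1/6.
predicted = {face: 1 / 6 for face in range(1, 7)}

# "Experiment": a large but finite number of throws.
n = 600_000
counts = {face: 0 for face in range(1, 7)}
for _ in range(n):
    counts[random.randint(1, 6)] += 1

# Compare measured frequencies against the predicted distribution,
# allowing a statistical imprecision that shrinks as n grows.
for face in range(1, 7):
    freq = counts[face] / n
    assert abs(freq - predicted[face]) < 0.01
```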

Similarly, with the quantum amplitude, we observe that the modulus square of the amplitude obeys the same rules as a frequency distribution. We can thus in principle use the amplitude, under the right circumstances, to make predictions for frequencies. The same caveats apply as with probabilities. Again, we cannot use amplitudes (other than 0 or those with modulus 1) to say anything about individual events. The only use is, if we get the laws of physics right and correctly parametrise the uncertainties, then they can be used to predict a frequency distribution. (Or they could be used in decision theory, but again that is not really relevant.)

There is no fundamental philosophical reason why we should use an amplitude or probability (or, indeed, something else) to express our uncertainty. One is a map to a complex number which obeys certain rules; the other is a map to a real number which obeys certain other rules; but essentially they are the same sort of thing: a map from a set of outcomes to a set of numbers. The reason we choose one over the other when modelling a physical system is simply what best corresponds with the laws of physics for that system. For Newtonian systems, we use the probability. For quantum systems, with their more complex ontology, the amplitude proves to be more useful (in the sense that we can extract the correct frequency distributions if we map the set of outcomes and intermediate states between the initial state and final outcome to complex numbers, and we don't if we map the outcomes and intermediate states to real numbers).

So the quantum mechanical amplitude is best thought of as an extension to logic, capable of parametrising the effects of both known and unknown causes, where the set of possible states is mapped to a complex number, which serves as an expression of our uncertainty and from which we can make predictions for frequency distributions.

Why do we use amplitudes rather than probabilities as the basis of the predictor of
frequency distributions? Chiefly because probabilities are insufficient to represent the
different states of a quantum system. We start with an initial state; our goal is a final state
which we wish to compare with a measured frequency distribution. In the middle, the system runs
through numerous intermediate states; and during the calculation we have to represent our uncertainty
concerning those intermediate states in some way. In quantum physics, there are interference effects, where
the same outcome reached via two different paths breaks the rule that the probability of the outcome
is the sum of the probabilities that arise from the different paths. Amplitudes capture those effects correctly,
both at the end of the calculation and in the intermediate states.
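A minimal numerical illustration of the point, with two made-up path amplitudes of equal modulus but opposite phase:

```python
import cmath

# Hypothetical two-path set-up: the same outcome can be reached via
# path A or path B, each contributing a complex amplitude.
amp_A = cmath.rect(1 / 2**0.5, 0.0)        # modulus 1/√2, phase 0
amp_B = cmath.rect(1 / 2**0.5, cmath.pi)   # modulus 1/√2, phase π

# Classical rule: add the probabilities from each path (≈ 1.0).
p_classical = abs(amp_A)**2 + abs(amp_B)**2

# Quantum rule: add the amplitudes first, then take the modulus square
# (≈ 0.0 here: complete destructive interference).
p_quantum = abs(amp_A + amp_B)**2

print(p_classical, p_quantum)
```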

The key observation is that both probabilities and amplitudes are mathematical maps from a set of states or outcomes to a set of numbers which obey certain properties, and in principle allow us to predict what a frequency distribution would be if certain assumptions were satisfied (and the physical system is correctly modelled). Which one we use depends solely on the system that we are trying to model.

## Summary of the interpretation

In summary (in my own words, which the likes of Griffiths and Omnes might not fully agree with), the approach can be described as follows.

- The fundamental beables of the system are the particles we observe: the electrons, quarks, photons and so on. The wavefunction does not represent a physical object, and only exists to help parametrise the theory and help make predictions. The mathematical representation is a means to model various possible observable results. It is not (necessarily) a complete representation of the particle. Indeed, we cannot fully represent the particle. What we can represent are various possible observable properties of the particle, represented by the various quantum states in different bases. At any moment in time, we can ask the question of what we might observe if we perform a particular measurement at that time (i.e. when the particle decoheres into a particular basis). The underlying mathematics of quantum theory, including the wavefunction, are simply a mathematical tool that allows us to make predictions for observable quantities of the beables, and in particular to understand correlations between different measurements of the system (e.g. the initial state of the system, and various intermediate and final states).
- The motion of the particles is entirely indeterminate and unpredictable. So, unlike the Copenhagen interpretation where there is a deterministic evolution of the wavefunction and then an indeterminate wavefunction collapse, and the Everett or pilot wave interpretations where there is deterministic evolution and no collapse, here we have no deterministic motion of the beables, just a wholly indeterminate motion. This means that we cannot make precise predictions of individual particle trajectories or measurement results, but only express the predictions of the theory stochastically.
- The wavefunction serves as a pre-probability (Griffiths' language; I tend to call this a likelihood, but mean the same thing); i.e. it is a parametrisation of our uncertainty about the current states of the particles. It is not a probability, as it does not follow the basic mathematical rules that govern probabilities, but it can be used to calculate conditional probabilities. These are used to make predictions about frequency distributions, after the experiment is repeated numerous times.
- The pre-probabilities are conditional on the initial state of the system that we input into the equation, and also any subsequent experimental data we can use to refine them. We can thus, in this interpretation, construct amplitudes that span measurements at multiple different times. (The Copenhagen interpretation struggles with this, as there is always a wavefunction collapse of the entangled system when there is a measurement event.)
- We divide the duration we are interested in into various time slices. These represent the times where there are events of interest, although we can insert intermediate times as necessary. For the initial time slice we insert the initial state of the system. At subsequent time slices we construct the set of possible states the particle could evolve into at each of those times (we can omit those states which are forbidden by the various conservation laws). This obviously works best in either the Schroedinger or interaction picture.
- In reality, the particle will select one of those states at each time slice, giving a well-defined trajectory. But, because of the fundamental indeterminacy, we don't and can't know which of those states will be occupied in practice (unless we take a measurement), so we have to consider all possible and consistent trajectories. Each path specifying one particular state at each time slice is known as a history. The procedure outlined here gives us a family of histories, which represents one possible representation of all possible trajectories of the system. To calculate a probability, we combine the various different histories leading to the same final outcome.
- The most important way in which quantum physics differs from classical physics is that there is no single basis in which we could describe the physical states. There are therefore multiple possible ways in which we could describe these families of histories. But these possible bases are not all consistent with each other. A consistent basis is one where the operators whose eigenstates define a given state commute with each other. An inconsistent basis is one where they do not. The key rule to performing calculations in quantum physics is the single framework rule: we cannot combine histories where the state of the same particle is described in different and inconsistent bases at the same time. Most apparent quantum paradoxes come from violating the single framework rule. When performing a calculation, we select the basis that best fits the various measurements we intend to make. Thus we look at the experimental set-up, and construct the framework on the basis of that, and then make predictions for the results.
- We calculate a weight for each history by applying the Schroedinger time evolution between each selected time step. This weight is known as a pre-probability or amplitude. In effect, it is an acknowledgement that while numerous different state transitions are possible, they are not equally likely; and it is through this process that we express the relative likelihood of each history. Relative probabilities for particular outcomes can be calculated by summing up the weights for all histories leading to that outcome, and taking the modulus square. The modulus square of the pre-probabilities, once appropriately normalised, does satisfy the mathematical rules that describe probability distributions, and thus can be used to make predictions for frequency distributions. While individual events are entirely unpredictable, this allows us to make predictions for frequency distributions after the experiment has been repeated enough times (where enough here means as close to an infinite number of times as we can get; for finite samples there will be some statistical uncertainty, which can be estimated and thus compensated for when comparing theory to experiment). In general (there are a few exceptions where the particle state starts as and remains an eigenstate of the Hamiltonian), we cannot even in principle, even with complete knowledge of the physical variables, predict the outcome of an individual experiment. We can say for certain that some final states are forbidden entirely, for example if they violate a conservation law. We thus have a number of possible outcomes, but no way of knowing which of them will occur. Thus an individual experiment cannot verify or refute any prediction of quantum theory (unless the outcome is one of those with probability zero). It is only when we consider an ensemble of results that we can calculate statistical distributions and compare theoretical prediction with experiment.
- All the pre-probabilities (and thus all the probabilities) are conditional on the initial state and the correct equations for the quantum time evolution being used to calculate the weights. We can also add to these conditions any other data calculated at intermediate time periods, by filtering the histories that contribute to the final outcome to only include those which pass through that particular state. Thus, for example, in a two-slit experiment where we look at one of the slits, that creates some additional data and we can filter the histories to only include those which are consistent with that intermediate-time measurement. That could, of course, dramatically change the probability distribution for the final outcome. Thus, at least within the context of the discussion so far, quantum physics is primarily a means of making predictions concerning frequency distributions for possible outcomes when an experiment is repeated numerous times. Unlike the Copenhagen interpretation, the wavefunction does not represent something physically real. Wavefunction collapse is thus not something that happens in reality, but only in the theorist's notebook as he converts his pre-probabilities into something that can be turned into a prediction. This completely resolves the measurement problem. The beables of the system are just the particles we observe; there is no need to add anything to these. Their paths are entirely indeterminate, which explains why experimental outcomes appear indeterminate. The various bases we use to represent the possible states of the system are also not part of reality, most of the time. We must remember that the mathematical representation of reality is not reality itself, but only a tool we use to extract data that we can compare with observation. We should not hope to be able to capture every aspect of reality (i.e. the beables) in a mathematical representation.
We can hope to capture some aspects of reality in certain circumstances; and this is where the mathematical formulation proves useful. After decoherence, the system does collapse into one of a set of possible states that correspond to one of these bases. Before decoherence, we cannot say which if any of these representations describes the particles. But we can still use them to parametrise the possible motion of the particle, and ask questions about the probabilities that we would get for each possible outcome if we were to try to make that particular measurement at that particular time.
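The weight-and-sum procedure in the bullets above can be sketched numerically. A toy two-state example (the unitaries are arbitrary rotations, chosen purely for illustration), showing that summing the weights of histories leading to the same outcome, then taking the modulus square, yields a normalised probability distribution:

```python
import numpy as np

# Toy two-state system. U1 and U2 hold the transition amplitudes across
# two successive time slices (arbitrary rotation angles, purely illustrative).
def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

U1, U2 = rotation(0.3), rotation(0.7)
initial, final = 0, 1

# The weight of one history is the product of its transition amplitudes;
# each history fixes the intermediate state k at the middle time slice.
amp = sum(U2[final, k] * U1[k, initial] for k in range(2))

# Summing the histories leading to the same outcome reproduces the
# overall time evolution.
assert np.isclose(amp, (U2 @ U1)[final, initial])

# The modulus squares over all final outcomes form a probability distribution.
probs = [abs(sum(U2[f, k] * U1[k, initial] for k in range(2)))**2
         for f in range(2)]
assert np.isclose(sum(probs), 1.0)
```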

In the next few sections, I am going to summarise various selections from Griffiths' book (re-writing in my own words, but closely following his presentation). I cannot do the book justice in a blog post; so I recommend that interested readers read the book itself, as well as the related literature from Omnes and Gell-Mann.

## Stochastic Histories

Consider a coin tossed three times in a row. There are eight possible outcomes, for example HHH or THT. These eight possibilities are known as the sample space. The event algebra contains all 256 possible subsets of the sample space: the empty set, the eight individual outcomes, the 28 different ways of combining two outcomes, and so on. The elements of the sample space are referred to as histories, where a history is a particular sequence of events at successive times. A compound history will contain two or more elements from the sample space.
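This counting can be verified directly. A short Python sketch enumerating the sample space and the event algebra:

```python
from itertools import product, chain, combinations

# The sample space: the eight histories of three coin tosses.
sample_space = ["".join(h) for h in product("HT", repeat=3)]
assert len(sample_space) == 8
assert "HHH" in sample_space and "THT" in sample_space

# The event algebra: every subset of the sample space, 2^8 = 256 in all.
events = list(chain.from_iterable(
    combinations(sample_space, r) for r in range(len(sample_space) + 1)))
assert len(events) == 256

# e.g. the compound events built from exactly two histories: C(8,2) = 28.
assert sum(1 for e in events if len(e) == 2) == 28
```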

For example, we can consider a particle undergoing a random walk between the limits *-N* and *N*. There are thus *2N+1* possible locations for the particle. At each timestep, it either moves up or down one place or stays where it is. If the particle starts at location *s_{0}*, then a history of the particle's motion consists of giving its locations as a sequence *(s_{0}, s_{1}, …, s_{n})*. The sample space of the histories is all the different possible sequences.
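A sketch of this sample space of histories, with *N = 2* and three timesteps; the treatment of the boundary (clipping the walk at the limits) is my own assumption, since the text leaves it unspecified:

```python
from itertools import product

# Random walk between -N and N; at each timestep the particle moves
# up one, down one, or stays put.
N, steps, s0 = 2, 3, 0

histories = []
for moves in product((-1, 0, 1), repeat=steps):
    s, path = s0, [s0]
    for m in moves:
        s = max(-N, min(N, s + m))  # assumption: the walk is clipped at the limits
        path.append(s)
    histories.append(tuple(path))

# The sample space of histories is the set of distinct sequences (s0, ..., sn).
print(len(set(histories)))
```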
So far, these definitions are based on a classical system, but they can
also be carried over to a quantum system. A quantum history of a physical
system is a sequence of quantum events at successive times, where a
quantum event can be any quantum property of the system in question.
So, for a given set of times, a quantum history is specified by a set of projectors, one for each time, (*P_{1}, P_{2}, …*). The projectors might, for example, be onto particular spin states (so *P_{1}* projects onto a spin up state along the *z*-axis, *P_{2}* projects onto a spin down state along the *x*-axis, and so on), or onto energy eigenstates, or any other particular quantum state. The projectors need not all span the same dimensional space, so, for example, one can mix the spin and energy eigenstates.

The sample space for a particular projector corresponds to the Hilbert
space of whichever observable is being measured. For example, the Hilbert
space might have basis state vectors spanning the spin up and spin down
states along the *z*-axis, or it might correspond to the energy
eigenstates for a particular Hamiltonian. The history Hilbert space
will be a tensor product of all the relevant Hilbert spaces for a
particular quantum history. So if *A_{i}* is the Hilbert space in which the projector *P_{i}* resides, then the history Hilbert space is written as

*A = A_{1} ⊙ A_{2} ⊙ A_{3} ⊙ …*

⊙ is a tensor product, used instead of ⊗ only to indicate that the spaces are understood to be at successive times. One can extend this history Hilbert space to additional times by adding in the appropriate operators for those times. The history exists as one element of this sample space, so, for example, we can write this as the projector

*Y = P_{1} ⊙ P_{2} ⊙ P_{3} ⊙ …*

Expanding the sample space for the history to additional times (whether added on to the end of the sequence, or before the start, or at intermediate times) involves inserting an identity operator at a particular time, and then expanding it as all the projectors in a particular basis. This is equivalent to the original, unexpanded, history. We can then, if we choose, select a particular expanded history by selecting one particular projector in the sequence.
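Numerically, ⊙ behaves like an ordinary tensor (Kronecker) product. A sketch with two spin-half projectors (the particular spin states are an arbitrary choice for illustration):

```python
import numpy as np

# Spin-half projectors: onto spin up along z, and spin down along x.
up_z = np.array([1, 0], dtype=complex)
down_x = np.array([1, -1], dtype=complex) / np.sqrt(2)

P1 = np.outer(up_z, up_z.conj())      # projects onto |z+>
P2 = np.outer(down_x, down_x.conj())  # projects onto |x->

# The history Y = P1 ⊙ P2 lives on the tensor product of the two spaces;
# numerically ⊙ is just the Kronecker product.
Y = np.kron(P1, P2)

# Y is itself a projector on the history Hilbert space: Y² = Y = Y†.
assert np.allclose(Y @ Y, Y)
assert np.allclose(Y, Y.conj().T)

# Expanding with an identity at an extra time keeps Y a projector.
Y_extended = np.kron(Y, np.eye(2))
assert np.allclose(Y_extended @ Y_extended, Y_extended)
```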

The operator *∧* is used to indicate the product of two histories, as long as they commute. So if

*Y' = Q_{1} ⊙ Q_{2} ⊙ Q_{3} ⊙ …*

and *Y ∧ Y' = Y' ∧ Y*, then

*Y ∧ Y' = P_{1}Q_{1} ⊙ P_{2}Q_{2} ⊙ P_{3}Q_{3} ⊙ …*

The disjunction, ∨, is defined as

*Y ∨ Y' = Y + Y' - Y ∧ Y'*

When we add two histories together, we cannot in general add together
the individual elements of the histories, unless all the previous elements
are identical. Suppose, for example, that we have two histories, *Y_{1}* and *Y_{2}*, with

*Y_{1} = P_{1} ⊙ P_{2} ⊙ P_{3}*

*Y_{2} = P_{1} ⊙ P_{2} ⊙ Q_{3}*

where *P_{3}* and *Q_{3}* are different but commuting projectors. In this case,

*Y_{1} + Y_{2} = P_{1} ⊙ P_{2} ⊙ (P_{3} + Q_{3})*

However, we cannot combine two histories in this way if two or more members of the history are expressed in different bases.
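This addition rule is easy to verify numerically. A sketch using z-basis spin projectors as stand-ins for the *P*s and *Q*s:

```python
import numpy as np

# Commuting, orthogonal projectors (z-basis spin states).
Pup = np.diag([1.0, 0.0])    # |z+><z+|
Pdown = np.diag([0.0, 1.0])  # |z-><z-|

P1, P2 = Pup, Pup            # shared earlier events
P3, Q3 = Pup, Pdown          # the histories differ only at the final time

Y1 = np.kron(np.kron(P1, P2), P3)
Y2 = np.kron(np.kron(P1, P2), Q3)

# Because the earlier events are identical, the sum combines at the last slot.
combined = np.kron(np.kron(P1, P2), P3 + Q3)
assert np.allclose(Y1 + Y2, combined)
```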

A sample space of histories is that set of mutually orthogonal projectors, one for each time in the history, which sum up to give the identity operator. We can also define an event algebra

*X = ∑_{a} π_{a} Y_{a}*

where *π_{a}* is either zero or one. This gives a subset of the sample space of possible histories.

This framework is flexible enough to allow us to impose various conditions at different times. We can restrict the analysis to just those histories which contain one particular projector at a given time. The most important example of this are those histories which have a fixed initial state, where the first projector in the history is set to a particular value.

Before moving on to the dynamics, we need one more definition, which is the unitary history. A unitary history is one where the projectors *P_{j}* defined at time *t_{j}* can be written in the form

*P_{j} = T(t_{j},t_{1}) P_{1} T(t_{1},t_{j})*,

where *T* represents some unitary time evolution operator. In quantum physics, this would usually be represented by the time-ordered exponential of the timestep times the Hamiltonian operator *H*,

*T(t_{j},t_{1}) = T[e^{-i (t_{j}-t_{1}) H}]*.

We can also consider unitary families of histories, which are the set of unitary histories starting from different states.
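A numerical sketch of a unitary history, using a made-up two-state Hamiltonian (with ħ set to 1; for a time-independent Hamiltonian the time-ordered exponential reduces to an ordinary matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# A made-up Hermitian Hamiltonian for a two-state system.
H = np.array([[1.0, 0.3], [0.3, -1.0]])

def T(t2, t1):
    """Unitary time evolution operator from t1 to t2 (ħ = 1;
    time-independent H, so no time-ordering is needed)."""
    return expm(-1j * (t2 - t1) * H)

# A unitary history: each later projector is the initial one, evolved.
psi = np.array([1.0, 0.0], dtype=complex)
P1 = np.outer(psi, psi.conj())

t1, tj = 0.0, 2.5
Pj = T(tj, t1) @ P1 @ T(t1, tj)

# Pj is still a rank-one projector.
assert np.allclose(Pj @ Pj, Pj)
assert np.isclose(np.trace(Pj).real, 1.0)
```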

## Wave function as pre-probability

So far, I have just outlined and introduced the notation. To apply this to a particular physical problem, we need to consider the dynamics of the system. The end result of this discussion will be to assign a defined weight to each possible outcome. This weight can then be used to predict a frequency, and in this way we can compare theory with experiment. So, if one starts in the state *s_{0}*, there will be a certain weight associated with its transition to state *s_{1}*. The total weight for the history will be the product of the weights for each state transition (including when the particle stays in the same state over a timestep). To calculate a probability from this, we need to add in an initial state, expressed perhaps as an amplitude distribution for the possible locations of the particles in the system. This is known as *contingent information*: information which is not related to the dynamics, but necessary to understand the amplitude for the final state.

The contingent information is used to restrict the sample space of histories to just those which start at
that point. The final amplitude for a particular state is then taken by considering the different
histories that include that final state, adding up their various weights, and normalising by an
appropriate factor (determined to ensure that the total probability for all outcomes adds up to 1). In
this way, we can construct contingent probabilities for various outcomes given
an initial state. We can, of course, also add in several initial conditions or several final outcomes
at different times by only including those histories which include those points. So, for example,
we can ask "What is the amplitude for the particle to start at *s_{0}* at time *t = 0*, pass through *s_{2}* at time *t = 2* and *s_{3}* at time *t = 3*, and finish at *s_{5}* at time *t = 5*?". We consider those histories that pass through each of those scenarios, add up the weights for those histories, and appropriately normalise it.

So suppose we start with an initial state *ψ_{0}* at time *t_{0}*. *T(t_{1},t_{0})* represents the Schroedinger evolution of that state from time *t_{0}* to time *t_{1}*. We then want to know the weight for the histories that will lead to the system being in a state *φ_{k}* at time *t_{1}*.

We can write *ψ_{1} = T(t_{1},t_{0}) ψ_{0}*.

*ψ_{1}* should not be thought of as a representation of a physical property or physical state. To talk about physical observables only makes sense if the wavefunction is in the correct basis for that observable, and in general *ψ_{1}* will not be in that basis. It should be thought of only as a mathematical construct used in the calculation of probabilities.

*ψ_{1}* is not a physical property, and nor is it a probability. Griffiths labels it as a *pre-probability* (in my own work, I use the term *likelihood* to convey the same idea). In addition to wavefunctions obtained by a unitary time development, density matrices are often used as pre-probabilities. The pre-probability is useful to calculate probabilities for different bases (that is to say, different families of histories). As long as there is no inconsistency arising from combining results calculated from different bases, there is no harm in doing so.

Once we have assigned a pre-probability to each history leading to a particular outcome, we can use them to compute a conditional probability for that outcome.
We first of all sum up the pre-probabilities for that outcome, and use the Born rule, which assigns the probability that the system will be in a state *φ_{k}*, conditional on the initial state *ψ_{0}* and the particular time evolution operator we are using, as *|φ_{k}^{†}T(t_{1},t_{0})ψ_{0}|^{2}*. The Born rule assigns this as a probability to the history between the initial and final states. The weights are positive and sum up to 1 over all outcomes, so they can be interpreted as a probability.

One does not, of course, have to calculate probabilities in this way. One can, for example, start with the final state and evolve backwards in time. One can also use the Born rule to calculate the expected value of different properties.
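As a concrete sketch of this calculation, here is a minimal numerical example. The two-level system, the Hamiltonian, and all variable names are my own illustrative choices, not Griffiths': the Schroedinger evolution operator is applied to the initial state to give the pre-probability *ψ_{1}*, and the Born rule then turns it into probabilities over a chosen basis.

```python
import numpy as np

# Illustrative two-level system; the Hamiltonian and states are
# arbitrary choices for demonstration, not drawn from the text.
H = np.array([[1.0, 0.0], [0.0, -1.0]])  # Hermitian Hamiltonian

def T(t1, t0):
    """Unitary Schroedinger evolution T(t1, t0) = exp(-i H (t1 - t0))."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * (t1 - t0))) @ V.conj().T

psi0 = np.array([1.0, 1.0]) / np.sqrt(2)  # initial state at t0 = 0
psi1 = T(1.0, 0.0) @ psi0                 # pre-probability at t1 = 1

# Born rule: P(phi_k) = |phi_k^dagger T(t1, t0) psi0|^2
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
probs = [abs(phi.conj() @ psi1) ** 2 for phi in basis]
print(probs)
```

Note that *ψ_{1}* itself is only an intermediate quantity here; only the squared overlaps are reported, which matches its role as a pre-probability rather than a physical state.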

It is important to remember when applying the Born rule for two times that a family of histories tells
us nothing at all about what happens at intermediate times. Such times can be included by adding in
identity operators at those intermediate times, and then expanding those identity operators as the
sum of all possible states in some particular basis. We can then eliminate from the sum those
transitions which are forbidden by the dynamics. The most natural basis in which to consider the Schroedinger time evolution is that of the eigenstates of the Hamiltonian operator, i.e. the basis of energy states, but we need not choose it. In the energy basis, there is only one possible path from the initial state to the final state. In a different basis (such as the location basis), there could be
several different paths to the final state. Each of these paths will be associated with a particular
weight drawn from the dynamics in this basis. The pre-probability *ψ _{1}* will reflect
the sum of all these weights.
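The insertion of identity operators described above can be sketched numerically. In this hypothetical three-level example (the randomly chosen Hamiltonian and the state labels are my own assumptions), the direct amplitude from the initial to the final state equals the sum of amplitudes over all paths through an intermediate basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 3-level system with a randomly chosen Hermitian
# Hamiltonian (purely illustrative).
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (A + A.conj().T) / 2

def T(t1, t0):
    """Unitary Schroedinger evolution between times t0 and t1."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * (t1 - t0))) @ V.conj().T

psi0 = np.array([1, 0, 0], complex)  # initial state at t = 0
phi = np.array([0, 0, 1], complex)   # final state at t = 2

# Direct amplitude from t = 0 to t = 2
direct = phi.conj() @ T(2.0, 0.0) @ psi0

# Insert the identity sum_k |k><k| at t = 1: one term per path through
# an intermediate basis state |k>
paths = [(phi.conj() @ T(2.0, 1.0)[:, k]) * (T(1.0, 0.0)[k, :] @ psi0)
         for k in range(3)]
print(direct, sum(paths))  # the two agree
```

In the energy basis the sum would collapse to a single non-zero term; in a generic basis, as here, every intermediate state contributes a path.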

So the wavefunction at any given time (except, perhaps, the initial time) is not interpreted as the
representation of anything physical,
but as a pre-probability used to calculate the probabilities that the system will be found in a given
state at a subsequent time. This is thus a *ψ*-epistemic interpretation of quantum physics.

## Consistent histories

Having discussed histories, I now turn to the "consistent" part of this interpretation. The Born rule is used to extract probabilities out of the Hilbert space.

For a general history of the form

* Y = P _{1}⊙P_{2}⊙P_{3}⊙…*

One can assign an amplitude to each history using the chain operator

*K(Y) = P _{1} T(t_{1},t_{2}) P_{2} T(t_{2},t_{3}) …*

This operator makes sense if the P are any quantum operators, and not just projectors. One can also expand it to consider the weight for multiple histories,

*K(Y∪ Y' ∪ Y'') = K(Y) + K(Y') + K(Y'') * (*)

The sequence of the operators is necessarily time-ordered. The weight for a history or set of histories is defined as the inner product of *K* and its conjugate,

*W = (K^{†}(Y),K(Y))*

This is positive, and by construction the total weight, once properly normalised, for all possible histories adds up to one. These are two of the three conditions required for it to function as a probability distribution and thus a predictor for a frequency distribution.
The third condition required for this to be interpretable as a probability is additivity: in classical probability theory, the probability of something being A or B is only equal to the probability of A plus the probability of B if there is no intersection between A and B. In this analysis, however, that is only true in certain circumstances. If we consider a family of histories *Y^{a}* which can be combined to give

*Y = ∑_{a} w_{a}Y^{a}*

for some weights *w_{a}*, then the function *K* becomes

*K(Y) = ∑_{a} w_{a} K(Y^{a}),*

and

*W(Y) = ∑_{a,b} w_{a}^{†}w_{b} (K(Y^{a})^{†},K(Y^{b}))*

This only corresponds to the additivity condition that defines a probability if *(K(Y^{a})^{†},K(Y^{b})) = 0* for *a ≠ b*. Thus to extract a probability from this framework, we should restrict ourselves to only those families of histories which satisfy this consistency condition. Those parts of the sample space which do not satisfy the consistency condition are, in effect, meaningless, in the sense that we cannot draw empirically useful conclusions from them. What we observe are individual measurement events. However, because physics is fundamentally indeterminate, these are entirely unpredictable. We can, however, combine a number of measurements (preferably an infinite number) to obtain a frequency distribution. We can, in principle, make predictions about this distribution if we have a mathematical construct which plays the same role as a frequency. That is obtained by considering the inner product of two chain operators for a set of histories which satisfy the consistency condition.
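To make the consistency condition concrete, here is a small sketch for a qubit. The choice of trivial dynamics (identity evolution between times) and of the projector families is my own illustrative assumption: one family of two-time histories satisfies the orthogonality condition, while a family with intermediate events in an incompatible basis does not.

```python
import numpy as np

# Sketch of chain operators and the consistency condition for a qubit
# with trivial dynamics (T = identity between event times).
def projector(v):
    v = np.asarray(v, complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

I2 = np.eye(2)                                   # trivial time evolution
Pz = [projector([1, 0]), projector([0, 1])]      # z-basis events
Px = [projector([1, 1]), projector([1, -1])]     # x-basis events

def K(events, T=I2):
    """Chain operator K(Y) = P1 T P2 T ... for a sequence of events."""
    out = events[0]
    for P in events[1:]:
        out = out @ T @ P
    return out

def inner(A, B):
    # Operator inner product (A, B) = Tr(A^dagger B)
    return np.trace(A.conj().T @ B)

# Orthogonal final events: the chain operators are orthogonal, so this
# family is consistent and its weights behave as probabilities.
consistent = inner(K([Pz[0], Px[0]]), K([Pz[1], Px[0]]))

# Intermediate z events between an x+ preparation and an x+ outcome: the
# chain operators are NOT orthogonal, so this family is inconsistent.
inconsistent = inner(K([Px[0], Pz[0], Px[0]]), K([Px[0], Pz[1], Px[0]]))
print(abs(consistent), abs(inconsistent))
```

The first pair of chain operators is orthogonal, so the weights of those histories add like probabilities; the second pair is not, so that family must be discarded as inconsistent.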

So, how does the inner product between two histories work? Essentially, each *P_{i}^{a}* defined at a particular time *t_{i}* involves a projection onto a particular subspace of the Hilbert space. We can, for example, write it as *|φ_{i}^{a}><φ_{i}^{a}|*. The product between two such projectors in effect gives another projector,

*|φ_{i}^{a}><φ_{i}^{a}|φ_{i}^{b}><φ_{i}^{b}|*

If the subspaces defined by each projector do not overlap, then we will get zero. If they do overlap, then we will get something which is not zero. So to calculate the inner product of a pair of histories, we start with the last possible time, and take the product of those projectors. This gives a result; let us say that it is non-zero. We then apply the time evolution operator to each side of the projector:

*T^{†}(t_{i-1},t_{i}) |φ_{i}^{a}><φ_{i}^{a}|φ_{i}^{b}><φ_{i}^{b}| T(t_{i},t_{i-1})*.

We then apply the next pair of projectors in the history, and so on. In practice, for the total product to be zero requires that either the initial or final states are orthogonal to each other (assuming they are not the identity operator, in which case we need to move to the next timestep along), or that the time evolution operator cannot map between two projectors within the history.

## Quantum Reasoning

There are various important differences between quantum and classical reasoning. In classical physics, the fundamental object is represented as a point in the classical phase space. In quantum physics, it is represented as a vector (or rather a one-dimensional ray) in a Hilbert space. In classical physics, any two properties can always be compared with each other; in quantum physics that is only possible if they are represented in the same basis. If they are in different bases, then a comparison is meaningless. The difference is negated if a single framework -- a complete collection of commuting projectors -- is used. But in quantum theory, there are many different frameworks which can be used to describe a system. When discussing a quantum system, one cannot mix results from incompatible frameworks.

Quantum dynamics differs from classical dynamics in that quantum dynamics is indeterministic, while classical dynamics is entirely deterministic. Only in particularly special cases can we predict the results of a quantum system (i.e. if the initial state is and remains an eigenstate of the Hamiltonian operator). In quantum physics, then, the best we can do is to assign a subspace which defines a family of histories, and to assign probabilities to those histories. This is only valid for consistent families of histories. Consequently, the reasoning process involved in applying the laws of quantum dynamics is different from a deterministic classical system.

We can draw conclusions based on some initial data. The conclusions are only valid if that initial data reflects reality. The "initial data" need not have all been extracted at the same time. Thus all probabilities calculated from quantum physics are conditional. The first step in drawing conclusions from some initial data is to express that data in proper quantum mechanical terms. After the initial data has been embedded in a sample space, and probabilities for various outcomes assigned according to the quantum process, the reasoning process follows from the usual rules of probability theory. The weirdness of quantum physics emerges because there are many different sample spaces in which one can embed the initial data. Hence the conclusion drawn depends on the sample space being used.

In classical thinking, one starts from an initial state, and integrates the equations of motion to give a trajectory which determines a configuration at each time, from which one can answer any question of physical interest. In quantum physics, things are different. One has to start with the questions one wants to ask, and from that construct a framework in which those questions can be answered. Once one has this framework, then one uses it, the initial state, and the dynamical laws in order to calculate probabilities for each possible outcome. One cannot use a single framework to answer all possible questions, because in every framework some questions are undefined.

There are two questions raised from this. Firstly, is there consistency? Given that there are multiple possible frameworks, but we have to choose one particular framework in order to perform the calculation, does this choice affect the final results of the calculation? The answer to this is that the system is consistent. The second question relates to any underlying philosophical issues that arise from that fact that alternative incompatible frameworks can be used to describe the same system.

The initial data, combined with the quantum mechanical laws, can be used in different frameworks to yield different conclusions. Do all these conclusions apply simultaneously to the same physical system? For those conclusions expressed as probabilities between 0 and 1, there is always some uncertainty in the outcome, and thus one cannot conclude that the two conclusions are inconsistent. But sometimes the probabilities are 0 or 1, which indicate that the corresponding event is either true or false. The question is whether you can have two different, inequivalent, events which are both assigned as true, and, if so how to interpret that.

If the frameworks are compatible with each other, no problems will arise.
You can't have *A* is true and *B* is true unless *A* and *B*
refer to the same thing.

If the frameworks are incompatible, then in principle you might have the
situation where two inconsistent outcomes might both be regarded as true.
So you can have *A* true and *B* true where *A* and *B* are
in some respects different.
For example, one can consider two frameworks which have the same initial
and final state, but differ in some of their intermediate states.
Thus in the first framework, we say that the true trajectory undertaken
by the particle was described at some intermediate time by one of a
particular
set of states, while in the second framework it would be described by one
of a different set of states. Both of these frameworks are regarded as
being true, because the probability for the particle being in the same
final state is 1.

The problems go away when we consider the two systems as applying to
independent runs of the same experiment, but that is
too easy a way to get out of it. The main reason why this apparent paradox is only apparent is that we set up the framework to investigate the question of what the final state of the particle would be, which is where we perform the measurement. Intermediate states have a different status in
the theory. They represent only potentialities or possibilities.
For example, if in one framework, the system passes through a state
*A* at some intermediate time, and in another framework the system
passes through
a state *X* in an incompatible basis, then in the second basis the
question of whether or not the system is in state *A* at that time
is undefined. And if the question is undefined, then it is meaningless to
say that there is a contradiction. This is unintuitive from the classical
perspective, where every framework is compatible and every question well
defined, but that is just the limitation of the classical way of thinking.
The notion of truth depends on the framework being used. All truths are
conditional on the framework used to express them.

Note that this is not
philosophical relativism, as a conditional truth is still objective.
It is analogous to the difference between the Bayesian and logical
interpretations of probability. In the logical interpretation, it is an
objective truth that the probability of *X* conditional on
assumptions *A,B,…* is *x*. As all probabilities are
conditional, all probabilities are objective. The Bayesian interpretation
states that there are unconditional and thus subjective probabilities.
The probability depends on who it is who makes the statement, and their
level of knowledge. The Bayesian interpretation leads to subjective
statements and thus philosophical relativism, where a statement might be
"true" for one person and not the other, while the logical interpretation
has only objective but conditional probabilities. Consistent histories is analogous to the logical interpretation of probability: truth is contingent on the framework, but no framework can be said to be the right one for any particular person to adopt, and so a conditional truth remains the same for all people. Quantum Bayesianism (which I will discuss in the next post) is more consistent with a subjective Bayesian interpretation.

## Measurements

In a measurement, a physical property of some quantum system (for example, the particle we are investigating) becomes correlated with a property of a different quantum system (the measuring apparatus). The two systems together form one closed, larger, quantum system. The principle and processes involved are no different to any other quantum event. There is no distinction between the quantum world of the particles and the classical world of the measuring system.

So to perform a measurement, we need an appropriate Hilbert space for the combined system, an initial state, some unitary time development operators, a suitable framework and a family of consistent histories. As usual, a correct quantum description of the system must employ a single framework.

Measurements fall into two classes: destructive and non-destructive. In non-destructive measurements, the property in question is not destroyed by the process of measuring it. In destructive measurements it is altered by the measurement process, often in an uncontrolled fashion. Here what is measured is the property of the particle before it is measured, and the property of the apparatus after the measurement, i.e. at two different times. The failure to properly account for the different times during the measurement process is one reason why the Copenhagen/von Neumann account of measurement is inadequate.

Consider the measurement of a spin using a Stern-Gerlach experiment. Here we have a magnet aligned so that particles with a positive spin along the *z*-axis are deflected upwards, and those with a negative spin are deflected downwards. There are then two properties of interest: the spin state *|z^{±}>*, and the location of the particle, *|ω>* before it encounters the magnet and *|ω^{±}>* afterwards. There are then two unitary time developments

*|z^{+}>|ω> → |z^{+}>|ω> → |z^{+}>|ω^{+}>*

*|z^{-}>|ω> → |z^{-}>|ω> → |z^{-}>|ω^{-}>*

If the initial state is not one of the *z* spin states, but instead (say) *|x^{+}> = (|z^{+}> + |z^{-}>)/√2*, then the unitary evolution is described by the two histories

*|x^{+}>|ω> ⊙ |z^{+}>|ω> ⊙ |z^{+}>|ω^{+}>*

*|x^{+}>|ω> ⊙ |z^{-}>|ω> ⊙ |z^{-}>|ω^{-}>*
We can calculate the weights for these two histories using the framework
above, and find that each outcome has the probability of *1/2*.
Expressing the system in this way shows the correlation between the
*|ω>* states which reflect the apparatus property and the
*|z>* states which reflect the particle property (its spin).
The apparatus property correlates with the particle property before the
measurement takes place.
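The weights for these two histories can be checked with a small numerical sketch. The explicit pointer states and the completion of the measurement unitary outside the relevant subspace are my own assumptions made for illustration:

```python
import numpy as np

# Sketch of the Stern-Gerlach measurement histories; basis ordering and
# the unitary's completion are illustrative choices.
z = [np.array([1, 0], complex), np.array([0, 1], complex)]   # |z+>, |z->
x_plus = (z[0] + z[1]) / np.sqrt(2)                          # |x+>

# Pointer states: index 0 = |omega>, 1 = |omega+>, 2 = |omega->
omega, omega_p, omega_m = np.eye(3, dtype=complex)

def swap(i, j, n=3):
    """Permutation unitary exchanging pointer states i and j."""
    S = np.eye(n, dtype=complex)
    S[[i, j]] = S[[j, i]]
    return S

# Unitary measurement interaction:
#   |z+>|omega> -> |z+>|omega+>,   |z->|omega> -> |z->|omega->
U = (np.kron(np.outer(z[0], z[0].conj()), swap(0, 1))
     + np.kron(np.outer(z[1], z[1].conj()), swap(0, 2)))

final = U @ np.kron(x_plus, omega)

def prob(spin, pointer):
    return abs(np.kron(spin, pointer).conj() @ final) ** 2

# Each history has weight 1/2, and the pointer is perfectly correlated
# with the spin (the cross term z+ with omega- has probability zero).
print(prob(z[0], omega_p), prob(z[1], omega_m), prob(z[0], omega_m))
```

The vanishing cross term is what expresses the correlation between the apparatus property and the particle property in this family of histories.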

This general procedure can be extended to a macroscopic measuring apparatus. The configuration space of possible states of the measurement apparatus is going to be very large, as is the set of states that correspond to the two outcomes, but again we can construct histories where the initial state is given by the particle in its initial state paired with the various states that represent primed measurement apparatus, through to the end results that have the measurement apparatus giving one result or the other.

We can also think about the case where there are successive measurements
at different times. This can be done when a particle is not destroyed by
the first measurement. The procedure here is similar to that of a single
measurement. One constructs the various possible histories, assigns
weights to each history, and from that calculates probabilities for
histories which satisfy one particular result at the first time, and then
a second result at a subsequent time. This procedure also, unlike many
approaches to quantum physics, allows us to construct conditional
probabilities, such as the probability that the measurement at time *t_{2}* will be *X* given that the measurement at time *t_{1}* was *Y*. In those psi-ontic interpretations which involve wavefunction collapse, the probability for a given result is given by the Born rule. However, the wavefunction is only defined at one particular time, and it is assumed that it collapses with each measurement.
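As a sketch of how such a conditional probability comes out of history weights, consider a qubit measured non-destructively at two times. The two-level system and the trivial dynamics between the measurements are my own illustrative assumptions:

```python
import numpy as np

# Conditional probabilities for successive measurements via history
# weights; system and dynamics are illustrative.
zp = np.array([1, 0], complex); zm = np.array([0, 1], complex)
xp = (zp + zm) / np.sqrt(2);    xm = (zp - zm) / np.sqrt(2)

T = np.eye(2)  # trivial evolution between measurement times
psi0 = xp      # initial state

def weight(first, second):
    """Weight of the history psi0 -> first (at t1) -> second (at t2)."""
    P1 = np.outer(first, first.conj())
    amp = second.conj() @ T @ P1 @ T @ psi0
    return abs(amp) ** 2

# P(X at t2 | Y at t1) = W(Y, X) / sum over outcomes X' of W(Y, X')
p_cond = weight(zp, xp) / (weight(zp, xp) + weight(zp, xm))
print(p_cond)
```

No collapse is invoked anywhere: conditioning is just a ratio of history weights within one consistent family.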

The wavefunction collapse model is seen as problematic for two reasons. The first is the question of what is special about a measurement that leads to a collapse. (This problem is partially but not fully solved by decoherence.) Secondly, there is the problem that the collapse is a non-local process. In the histories interpretation of quantum physics, there is no physical non-local effect. The individual particle is the physical beable. Its possible motion is described by the quantum evolution equations, and in each possibility there are no jumps from one place to another, or sudden collapses in anything physical.

Wavefunction collapse is thus not a physical effect but a mathematical
procedure used to calculate statistical correlations. It takes place in
the theorist's notebook rather than the experimental laboratory. The
procedure is in certain respects analogous to the Bayesian updating of a
classical probability on the basis of new information. The analogy is
not perfect: wavefunction collapse involves the pre-probabilities or
amplitudes rather than the probabilities themselves. This is obviously
not a physical effect, and only in a metaphorical sense can be said to
have been caused by the measurement. Indeed, we can calculate conditional probabilities for the outcome at the earlier time *t_{1}* based on the initial state and the outcome at the later time *t_{2}*, and it makes little sense to say that the later measurement affects the state at the earlier time.

So wavefunction collapse is not required to describe the interaction between a quantum system and the measuring device. Any result can be obtained by constructing an appropriate family of histories. This approach is more natural, more flexible, and avoids the philosophical problems associated with wavefunction collapse.

There have been numerous paradoxes associated with quantum physics. Griffiths' treatment of them is quite extensive; I will just describe his discussion of EPR type experiments below. But these paradoxes are not really problems. They either arise from treating wavefunction collapse as a physical effect, or from trying to smuggle classical assumptions into the quantum paradigm.

Physical theories should not be confused with physical reality. They are, at best, an abstraction of physical reality. At worst, they are only approximate or completely false. This is true of classical physics as much as it is of quantum physics. The phase space and Hilbert space are both mathematical constructs. Physical objects do not solve differential equations to determine where to go next.

There is thus no good reason to suppose that just because some mathematical object is useful in the theory that it plays a role in reality, unless, of course, it is used to represent something which is directly observed. Wavefunctions are merely used for the convenience of constructing the theory; they do not exist in the real world, or correspond to anything which does exist. They are not directly observed; there is no need to suppose that they exist in reality; and their role as a pre-probability most naturally leads to an interpretation where they are merely epistemic.

Two questions concern any theory of physics. The first is whether it is logically coherent; and this is (in principle) easy to check. The second, more subtle, problem concerns its relationship to the real world. This must naturally move beyond questions of mathematical proof, logical rigour, and even agreement with experiment (as just because a theory agrees with experiment, to within the experimental imprecision and uncertainty, does not mean that it is true). Of course, if a theory makes good philosophical sense and gives many predictions that agree with experiment, we have good reasons to believe that it, at least to a certain extent, corresponds to the real world in a certain way.

So how does a quantum way of looking at the world differ from a classical way of looking at the world? Firstly, quantum theory employs a wavefunction in a Hilbert space rather than points in a classical phase space to describe a physical system. This suggests (in Griffiths' version) that the quantum particle does not possess a precise position or precise momentum. (I would personally say that this is going too far: it merely shows that we cannot know the precise position or momentum of a quantum particle, if it has one.) This does not mean that quantum entities are ill-defined; a ray in the Hilbert space is as precise a specification as a point in the phase space. The classical concepts of location and momentum can only be used in an approximate way when applied to the quantum domain. The uncertainty principle arises because quantum particles are described by a different mathematical structure to classical particles: it simply reflects the nature of quantum reality, and that what does not exist cannot be measured.

Secondly, while classical physics is deterministic, quantum dynamical laws are stochastic. The future behaviour of a quantum system cannot be predicted with certainty, even when given a precise initial state. This is an intrinsic feature of the world, in contrast to classical physics where any apparent uncertainty is due to missing data about the underlying system. The Born rule does not enter as an approximation, but as an axiom of the theory.

Thirdly, in quantum physics, there are multiple incompatible ways in which the system can be described. Two seemingly incompatible results can both be said to be true because they are derived correctly from the same initial data, but in incompatible frameworks. There is no good classical analogue for this. However, to compare two results (and in particular to combine them into a single probability distribution) requires using compatible frameworks. In classical physics, one can in principle describe an object by listing every possible property, and assigning each as either true or false. This is not possible in quantum physics, because some properties can only be expressed in incompatible frameworks. To describe a quantum system, the theoretical physicist must select one particular framework. From an ontological point of view, no framework is better or worse than any other. The one which is chosen is simply the one which is most convenient to express the particular experimental results the physicist is interested in.

This freedom to choose a framework does not make quantum physics subjective. Two physicists who use the same framework will get the same results. Two physicists who use different frameworks will get different results, but this is not a contradiction because results are always expressed as being conditional on the framework used. Nor does this choice of framework change or affect physical reality. It merely constrains the physicist as he tries to make precise or probabilistic predictions about certain properties: this is only possible if those properties are consistent with the chosen framework. Finally, choosing one particular framework does not mean that a result calculated from a different framework is false.

This multiple incompatible framework rule applies at all physical scales. However, due to decoherence, the inconsistency between the frameworks decreases when the physical system becomes larger. Thus one can reasonably approximate the system as though the frameworks applicable to the system were consistent. The larger the system, the better this approximation will be. Thus classical physics, where all frameworks which can be used to describe a system are consistent, in this sense emerges as a very good approximation to the quantum system. Likewise, once we move to larger scales, the stochastic nature of quantum physics becomes obscured. The probability that a macroscopic system diverges wildly from the path of least action is sufficiently small that we would never observe it. Or, one needs a far more precise measurement than is possible with the naked eye to observe the "random" fluctuations from the classical path. Thus pretending that macroscopic objects obey a deterministic trajectory is a very good approximation to the actual world. Again, the larger the scale, the better the approximation. Thus the differences between quantum and classical physics become considerably less observable as we move to a macroscopic world. In this sense, we can say that classical physics emerges from quantum physics.

There are, however, three similarities between quantum physics, when thought of in terms of consistent families of histories, and classical physics. Firstly, measurements play no fundamental role in the physics. Measurements simply reveal the state of the quantum system shortly before the measurement took place. Secondly, both quantum and classical physics are consistent with the idea of an independent reality, a real world which does not depend on human thoughts or attitudes to it. Thirdly, quantum physics, like classical physics, is a purely local theory which does not require influences which propagate across space faster than the speed of light. The idea that physics is fundamentally non-local arises either from an idea that wavefunction collapse is a physical process, or by assuming hidden variables in addition to the quantum Hilbert space, or by employing arguments which violate the single framework rule.

Each of these similarities with classical physics has been challenged by various other interpretations of quantum physics, and the idea that there is a real world out there has been challenged by idealistic philosophers. But quantum physics does not give us reason to abandon them; and as they better explain the world we observe than the alternatives, we ought to retain them (even if we cannot strictly prove that there is a real world and it is not all in our heads). But quantum physics does show that the real world is very different to what was believed in pre-quantum days.

## EPR and Bell's inequalities

Griffiths' book discusses the various "paradoxes" commonly cited as following from quantum physics, and how they can be explained within consistent histories. I don't intend to discuss all of these here, but I will address the Einstein-Podolsky-Rosen paradox, and the phenomenon of quantum entanglement. Griffiths, as I tend to do, uses the Bohm version of the set-up: a spin zero particle decays into two spin half particles. We know that the spins of the two particles must be correlated, and opposite to each other in the same basis. A classical hidden variables theory could explain this correlation. But if we rotate the axis of the two measurement devices by a small amount, we expect the spins to be aligned some of the time. A classical hidden variables theory would give a prediction for the probability for the spins to be aligned (in terms of an inequality); quantum physics gives a different prediction (which can violate the inequality), and experiment agrees with the quantum physics prediction.

The calculation of the inequality is based on a number of assumptions, most importantly that there is no non-local interaction between the two particles, and that the spin states along the two angles are well defined from the time of decay (the hidden variables). As Griffiths points out, however, this second assumption is contrary to the principles of quantum physics. The projectors into spin states along different axes are inconsistent with each other, so we cannot say that a quantum particle has a definite state along one axis and also along a slightly different axis.

We need some notation to represent this experiment in terms of families of histories. First of all, I consider the case where the measured spins are aligned. Ψ represents the parent spin-0 particle. The two daughter particles are *a* and *b*. The particle states are *z_{a}^{+}* for particle *a* with positive spin along the *z*-axis, and *Z_{a}^{+}* represents the corresponding detector state. We can then write the possible evolutions of the particles in terms of the family of histories, up until the point where we measure the first spin:

Ψ ⊙ z_{a}^{+}z_{b}^{-} ⊙ Z_{a}^{+}z_{b}^{-}

Ψ ⊙ z_{a}^{-}z_{b}^{+} ⊙ Z_{a}^{-}z_{b}^{+}

From here we can calculate conditional probabilities for the various possible combinations of results, and we find that the probability of the two detectors recording the same spin is zero.

If the detectors are aligned along a different direction, the *x* direction, then we can express the family of histories as

Ψ ⊙ x_{a}^{+}x_{b}^{-} ⊙ X_{a}^{+}x_{b}^{-}

Ψ ⊙ x_{a}^{-}x_{b}^{+} ⊙ X_{a}^{-}x_{b}^{+},

and again we can calculate conditional probabilities for the measurements of these two families, and find that the spins are always going to be anti-correlated.

Recall, that we are free to use any consistent set of operators to
represent the system. We choose the set which is most convenient
for whichever measurement we are going to make. Selecting projectors along
the *z*-axis does not state that the particle itself has a spin along
that axis; it is merely a tool used to predict (probabilistically) what
we would measure
should we entangle the particle with a detector aligned along that axis.
But, in particular, measuring the spin of one particle has no effect
on the spin of the other particle. It merely tells us which of the
two branches best describes the actual paths of
the particles, when we parametrise the system in this particular basis.

We can also consider the case where we mix the two measurements. Here the expression of the possible histories is a bit more complex:

Ψ ⊙ z_{a}^{+}x_{b}^{-} ⊙ Z_{a}^{+}x_{b}^{-}

Ψ ⊙ z_{a}^{-}x_{b}^{+} ⊙ Z_{a}^{-}x_{b}^{+}

Ψ ⊙ z_{a}^{-}x_{b}^{-} ⊙ Z_{a}^{-}x_{b}^{-}

Ψ ⊙ z_{a}^{+}x_{b}^{+} ⊙ Z_{a}^{+}x_{b}^{+}
From this, we deduce using the usual methods that the probability of measuring *X_{b}^{-}* conditional on measuring *Z_{a}^{+}* is 1/2. Again, the measurement on the first particle does not influence the second particle; it merely selects two of the histories in the family from which we can calculate the conditional probabilities.
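The 1/2 figure can be verified in the same spirit. Assuming the singlet initial state and writing |x−> = (|z+> − |z−>)/√2, a short calculation (again my own sketch, outside the history notation) reproduces the conditional probability:

```python
from math import sqrt

def kron(u, v):
    return [ui * vj for ui in u for vj in v]

up, down = [1.0, 0.0], [0.0, 1.0]
x_minus = [1 / sqrt(2), -1 / sqrt(2)]   # |x-> = (|z+> - |z->)/sqrt(2)

# Singlet initial state (|z+ z-> - |z- z+>)/sqrt(2).
singlet = [(p - q) / sqrt(2) for p, q in zip(kron(up, down), kron(down, up))]

def prob(sa, sb):
    amplitude = sum(c * s for c, s in zip(kron(sa, sb), singlet))
    return amplitude ** 2

p_joint = prob(up, x_minus)               # P(z_a^+ and x_b^-) = 1/4
p_z_plus = prob(up, up) + prob(up, down)  # P(z_a^+) = 1/2
print(p_joint / p_z_plus)                 # ~0.5
```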

Griffiths then expresses the paradox as follows:

1. Suppose the *z*-spin is measured for particle *a*. This allows us to predict the *z*-spin measurement for particle *b*.
2. Suppose the *x*-spin is measured for particle *b*. This allows us to predict the *x*-spin measurement for particle *a*.
3. Particles *a* and *b* are isolated from each other, and consequently cannot be affected by measurements carried out on the other particle.
4. Consequently, particle *b* must possess well-defined values for the spin along both the *x* and *z* directions.
5. This is, however, contrary to the principles of quantum physics.

The problem with this argument is in point 4, which implicitly violates the single framework rule of quantum physics. To express this statement in terms of families of histories would be to attempt to merge together two families of histories, one where particle *a* is expressed in the *x*-spin basis, and the other where it is expressed in the *z*-spin basis. Statement 4, while it seems natural from a classical mindset, violates the rules for constructing quantum histories, and is thus a nonsensical statement.

In other words, statement 1 is valid in a circumstance where the *z*-spin is measured, and statement 2 is valid in a circumstance where the *x*-spin is measured. But one cannot combine these statements, because one cannot
construct a consistent basis where one simultaneously measures the
*z* and *x* spins. To merge together the two statements
wrongly supposes that there is a single framework that describes all the
particles. The correlation between the states is just a logical dependence
brought about by the choice of frameworks. This is merely something that
takes place in the theorist's notebook, and does not indicate any change
in the particles.

A hidden variable theory (and I am still paraphrasing Griffiths' book rather than giving my own opinion) is an approach to quantum physics which supposes that the Hilbert space of the standard analysis is supplemented by a set of hidden variables which behave analogously to those seen in classical mechanics. Well-known hidden variable theories were formulated by Bohm and de Broglie. The simplest and most naive hidden variable theory is one in which the different components of the spin simultaneously possess well-defined values. More sophisticated models also exist. Bell showed that hidden variable models of this kind cannot reproduce the standard quantum mechanical correlation between the spins of the two particles if one supposes that there is no long-range influence between them.

For example, consider three spin axes, separated by an angle of *2π/3*, denoted as *u*, *v* and *x*. If we let *α(w_{a}) = +1 or -1* indicate the result of the measurement on particle *a* along an axis denoted by *w*, and *β(w_{b})* a similar result for particle *b*, then the average correlation between the two spin measurements, *C(w_{a}, w_{b})*, is -1 if *w_{a} = w_{b}* and 1/2 otherwise.
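For spin-1/2 particles in the singlet state, the correlation of the two ±1 results for detector axes separated by an angle θ is -cos θ (a standard result which I quote rather than derive here). The values just stated follow immediately:

```python
from math import cos, pi

# Singlet correlation of the two +/-1 spin results for detector
# axes separated by angle theta: C(theta) = -cos(theta).
def correlation(theta):
    return -cos(theta)

print(correlation(0.0))         # -1.0: same axis, perfect anti-correlation
print(correlation(2 * pi / 3))  # ~0.5: axes separated by 2*pi/3
```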
Suppose we try to construct a hidden variable model which reproduces this correlation function. The simplest way of doing so is to assign each particle an instruction set giving the spin along each of the three axes, e.g. (1,1,1) to indicate positive spin in each direction. There are 8 such sets for particle *a* and 8 such sets for particle *b*, giving a total of 64 possibilities. The perfect anti-correlation when *w_{a} = w_{b}* means that the set for particle *b* must be anti-correlated with that of particle *a*, reducing us to 8 options.

Denoting P(1,1,1) as the probability that particle *a* has the
instruction set (1,1,1), one can show that the sum of the three correlation functions is

*C(u,v) + C(u,x) + C(x,v) = - 3P(1,1,1) -3P(-1,-1,-1) + P(1,1,-1) + P(1,-1,1) + P(-1,1,1) + P(1,-1,-1) + P(-1,1,-1) + P(-1,-1,1)*

Given that the probabilities add up to 1, this sum must lie between -3 and 1 (it equals 1 - 4[P(1,1,1) + P(-1,-1,-1)]). However, the quantum mechanical result is 3/2. Thus this hidden variable approach cannot reproduce the results of quantum physics.
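The bound can be confirmed by brute force. This sketch (my own, following the argument above) enumerates the 8 possible instruction sets for particle *a*, with particle *b* assumed perfectly anti-correlated, and evaluates the sum of the three correlation functions for each definite set:

```python
from itertools import product

# Each instruction set assigns a spin (+1 or -1) along axes u, v, x.
# With particle b perfectly anti-correlated, each correlation term
# contributes -s_w1 * s_w2, so for a definite instruction set the sum
# C(u,v) + C(u,x) + C(x,v) is:
def correlation_sum(su, sv, sx):
    return -(su * sv) - (su * sx) - (sx * sv)

sums = [correlation_sum(*s) for s in product([1, -1], repeat=3)]
print(min(sums), max(sums))   # -3 1

# Any probabilistic mixture of the 8 sets therefore lies in [-3, 1],
# short of the quantum mechanical value 3 * (1/2) = 3/2.
```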

This argument (by Clauser, Horne, Shimony and Holt) can be generalised.
Suppose that the result depends on both the orientation of the detectors and a set of hidden variables *λ*, e.g. *α(w_{a},λ) = +1 or -1*, and that the probability for a particular configuration of hidden variables is *p(λ)*. The correlation function between the two results is then

*C(w_{a},w_{b}) = ∑_{λ} p(λ) α(w_{a},λ) β(w_{b},λ)*

It can then be shown that the correlation functions can be combined such that

*|C(a,b) + C(a,b') + C(a',b) - C(a',b')| ≤ 2*

This inequality is violated by quantum physics (backed up by experiment), so one of the assumptions made in the derivation must be incorrect.
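For illustration, here is the standard choice of detector angles for which the quantum singlet correlations, C(w_1, w_2) = -cos(w_1 - w_2), push the left-hand side up to 2√2 (the angle choice is the textbook one, not taken from Griffiths):

```python
from math import cos, pi, sqrt

# Singlet correlation for detector axes at angles t1 and t2.
def C(t1, t2):
    return -cos(t1 - t2)

# Standard CHSH angle choice: a = 0, a' = pi/2, b = pi/4, b' = -pi/4.
a, a_p, b, b_p = 0.0, pi / 2, pi / 4, -pi / 4
S = abs(C(a, b) + C(a, b_p) + C(a_p, b) - C(a_p, b_p))
print(S)   # ~2.828, i.e. 2*sqrt(2), exceeding the bound of 2
```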

These include the assumption that there are no non-local influences on the measurements (which is what people tend to focus on). But the first and most basic assumption is the existence of hidden variables with a mathematical structure that differs from the standard Hilbert space of quantum physics. In classical physics, this assumption is plausible. In quantum physics, one can only consider properties as pre-existing if they are defined in the framework used to construct the system, and this is only allowed if all the properties under consideration exist in the same framework. The first error made is in the assumption that a function *α(w_{a},λ)* can be constructed for different directions *w_{a}*, and that these can legitimately be combined into a single equation.
The single framework rule of quantum physics states that one can only combine histories (or amplitudes derived from those histories) if they are expressed in the same framework, with consistent bases used at the key steps. To derive the inequality, we need to combine histories derived from two incompatible frameworks. In this example, we are combining histories where the final result is expressed in the *u* basis in one, and in the *v* basis in the other. The *u* and *v* bases are incompatible with each other. The experiment violates the inequalities because the reasoning used to derive the inequality violates the single framework rule and is thus invalid for a quantum system.

In other words, the various inequalities (Bell's and others) are violated in quantum physics because the reasoning used to derive them is invalid when applied to quantum physics: it violates the single-framework rule, the most fundamental rule of quantum reasoning. In any logical argument, we start with various premises, manipulate those premises with various well-defined rules, and reach a conclusion. If the conclusion is wrong, then there are two possible explanations: 1) one of the premises is incorrect; or 2) we made a mistake in applying the rules of reasoning. For example, I might reason: premise 1) I start with two beans; premise 2) you start with three beans; conclusion: between us we have fewer than four beans. Clearly the conclusion is wrong; but it is wrong not because either of the premises is wrong, but because I am incapable of adding 2 and 3. I have violated the rules that govern arithmetical reasoning. The process of reasoning leading to Bell's inequalities is a more subtle analogue of this. Griffiths has stated that reasoning in quantum and classical physics obeys different rules. Bell's mistake was to overlook this, and to suppose that the same rules that apply to classical reasoning also apply when we consider a quantum system. The violation of Bell's inequality shows that a "mistake" is made somewhere in the derivation of that inequality. The mistake, however, was not in the stated premises, but in the rules used to argue from premise to conclusion: the reasoning used to derive the conclusions from the premises is invalid in a quantum system. This means that the violation of Bell's inequalities does not show that any of Bell's stated premises are incorrect, including the premise concerning non-local interactions.

Since the assumption that quantum physics can be described by a hidden variable theory is what fails when incompatible frameworks are needed to parametrise the measurement results, it is not necessary to conclude that the other assumptions in the derivation of the inequality, such as locality, are wrong.

The claim is often made that hidden variable theories must be non-local simply because the observed correlations violate the inequalities. But this is incorrect. What follows is that they must be non-local or have some other peculiarity. And they do have that peculiarity: they supplement the standard quantum Hilbert space with something else that obeys different rules. As such, the analysis tells us nothing about a pure quantum theory that only uses the quantum Hilbert space. As we have seen, the measurement on particle *a* does not influence particle *b*. It merely restricts the family of histories, allowing us to construct a probability for the outcome of the measurement on particle *b* that is conditional on both the initial state and the measurement on particle *a*. The actual result for particle *b* in a particular run of the experiment is still entirely indeterminate. The conditional probabilities emerge when we try to predict results for an ensemble of particles; they do not predict the result of a single experiment. The result we get when the two axes are aligned is simply a limiting case.

The lesson to be learnt from the Bell inequalities is that it is difficult to construct a plausible hidden variables theory which will mimic the results of quantum physics. Such a theory must either exhibit non-localities which violate relativity (this is Griffiths' claim), or have backward in time causation, or some other pathology contrary to everyday experience. This is a high price to pay to have a theory which is just a little more classical than quantum physics.

## Critiques

The strengths of the consistent histories approach are obvious. Firstly, it is a very natural interpretation of quantum physics: it is basically just the path integral formulation of quantum physics interpreted literally. There are no issues around wavefunction collapse or measurement. There are no non-intuitive things such as particles being in two places at once. It is fully consistent with relativistic quantum field theory. We do not need to postulate non-observable hidden variables. The interpretation fits most comfortably with the logical interpretation of probability. To my mind that is an advantage, as that interpretation has the most rigorous basis and fewest problems.

We have to accept a few differences from the philosophy of classical physics. The motion of particles is indeterminate. We have to consider that the system can be parametrised in incompatible bases. We have to express our uncertainty primarily in terms of amplitudes, or pre-probabilities (in Griffiths' language), or likelihoods (in my own terminology). None of these, to my mind, is a massive deal-breaker or hopelessly unintuitive. Alone among the interpretations of quantum physics (which are consistent with the physics), the consistent histories approach does not challenge any of our fundamental intuitions about reality.

The central object of quantum physics, the wavefunction, is an expression of our uncertainty rather than something physically real. Again, I don't see a fundamental problem with this. If we view the motion of physical particles as wholly indeterminate, then we will need some way of expressing the uncertainty in our knowledge of the system. We would thus expect the central object of the theory to be an expression of our knowledge of the state of the beables, rather than the representation of a beable in itself.

Are there any drawbacks or criticisms? There are, I think, a few main ones. The first is whether the philosophy captures the beables of the system comprehensively enough. It is one thing to say that they are the particles, but beyond that it does not really go into details. The quantum formalism is used to predict the results of measurements. Outside of decoherence, we cannot say how best to describe the state of the particle. I personally do not view this as much of a problem; after all, all the interpretation requires is that the particle evolves into some state at each time. Without measurement, we cannot say what this state is. We cannot even say what basis it is in. So we represent the evolution in terms of families of histories, and are thankful that in quantum physics the results work out the same way whichever basis we use for the intermediate states.

This is the best we can do to capture a wholly indeterminate system. And I don't consider it a problem to say that the mathematical representation of reality doesn't capture every aspect of reality. There is a family of different frameworks that could represent reality in the intermediate times between measurements, and we have to arbitrarily choose one of them. But this is one of those aspects where the theory doesn't correspond perfectly to reality. In reality, one framework at each moment in time is sufficient to describe the particle state. We just don't know what it is.

Of course, if one really needs something more concrete, one can always say that there is a preferred basis in reality (such as the location basis), and that the free particles are really always at a particular location, even if we don't know what that is. Obviously, when they become entangled with a larger substance and there is decoherence, then there is a different well-defined basis which describes the particle. One can thus say that the particles are always in a defined state in a given basis. The freedom to choose different bases is merely an artifact of the mathematical representation: convenient to perform calculations, but not really having ontological significance. I don't think Griffiths would agree with this way of phrasing the interpretation, but there is nothing wrong with it if one requires more ontological certitude.

The risk, of course, with this is that the consistent histories approach might be seen as no better than the instrumentalist Copenhagen interpretation: a prescription to extract predictions from experiment, without saying anything concrete about reality itself. There are differences, most importantly in how the wavefunction is regarded. In Copenhagen, it is interpreted ontologically; here it is interpreted as an expression of our knowledge. Indeed, consistent histories is sometimes referred to as Copenhagen done right.

There are two things to say in response to this. The first is that consistent histories is not entirely instrumentalist: it does specify what the beables of the system are, and does describe how their dynamics should be understood. The issue is that because the dynamics is entirely indeterminate, and because of the single framework rule and the way we parametrise possible observations, we are very limited in what we can say about the particles between measurements. Secondly, the overall framework can be thought of as a prescription for how we make predictions for the outcome of experimental measurements given a particular initial state. But then, that's the goal of quantum theory itself. It is thus an advantage that the end-point of consistent histories is such a prescription. The goal of the philosophy of quantum physics is to explain why the prescription works in terms of the fundamental physical beables and a few postulates about their motion. The consistent histories approach does that. It states that the beables are the particles themselves -- the quarks, electrons, photons, gluons and so on. That the particles move indeterminately, but that each possible motion is accompanied by a specific weight that describes the likelihood of that outcome. That leads naturally to the concept of histories.
Then there is a prescription for combining the histories that lead to a possible outcome such that we can extract a probabilistic prediction for the frequency distribution when the experiment is repeated a sufficient number of times. There are a few different ways one could interpret the intermediate states in this framework, when the precise state of the physical system is not known, but I don't think that vagueness is sufficient to say that the interpretation is just instrumentalist.

Secondly, one can criticise the single framework rule, which seems to be an addition to the basic structure of quantum physics. It certainly goes beyond our classical intuitions -- although in reality it is the idea that the same quantum state can be expressed in different incompatible representations which is an offence to classical intuition. We can't avoid that idea in quantum physics -- it is built into the mathematical structure. The single framework rule is then added in order to make sense of these incompatible representations, and to ensure consistency. It can be defended as little more than the requirement that we don't compare apples with oranges. The pre-probabilities (and thus also the probabilities) are conditional; we can add to these conditions the framework used to express those amplitudes. Even in classical probability theory, one cannot combine two probabilities which are conditional on different things; the resulting expression is not in itself a probability. The single framework rule is simply an extension of this to the case of pre-probabilities.

Thirdly, I am not fully convinced that the approach deals successfully with the problems concerning non-locality. Griffiths' analysis of Bell's and similar inequalities is correct. There is an additional assumption in the derivation of the inequalities which violates the single framework rule, and is thus inadmissible in quantum physics. Thus Bell's inequality does not formally prove that a quantum system must be non-local (or have backwards causation, or violate measurement independence, or one of the other standard assumptions). But this does not show that there are no non-local influences. It merely shows that Bell's theorem and the others like it do not by themselves prove that there are non-local influences. It could be that the locality assumption in the derivation of the inequalities is violated as well. In the case of the EPR experiment, if we suppose that the motion of each particle is independent of the other (i.e. there are no non-local influences), that the motion of each particle is indeterminate, and that the basis is not fixed at the moment of decay, then there is no obvious reason why they should always emerge with opposite spins, no matter which axis we use to measure the spin. True, the mathematics shows that there is a zero probability for every other outcome, but this is merely a means to predict the result. It describes what could happen to the particles, given the rules of quantum physics. But there is still the question of what actually happens to them, and why those predictions are validated in practice. I do not think the consistent histories approach answers this question to my satisfaction; it merely describes that this is what happens.

We rule out classical hidden variable theories (as they are inconsistent with the single framework rule). We rule out physical non-local influences (as they seem to violate at least the spirit of special relativity). We rule out backwards causation and multiple results being realised, as they are too big a price to pay. But then we come to the problem: either the beables are in a well-defined physical state between measurements, even if we don't know what basis that state is in, or they are not. If they are not, then the interpretation leaves important questions unanswered. If they are, then we have a problem in explaining why there are these apparent non-local correlations between physical results.

## Conclusion

I think that there is a lot that is good in the consistent histories approach. It answers most of the paradoxes of quantum physics. Indeed, it basically just interprets the mathematics literally (and, unlike the Everett interpretation which makes the same claim, it correctly interprets the wavefunction as epistemic rather than ontic and thus has no issues in the interpretation of probability). But I don't think, at least in Griffiths' presentation, that it fully resolves the issues around the long-range correlations we see in entanglement. This doesn't mean that the approach is wholly incorrect; and I do think that it is a clear step in the right direction. But it does need supplementing with some additional ideas to resolve the few problems that remain.

Equally, this interpretation only discusses the dynamics of quantum physics. We also need to be able to understand how the properties of complex substances emerge from their simpler parts, and consistent histories (like most of the interpretations discussed) does not really address this question. I will turn to the question of statics in a few posts' time.

**Reader Comments:**

No, Joe, you aren't. There are two possible meanings, one epistemic, the other ontic:

1) The particles' motion is well-defined, but impossible to predict exactly (epistemic);

2) The particles' motion isn't even defined; a particle is potentially in any state allowed by conservation laws, but actually in none, until measurement forces it to be something (ontic).

IMO 1) is basically the pilot wave interpretation - you need the pilot wave, or something like it, to account for EPR correlations. And 2) is an updated Copenhagen interpretation - it solves some of Copenhagen's issues, but it still leaves up in the air what the heck a "measurement" is, out in the real world.

The consistent histories approach is compatible with both readings; but that means it's not an *interpretation*, but only a reformulation of the physical theory. It doesn't tell us, as an interpretation does, what the beables quantum theory describes actually are. It only rephrases what the mathematics of QFT tells us, laying out the assumptions on which the theory depends so we understand why the "paradoxes" arise. (For example, the single framework rule is really telling us when we must expect taking one measurement to disturb the value of another.) That is, the approach is an advance in our understanding of the physical theory, but it doesn't resolve the metaphysical question of just what the theory is *about*.

**Indeterminate motion**

When I speak of indeterminate motion, I mean it in contrast to what we see in Newtonian physics.

In Newtonian physics, you have a set of particles, parametrised by a location and momentum. There are various forces, such as the gravitational force, F = - Gm1 m2/r^2. Then the equation of motion, F = ma. The system boils down to a differential equation, which can be solved exactly (with the initial conditions supplying the constants of integration), so the future trajectory of the particles is determined exactly and precisely. Given the initial conditions, the particles are certain to be in a specified location at the next moment in time. Set up two experimental runs with exactly the same initial conditions and you will get precisely the same result.

For indeterministic motion in quantum physics, you have a set of particles parametrised by location (and spin states, etc.) at the first moment in time. But, aside from the various restrictions due to locality and symmetry etc., there is no rule saying where the particles will be at the next moment in time. If they start at location (0,0,0) at time 0, then at time delta t they could be at location (delta x,0,0), or (-delta x,0,0), or (0,delta y,0), or (0,0,0), or any rotation of those vectors, and all those options are permitted in the theory. There is no way of predicting which one will happen in practice (albeit that some options are more likely than others). Set up two experimental runs with exactly the same initial conditions and you are pretty much guaranteed to get different results. The wavefunction is simply a means by which we parametrise the various options. One happens in practice; we can't predict which one it is, so we write down a wavefunction from which we can construct conditional probabilities, which can be compared to a frequency distribution and tested against experiment.
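The contrast can be caricatured in a few lines (a toy of my own, with an arbitrary random walk standing in for quantum motion, not any actual dynamics): a deterministic rule maps equal initial conditions to equal trajectories, while an indeterministic rule does not.

```python
import random

# Deterministic rule: the same initial condition always yields the
# same final position (a stand-in for integrating F = ma exactly).
def deterministic(x0, steps):
    x = x0
    for _ in range(steps):
        x += 1
    return x

# Indeterministic rule: each step is drawn at random; the theory only
# constrains which steps are possible and how likely they are.
def indeterministic(x0, steps, rng):
    x = x0
    for _ in range(steps):
        x += rng.choice([-1, 0, 1])
    return x

print(deterministic(0, 100) == deterministic(0, 100))   # True
print(indeterministic(0, 100, random.Random(1)),
      indeterministic(0, 100, random.Random(2)))        # typically differ
```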

This is not the Copenhagen interpretation because the Copenhagen interpretation treats the wavefunction as ontological, and requires a physical collapse mechanism to make predictions. This is not the Pilot wave interpretation because a) it doesn't require a Pilot wave -- just the particles; and b) the Pilot wave interpretation (at least in quantum wave mechanics) is deterministic, so if you start with the same initial conditions for both the observed and hidden variables you will get the same result.

I agree that EPR correlations are an issue for the interpretation as set forward by the likes of Griffiths and Omnes, as mentioned at the end of the post, but I don't think that a Pilot wave is required to account for them, as I will discuss in a subsequent post.

The consistent histories approach does have beables (even if its creators don't focus on that) -- the physical particles we observe. These are interpreted as point like particles, although once entangled with larger systems they become absorbed into those systems.

**Summary and basic objection**

I’m going to summarize how I understand the consistent histories interpretation (CHI) based on this post and other resources; please correct me if I’m wrong. CHI holds that the probabilities we extract from the Born rule are not merely probabilities for measurement results (as instrumentalist readings of QM would say) but are probabilities for the physical system to actually be some way. However, QM shows us that we can’t consistently attribute probabilities for all combinations of properties the way we naively expect: we can only do so within a framework as defined in the post. The weirdness of QM comes from this fact, and the fact that no one framework is the “correct” one; all frameworks are valid if used correctly.

Despite protestations to the contrary, this looks a lot like philosophical relativism to me. You say no, because the probabilities are all objective; merely conditional. But what are they conditional on? On the framework being used… and frameworks, by all appearances, are human constructs. (Moreover, unless I’m mistaken, the consistency condition for families of histories that can be jointly assigned probabilities depends on the supposedly epistemic quantum state.) In short, CHI is saying that reality depends on how we choose to describe it. And that is abject nonsense. The objectivity of the probabilities does not remove this fundamental relativity from CHI (note that objective and relative are not opposites, strictly speaking; the correct pairs of opposites are objective vs subjective and absolute vs relative).

In other words, “Copenhagen done right” retains the most philosophically pernicious feature of Copenhagen. In Copenhagen the moon is only there if you’re looking at it; in Consistent Histories the moon is only there if you’re describing reality in a framework that includes moon-existence as a property. In either case, reality has an essential dependence on the observer/describer.

**Ways to avoid the relativism objection**

One could try to avoid this objection by saying that frameworks/consistent families of histories/etc are not human constructs, but are part of the objective structure of nature. Very well; let's look at this structure a little more closely.

The probabilities are supposed to be probabilities for the physical system to be a certain way; that means that the system is a certain way even though we usually don't know what way that is. The quantum state doesn’t tell us what it is, so this is a hidden variables theory. The catch is that the ways the system can be depend on the framework being used. The only way I can make sense of this is if the true, complete description of the physical system is expressible as a function from frameworks to sets of properties and the values that those properties have. (I.e., unlike a PWT where the hidden variables are values of a given set of properties - prototypically, the positions of particles - the hidden variables in CHI are functions from frameworks to sets of property-value pairs.)

Without the hidden variable structure I just described, CHI collapses back into philosophical relativism or mere instrumentalism. With that structure - well, maybe one could build a coherent theory with it, but it is an exceedingly odd and ad-hoc looking structure, and I do wonder if it actually does provide a reasonable ontology and explanation for the world of everyday experience. I think work would need to be done to show that it does not face similar troubles to Copenhagen or MWI in resolving the basic “ontology” problem of QM, though I think the “framework->(property,value)” structure does have resources that the bare quantum state does not have in that regard. So perhaps it can be done.

The other alternative, of course, is to take one framework to be the physically correct one - a framework where the sample space is the space of possible particle positions, for example. If such a framework is valid and there is a reasonable consistent family of histories to work with (and that might be a big “if”), such a theory could be on par with PWT in my books. (In fact, it might even look a bit like a PWT, maybe with a stochastic guidance law.)

(For the ordinary PWT of non-relativistic QM, it is actually possible to eliminate the pilot wave and only work with the particle positions and the guidance law, by solving an equation for the configuration space velocity field directly instead of deriving that velocity field from the wavefunction. So a “privileged framework” version of CHI could be quite close to PWT, despite the epistemic vs. ontological status assigned to the quantum state in those theories.)

**A comment on probabilities and psi-epistemism**

Your remarks on probability in this post do go some way to alleviate my concerns about psi-epistemism at a general level, though I still would not say that the quantum states/amplitudes are representations of uncertainty. It seems, rather, that they are a tool for assigning probabilities; it is still the probabilities themselves that are the representation of uncertainty. But maybe this is merely a verbal disagreement or a difference of emphasis between us.

**Non-locality**

I believe more can be said regarding the way CHI handles the non-locality objection. For one, by reproducing the standard QM probabilities, CHI demonstrably still violates Bell’s criterion of locality (which is not the same thing as the Bell inequalities). And for two, even allowing for the weird framework-dependence of reality according to CHI, any theory which does satisfy Bell locality can be shown to obey the Bell inequalities without assuming anything about hidden variables, via an EPR-type argument.

Once again, I refer to Travis Norsen’s articles on the subject, particularly “J.S. Bell’s Concept of Local Causality”, “EPR and Bell Locality”, and “Bell Locality and the Non-Local Character of Nature”. Feel free to comment if you disagree, but (assuming I am correct that CHI can be described with the unusual hidden variable structure I discussed above) Norsen’s demonstrations go through even for CHI. CHI does not avoid non-locality.

**Conclusion**

Thanks for this post; I had not looked very deeply into CHI prior to this, and it was helpful. Looking forward to your exposition of your own interpretation. Best regards.

Philosophical relativism is the theory that propositions can be true for one person and false for another. Consistent histories doesn't say *that*; it says there are pairs of quantum properties whose actual values can't both be known. A framework is just a maximal set of questions that *can* all be answered at once. That there are multiple such sets for any quantum system, and that some pairs of questions aren't both in the same set, is already implied in the math of quantum theory - the single framework rule just restates that in plain language.

To say that you can know any one thing about a quantum system, but you can't know everything about it (which is what the consistent histories approach says) is far short of relativism.

It seems to be a general rule that every interpretation of quantum theory has to go beyond the equations and introduce an extra factor. When the wavefunction is taken as ontic (Copenhagen, objective collapse, many worlds), the extra factor is measurement, which, by changing the wavefunction, has to be a physical process. When the wavefunction is seen as epistemic, measurement is just receiving new information, and ceases to be an issue; but then EPR correlations become a mystery that requires hidden variables to explain - a pilot wave, for de Broglie and Bohm; or, in the transactional interpretation Dr. Cundy hasn't yet discussed, the unknowable future.

Griffiths and Omnes (and our host) are clearly on the epistemic side, but Griffiths did not commit himself to any sort of hidden variables, leaving his interpretation incomplete. I too look forward to how Dr. Cundy proposes to account for EPR correlations.

Michael,

*"Philosophical relativism is the theory that propositions can be true for one person and false for another. Consistent histories doesn't say that; it says there are pairs of quantum properties whose actual values can't both be known."*

I would characterize philosophical relativism as the position that reality is perspective-dependent in some fundamental way. And CHI clearly entails such dependence if the frameworks are merely human constructs - if the true and complete description of reality depends on what set of questions you ask about it. (Rather than, as I suggested, the true and complete description of reality consisting of sets of compatible properties, and their values given such a set - which is weird, but at least it is not relativism.)

*"A framework is just a maximal set of questions that can all be answered at once. That there are multiple such sets for any quantum system, and that some pairs of questions aren't both in the same set, is already implied in the math of quantum theory - the single framework rule just restates that in plain language."*

This is tendentious. What is implied in the math of quantum theory is the existence of certain kinds of operators on Hilbert space. Whether those operators are equivalent to "questions you can ask about a quantum system" is a matter of interpretation, not something that comes from the math alone. The operators don't play any such role in PWT at the fundamental level, for example.

The fact that you may interpret CHI as merely being about what we can know about quantum systems (an interpretation that leans more anti-realist) shows that it is not as clear an interpretation as it could or should be. As I was introduced to CHI (see the section on decoherent histories in Sheldon Goldstein's article "Quantum Theory Without Observers"), it is about the way quantum systems actually are (an interpretation that leans more in the realist direction), not just about what we can know.

**Philosophical relativism**

I would disagree that consistent histories implies philosophical relativism.

When we construct a framework, we do so in order to make predictions for measurements (or, more precisely, for when the particle encounters a larger system and decoheres, but I will use "measurement" as shorthand for all such events).

The framework entails selecting a particular basis at a given moment of time. For the final state, just before the measurement, that is determined by what measurement we perform. So, for example, in a Stern-Gerlach experiment set up to measure the z-spin of a particle, we express the basis of that particle in terms of the z-spin. From that we compute a probability distribution which can be compared with the frequency distribution we obtain after repeating the experiment a large number of times. We could, of course, use a framework where at that final time the basis used describes the x-spin. But the distribution derived from this would be meaningless (in the sense that it says nothing about reality), as we are not measuring the x-spin. So, in this example, the choice of framework is not arbitrary, but depends on what system (whether a measuring device or something else) the particle is going to encounter which will cause it to decohere. I have no objection to saying that the prediction for a measurement is conditional on what property is being measured.
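Since this paragraph walks through a concrete Born-rule calculation, a minimal numerical sketch may help (the particular spin state and the numpy setup are my own illustrative choices, not anything from the post): the same state yields one distribution over z-spin outcomes and a different one over x-spin outcomes, and only the distribution matching the property actually measured is the one to compare with observed frequencies.

```python
import numpy as np

# z-basis spin-1/2 states (computational basis).
z_up = np.array([1, 0], dtype=complex)
z_dn = np.array([0, 1], dtype=complex)
# x-basis states, written in the z-basis.
x_up = (z_up + z_dn) / np.sqrt(2)
x_dn = (z_up - z_dn) / np.sqrt(2)

# An arbitrary normalised spin state prepared before the magnet.
psi = np.array([np.cos(0.3), np.exp(0.4j) * np.sin(0.3)])

def born_probs(state, basis):
    """Born-rule probabilities |<b|state>|^2 for each basis vector b."""
    return [abs(np.vdot(b, state)) ** 2 for b in basis]

p_z = born_probs(psi, [z_up, z_dn])  # meaningful if the apparatus measures S_z
p_x = born_probs(psi, [x_up, x_dn])  # meaningful only if it measures S_x

# Each is a valid probability distribution, but they answer different questions.
assert np.isclose(sum(p_z), 1.0) and np.isclose(sum(p_x), 1.0)
```

Both distributions are mathematically well defined; the point of the paragraph above is that only the one corresponding to the decoherence-inducing apparatus says anything about reality.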

Of course, the framework could also assign a basis to intermediate times where there is no measurement (or other source of decoherence). This choice is arbitrary, but as long as you obey the single framework rule etc., it makes no difference to the final calculation of the probability distribution for the final measurement. You will get the same result no matter which basis you choose for those intermediate times. By choosing a basis, you are not making any ontological statement about the state of the particle, but merely doing so in the theorist's notebook as part of the calculation of the final probability distribution. The statement is either that you don't know what the spin state is at those intermediate times and thus which basis describes the particle (which would be my position), or (in the more extreme position) that the concept of spin states is only meaningful when the particle decoheres, and until then it is just wrong to talk about it. The final probability is not conditional on the choice of basis used for these intermediate times, since you get the same result no matter what basis you use.

This looks to be similar to your way of avoiding relativism. I do have a few quibbles about what you write. "The probabilities are supposed to be probabilities for the physical system to be a certain way; that means that the system is a certain way even though we usually don't know what way that is." Don't forget that probabilities don't apply to individual systems, but are only used to predict a frequency distribution. The probability doesn't say anything about whether a particular run of the experiment is in a certain way. "The system is in a certain way even though we usually don't know what way that is" would correspond to my own view (although possibly not that of Griffiths or Omnes). "The catch is that the ways the system can be depend on the framework being used." Here I disagree. Except where there is a decoherence event and the choice of that part of the framework is forced on us, the ways the system can be depend on the possible frameworks which could be used. We choose one basis for the sake of making the calculation, but with the knowledge that we would get the same final result no matter which basis is used. (Essentially, we expand the identity operator into a complete set of projectors, and use each of those projectors to construct an individual history. Because we start with the identity operator and consider all the states, and because of the unitary nature of time evolution in quantum physics, when we combine the pre-probabilities for each history that leads to the same final outcome we will get the same amplitude for the final result no matter what basis we chose for those intermediate times, as long as we don't violate the consistency conditions, i.e. we use the same basis in each history for the same particle at the same time.)
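This basis-independence at intermediate times can be checked numerically (the dimension, states, and random unitaries below are arbitrary illustrative choices of mine): inserting a complete set of projectors at the intermediate time, in any orthonormal basis, and summing the per-history amplitudes reproduces the direct amplitude <f|U2 U1|i>.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """A random n x n unitary via QR decomposition with a phase fix."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    d = np.diag(r)
    return q * (d / abs(d))

n = 4
U1, U2 = random_unitary(n), random_unitary(n)   # evolution before/after the intermediate time
psi_i = np.zeros(n, dtype=complex); psi_i[0] = 1   # initial state
psi_f = np.zeros(n, dtype=complex); psi_f[-1] = 1  # final outcome of interest

direct = np.vdot(psi_f, U2 @ U1 @ psi_i)        # amplitude with no decomposition

for _ in range(3):                               # try three different intermediate bases
    B = random_unitary(n)                        # columns = intermediate basis vectors
    # Sum of per-history amplitudes <f|U2|k><k|U1|i> over the complete basis.
    total = sum(np.vdot(psi_f, U2 @ B[:, k]) * np.vdot(B[:, k], U1 @ psi_i)
                for k in range(n))
    assert np.isclose(total, direct)
```

The loop works because the columns of any unitary B satisfy the completeness relation, so summing the histories simply re-inserts the identity operator, exactly as described in the parenthesis above.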

Of course, if we should make a measurement at an intermediate time (in addition to the final measurement), then that would determine the basis we should use at that time, and if we include the result of that measurement it does affect the conditional probabilities. But once again, saying that a probability is conditional on a measurement and the result of that measurement doesn't strike me as being problematic.

I agree that there are some similarities between this and the pilot wave approach. As far as wave mechanics is concerned, possibly the only significant difference is the ontological status of the wavefunction: whether it is something physically real guiding the particle, or merely an expression of our uncertainty concerning the ("random") motion of the particle. When it comes to relativistic QFT it is a bit harder to say, and probably depends on precisely how one adapts pilot wave theory to cope with particle creation and annihilation. But I don't want to get into that debate again, at least not until I have time to think more deeply about our last discussion.

Incidentally, I am about to start work on my next post, which is about Quantum Bayesianism. I reserve the right to change my mind once I have actually done my research, but based on what I know about it before I start, my principal objection to that interpretation is likely to be that it does collapse into philosophical relativism.

**Re: Philosophical relativism**

Thanks for your reply, Dr. Cundy. To me it looks like your defence of CHI against the relativism objection makes it collapse into an anti-realist, instrumentalist interpretation. All it is doing is making predictions for certain events, without providing any real insight into the physical structure of the world, or an explanation of the causes of those events. I see no other way to interpret your focus on the "final state" and the framework being determined by the "larger system" that the particle interacts with. CHI, like Copenhagen, has not provided the ontological resources to say what counts as a larger system for that kind of analysis to apply - those notions have to be imported from outside the theory. And so CHI (though perhaps only in what you call "the more extreme position") still has not solved the measurement problem; it still suffers from what Bell called the "shifty split".

I think you try to avoid this by going at least part of the way towards one of my suggested resolutions, which is to say that there is an objectively correct basis or framework even at the "intermediate" times (though the whole notion of "intermediate" vs "final", like the notion of a "larger system", still seems too observer-dependent, unless the larger system is itself defined as part of the objectively correct framework) - we simply don't know what the preferred framework is. (At least, I think this is what you meant when you said that your response was similar to my way of avoiding relativism; I had to re-read your comment a couple of times to see the similarity!) This may be workable.

I have a couple quibbles with the quibbles that you have with what I wrote, but I don't want to draw out this discussion, so (for once?) I will refrain from voicing them. Regards!

**Platonism in Heisenberg**

Professor Nigel Cundy, I have a question:

A short time ago I decided to do some research on Werner Heisenberg, as I had seen on several occasions that quotes from him were used to argue that Aristotle is compatible with quantum mechanics. My surprise came when, in a biography of him called "Uncertainty: The Life and Science of Werner Heisenberg", I read the following:

"By the winter of 1955-1956, when Heisenberg delivered the Gifford Lectures on physics and philosophy at the University of St. Andrews in Scotland, he had already distinguished contemporary elementary particle physics from nineteenth-century atomism. For him, the latter was a form of repugnant mechanistic materialism derived from the atomic theories of Democritus and Leucippus; the former held closest affinity to the work of the sagacious Aristotle. The underlying matter field of Heisenberg’s unified field theory bore similarities to the notion of substance in Aristotelianism, an intermediate type of reality. Measuring the properties of elementary particles seemed closest to the Aristotelian notion of potentia, since the particle comes into being only in the act of measurement.

By the 1960s, particle qualities had succumbed to the symmetry properties of field equations, and Aristotle had succumbed to Plato. The Platonic atoms of his remembered youth were now fundamental. “The particles of modern physics are representations of symmetry groups and to that extent they resemble the symmetrical bodies of Plato’s philosophy,” he declared in one of his last publications. In his 1969 memoirs, written as a Platonic dialogue, he claimed that Platonism had dominated his thinking throughout his career. Toward the end of the memoir he wrote of his happy days in the old Urfeld cottage during the 1960s when — with Colonel Pash and the war far behind him — “we could once again meditate peacefully about the great questions Plato had once asked, questions that had perhaps found their answer in the contemporary physics of elementary particles,” a physics that found its meaning in the ancient idealism and transcendent philosophy of Plato".

Do you know anything about it?

**Heisenberg and Platonism**

My knowledge of Heisenberg's philosophical views basically extends only to his work "Physics and Philosophy," where he relates quantum states to Aristotelian potentia; that reflects the early period of his life you discuss above. I don't know much about the views he held later in life, so everything below is just my own opinion on the quote you provided, based on what I know about physics today.

I would partially agree with him in that quote. Symmetry is hugely important in contemporary physics, and that is something Plato would have argued for more than Aristotle. I don't think, however, that it is a choice between either Plato or Aristotle. Plato, as I understand it, and his successors even more so, favoured a mathematical approach to physics. Aristotle advocated an approach based on non-mathematical causal explanations. In practice, I think we need elements from both approaches, although obviously the application to quantum physics and calculus/group theory means that both Plato's and Aristotle's ideas need adaptation. There is no reason why you can't say that both Aristotle's potentia and a more Platonic symmetry are ideas with parallels in the philosophy of quantum physics.

But I don't entirely agree with him when he writes "The particles of modern physics are representations of symmetry groups." Without the context, I can't really judge what he was saying, but if this is the 1960s then the big idea in particle physics at the time was Gell-Mann's eightfold way, which I think was rapidly accepted by the particle physics community. The quark model was proposed in the mid-60s, but not widely accepted until QCD in the early 1970s, and the electroweak model was proposed in a form which actually worked in the late 1960s, so possibly a little too late to shape Heisenberg's change in views. QED was well established at the time, but its early formulation doesn't emphasise symmetry as much as the non-Abelian theories do. Symmetry is important to all these theories and models, but I am guessing that Heisenberg had the eightfold way in mind when he made that comment, as it is most in line with what is said. If I have that wrong, then please ignore the below (although a similar comment could be made for the symmetries behind QCD or the electroweak theory).

Basically, the eightfold way notes various relations between the hadrons known at the time, which have close affinities to the generators of the SU(2) (for the pions) and SU(3) (for the pions, kaons, and eta) symmetry groups. There is a similar categorisation for the baryons. The relationship is not exact, but it is pretty good. Today, we understand this in terms of the approximate chiral symmetry of QCD with either two flavours of quarks (for the pions), or three flavours (for the pions plus the other particles). But before QCD, it was not known why this pattern arose, so it would not have been unreasonable to say that the particles are representations of symmetry groups, and that the symmetry groups are fundamental and the particles the result of that. Today, I would say that the symmetry is important, but it reflects the different ways you can put together the quarks to make composite particles. So I wouldn't say that the particles are representations of the symmetry groups, but that the symmetry is helpful in explaining the properties of the composite particles. And we are talking about non-mechanistic particles as the fundamental principle again (as in the wave mechanics and QED which shaped his original position), so we are moving back towards Aristotle. So I can see why Heisenberg wrote what he did, and have some sympathy with it, but think that it was superseded by the developments of the 1970s.

**Interesting critique of consistent histories**

Hello again, I recently came across a paper by Fay Dowker and Adrian Kent, "On the Consistent Histories Approach to Quantum Mechanics", which offers a critique of the interpretation which you or your readers may find interesting.

https://arxiv.org/abs/gr-qc/9412067

The gist of it is that CHI fails to account for the appearance of the quasi-classical world we observe.


Am I the only one who has a hard time wrapping my head around what “the motion of particles is indeterminate” means?