The Quantum Thomist

Musings about quantum physics, classical philosophy, and the connection between the two.


The Philosophy of Quantum Physics 6: Quantum Bayesianism
Last modified on Sun Mar 17 19:40:04 2024


Introduction

I am having a look at different philosophical interpretations of quantum physics. This is the sixth post in the series. The first post gave a general introduction to quantum wave mechanics, and presented the Copenhagen interpretations. I have subsequently discussed spontaneous collapse, the Everett interpretation, Pilot Wave, and consistent histories interpretations. Today I intend to discuss the Quantum Bayesianism approach.

Like consistent histories, Quantum Bayesianism or QBism (as the more mature form is known) is a psi-epistemic approach, treating the wavefunction as a reflection of our knowledge. Unlike consistent histories, which uses the logical interpretation of probability, QBism uses the Bayesian interpretation (more precisely, one specific variation of the Bayesian interpretation). In this interpretation, probability is subjective and based on the individual knowledge of the person making the statement.

I should admit that before writing this post, my knowledge of QBism was rather superficial. Indeed, one of my reasons for writing this post was to give me an excuse to study it in more depth. After writing this post, my knowledge is probably only a little less superficial, but I will do my best to present it accurately. This is another of those interpretations which I don't see discussed much in the philosophical literature, which is a shame, because while I disagree with it, it does raise some interesting ideas.

The Stanford Encyclopedia has an article which serves as a decent introduction, together with a reasonable bibliography. There is a popular-level textbook available here, and numerous research papers. I will primarily base this post on two research papers, Quantum-Bayesian Coherence, by Fuchs and Schack, and an introduction to QBism by Fuchs, Mermin and Schack. I recommend that interested readers consult the original literature to get a better understanding of the interpretation.

I should add that QBism is something of a work in progress, and my understanding of it (such as it is) is based on a selection of research papers, which might now be out of date. I also found the papers I looked at in my research for this post somewhat disorganised and unclear. So it is quite possible that some of my questions about it have been answered, or that I have misunderstood some points. This is just my initial foray into the interpretation, and I am bound to have got some things wrong.

Summary

QBism is based on two ideas. The first is a particular mathematical reformulation of quantum theory, based on the density matrix rather than the quantum state. In the density matrix formulation, probabilities are calculated by the trace of the product of various operators. These traces involve the density matrix, a positive semi-definite Hermitian operator of trace 1. The simplest example of a density operator would be diagonal, with each term constructed from the projection operators used in the standard (amplitude) formulation of quantum physics, multiplied by the probability that the system is in that state. This reformulation thus avoids the use of amplitudes and wavefunctions: everything is expressed as a probability and in terms of the density matrix. There are various ways in which this set-up is expanded, and some of it gets a little complicated, but it is formally equivalent to the standard formulation in terms of wavefunctions and amplitudes, and will make exactly the same predictions. The density matrix formulation tends to be used in quantum information theory. Since I have never really studied QIT, this way of looking at quantum physics is new to me, so I will go into some mathematical detail below.

The second part of the interpretation is to interpret these probabilities in terms of a radical subjective Bayesian understanding of probability. Here probabilities are treated as in decision theory. They provide a guide to how we should respond to certain circumstances in the face of uncertainty. In other words, if we were to bet some money on the output of a quantum measurement, they tell us the optimal amount we ought to bet on each particular result to ensure that we (on average) break even against a perfect bookmaker, and win against a flawed one. The probabilities of quantum physics thus are just a means to guide our actions, and do not tell us anything concrete about the underlying reality. The motivation is from a statement by de Finetti: probabilities do not exist. By this he means that nothing intrinsically has a probability. A probability is not an existent object, and nor is it any property of any existent object. You can't point to anything physical and say "that's a probability," in the same way that you can point to something and say "That is an apple," or "That is red." It is a purely abstract concept, and, like all abstract concepts, only resides in someone's intellect. A probability is merely an expression of uncertainty, and therefore resides in the thinker rather than the object being thought about. The advocates of Quantum Bayesianism expand this with the slogan that quantum states do not exist, with the same underlying meaning. The quantum state merely exists in the mind of the theorist trying to understand measurement results. According to QBism, quantum physics is not a description of reality, but of one particular agent's knowledge. What that knowledge is of is a little more ambiguous in the literature, but some papers state it as actual and possible measurement results. Probability is a tool to tell the agent how to update that knowledge in the face of various stimuli, and how to act in the face of uncertainty.

This means, of course, that there are as many quantum states as there are people thinking about the underlying system. Each individual theorist updates their knowledge when he or she gets new information from the wider system. That wider system includes other researchers. So, to me, my colleague has exactly the same status as another measurement device or an individual quantum particle. Just a part of a general quantum system, and a source for information updates. Quantum physics, in this interpretation, is thus a single-user theory, existing only in an individual mind. This is, I think, akin to some of the more radical versions of empiricism.

What does this tell us about the underlying reality? Quantum Bayesians do not deny that this reality exists, but they are less certain that we can know very much concrete about it, beyond what is directly observed. Quantum physics happens only in our minds. The Schroedinger equation thus describes how our knowledge evolves in time, and belongs in the world of ideas rather than the world of reality.

Probability in terms of a gambler's bet

In the particular Bayesian interpretation used in QBism, probability is a degree of belief and is subjective. So, if there is a lottery ticket worth 1 unit of currency which wins if condition A is satisfied, then a rational person would at most pay P(A) for that ticket, where P(A) represents the probability assigned to the proposition by the person. There is nothing intrinsic to A that can help assign this probability; it depends solely on the judgement of the agent who holds it. Probability, in this view, is not concerned with objective relations between propositions but degrees of belief, and the rules governing probability are just those which ensure that the overall framework remains consistent. The means of performing calculations with probabilities belongs to formal logic. The inputs into those calculations, the Bayesian priors, just arise from our human experience.

This is backed up by a normative principle, that nobody should seek to gamble so as to incur a loss. This is known as Dutch-book coherence. This interpretation of probability theory was advocated by the likes of Ramsey and de Finetti.

The standard axioms of probability are that probabilities lie between 0 and 1, sum to 1, and that for non-overlapping A and B the probability of A or B is the probability of A added to the probability of B. These axioms can be derived from the definition and normative principle above.

We know, for example, that the probability must lie between 0 and 1. It cannot be greater than one, because then the agent will pay more for the ticket than he would receive on winning the lottery, and is guaranteed a loss. It cannot be less than zero, because then the agent will pay someone to take the ticket away from him (and consequently can't win), also guaranteeing a loss.

To establish the sum rule, suppose that there are three tickets. The first is worth 1 unit of currency if either A or B is true. The second is worth 1 unit of currency if A is true. The third is worth 1 unit of currency if B is true. And A and B cannot both be true. In this case, it is clear that the value of the first ticket to us is the same as the value of the second and the third together, as we have the same chance of winning. Consequently we will be willing to pay P(A) + P(B) in order to buy it, which, from the definition of probability, means that P(A∪B) = P(A) + P(B). (∩ is the symbol I use for "and", and ∪ the symbol I use for "or".)

Finally, we can consider the rule that the probabilities for all possible outcomes add up to 1. This again follows from the definition. If we consider the lottery ticket where we are guaranteed to win, i.e. we win if either A or B or C or … is satisfied, where the … indicates every other possible outcome, then clearly we will not incur a loss if we pay up to 1 unit of currency for this. So,

P(A∪B∪C∪…) = 1

From the sum rule, we know that,

P(A∪B∪C∪…) = P(A) + P(B) + P(C) + …

And together these show that the total sum of all the probabilities must be 1.

Then we need the definition of the conditional probability, this time for overlapping outcomes A and B (so they can both be true at the same time, but need not be).

P(A∩B) = P(B|A) P(A)

This can also be derived using the Dutch book. Imagine a ticket that is worth 1 unit if A and B are both satisfied, but returns its price if A does not occur. We define the price we are willing to pay for this as P(B|A). The price of this ticket must be the same as the price of two tickets: one where we win if A∩B is satisfied, and one which is worth P(B|A) if A does not occur. Putting this together (using ! to denote "not") we see that

P(B|A) = P(A∩B) + P(B|A)P(!A) = P(A∩B) + P(B|A)(1 - P(A))

From which we immediately derive the definition of the conditional probability.

So this definition of probability is formally equivalent to that derived from Kolmogorov's axioms. The mathematics is exactly the same. The underlying justification for what probability is, however, is very different: Kolmogorov's axioms are usually motivated by considering frequency distributions, while this interpretation of probability is related to whether or not we should place a bet. Indeed, the two main uses of probability are in making predictions for frequency distributions and in decision theory, and the two different derivations arise from which of these two uses we take as primary.
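To make the Dutch book argument concrete, here is a toy sketch in Python (the prices are invented for illustration, and not taken from the QBism literature):

    # A bettor prices tickets on three mutually exclusive, exhaustive outcomes.
    # If the prices do not sum to 1, a bookmaker can trade all three tickets at
    # the bettor's own prices and guarantee a profit.
    prices = {"A": 0.5, "B": 0.3, "C": 0.3}   # incoherent: the prices sum to 1.1

    stake = sum(prices.values())              # the bettor pays 1.1 in total
    for outcome in prices:                    # exactly one ticket pays out 1
        print(f"{outcome} occurs: bettor's net = {1.0 - stake:+.2f}")
    # Whatever happens, the bettor loses 0.1: a sure loss, i.e. a Dutch book.

    # The conditional ticket discussed above: it pays 1 if A and B both occur,
    # and refunds its price if A fails. Coherence forces its price to be
    # P(A and B)/P(A).
    P_A, P_A_and_B = 0.4, 0.1
    print(P_A_and_B / P_A)                    # 0.25, the unique coherent price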

Suppose there are two measurement events, a and b. a comes first, and b depends on it in some way. A represents an outcome for a, and B an outcome for b. And suppose that we are just interested in outcome B, and don't really care about what happened with a. It is fairly easy to show that

P(B) = ∑A P(A) P(B|A)

where the sum is over all possible outcomes for a. So, one can calculate this by calculating the probabilities for each A and each conditional probability.
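For instance, with invented numbers:

    # The law of total probability: sum over the outcomes of the first
    # measurement a, weighting the conditional probability for B by P(A).
    P = {"A1": 0.2, "A2": 0.8}             # probabilities for the outcomes of a
    P_B_given = {"A1": 0.7, "A2": 0.1}     # P(B|A) for each outcome A
    print(sum(P[A] * P_B_given[A] for A in P))   # 0.2*0.7 + 0.8*0.1 = 0.22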

Since we are ignoring the measurement a, we can also ask what would happen if that event had never actually taken place. In this case, we can also assign a probability to B, Q(B), which in general could be different from P(B), if we think that the fact that we made a measurement on a affects the result of b even if we don't know what that measurement result was.

Even if they are not the same, there is clearly going to be some relationship between P(B) and Q(B), as they just refer to different ways of getting to the same event. So, there might well be a function F which maps from one to the other.

Q(B) = F(P(B))

Obviously, if finding out that there was a measurement on a makes no difference to the probability assigned to event B, we can just set F(x) = x: introducing the distinction between P and Q just makes the discussion more general. F is obviously constrained by the rules that govern probability. One can also suppose that the situation is more complex, and that the function depends on both P(B) and the conditional probabilities P(B|A), so

Q(B) = F(P(B),P(B|A))

These two different mappings have different uses in QBism.

Framework of the system

The question then becomes how to apply this to quantum physics. The quantum system is described in terms of a d dimensional Hilbert space. In particular, QBism seems to focus on the density matrix formulation of quantum physics.

I tend not to focus much on density matrices, preferring to discuss things in terms of wavefunctions, quantum states and amplitudes, as that is the formulation I have always tended to use in my own work. Density matrices tend to be used to give a partial description of a quantum system, for example when the exact initial state is unknown and one wants to use a probability distribution over initial states.

A positive operator is a Hermitian operator whose eigenvalues are all greater or equal to 0. A density matrix (or density operator) is a positive operator whose eigenvalues add up to 1. If one of the eigenvalues of the density matrix is 1, then the operator is said to describe a pure state. Otherwise it is a mixed state.

If Pj are a complete set of projectors (so their sum gives the identity operator) and ρ a density matrix then the quantity pj = Tr(ρPj) can function as a probability. It lies between 0 and 1, and the sum over all the pj adds up to 1. In the simple example, where we write Pj = |j⟩⟨j| where |j⟩ represents the basis state for some observable, and we set ρ as a pure state corresponding to the particle wavefunction, ρ = |ψ⟩⟨ψ|, then pj is just the probability given by the Born rule.

This can be extended to cover cases where we are unsure of the initial state of the system. For example, suppose that the system could have started in a state |a⟩ with probability pa or a state |b⟩ with probability pb, where |a⟩ and |b⟩ are orthogonal. And suppose that T represents the time evolution operator between the initial and final times of the experiment. Then we can construct a density matrix of the form ρ = T(pa|a⟩⟨a| + pb|b⟩⟨b|)T†, and the probability that we finish by measuring the observable corresponding to the state |j⟩ is still just Tr(ρPj). The initial state of the density matrix is expressed in a diagonal basis, but there is no reason why it would remain so as we evolve in time, and we can rotate the initial state to any basis we choose. The density matrix formulation can also be used to study partial systems. For example, if the initial state is a product of two states (for example, we are considering two particles in the experiment), but we are only interested in what happens with one of the particles, we can take the partial trace over the parts of the system we are not interested in.
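Here is a minimal numerical illustration of these formulas (Python with numpy; the states, probabilities and unitary are invented examples of my own, not anything from the QBism papers):

    import numpy as np

    # Born-rule probabilities as traces, p_j = Tr(rho P_j).
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    P = [np.outer(k, k.conj()) for k in (ket0, ket1)]   # projectors |j><j|

    # Pure state rho = |psi><psi| reproduces the usual |<j|psi>|^2.
    psi = (ket0 + 1j * ket1) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    print([np.trace(rho @ Pj).real for Pj in P])        # [0.5, 0.5]

    # Mixed state: |a> with probability 0.3, |b> with probability 0.7,
    # evolved by a unitary T: rho = T (0.3|a><a| + 0.7|b><b|) T^dagger.
    T = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # e.g. a Hadamard
    rho_mixed = T @ (0.3 * P[0] + 0.7 * P[1]) @ T.conj().T
    print([np.trace(rho_mixed @ Pj).real for Pj in P])  # [0.5, 0.5] for this T
    print(np.trace(rho_mixed).real)                     # the trace remains 1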

As far as I can tell, QBism uses the density matrix formulation because it can avoid the need to use amplitudes or the bare wavefunctions during the calculation of the probability. Instead the density matrix itself is seen as representing the quantum state (which, recall, is interpreted as being merely a way of parametrising someone's knowledge).

We can select a set of projectors that form a complete basis for the operators on the complex Hilbert space that defines the quantum system. The projectors, recall, represent the states that correspond to possible measurement results. These are not necessarily just the measurement results which are consistent with each other, i.e. orthogonal states. So we are not just considering spin-z projectors (for example), but a minimal set of linearly independent projectors that spans the operator space. I will give an example later. In the most general case, the sum over the Pj is the identity, but the projectors themselves are not necessarily orthogonal, so Tr(PjPk) can be non-zero even when j and k are not equal to each other. But as the projectors form a complete basis, we can express the density matrix in terms of them,

ρ = ∑ αj Pj.

This makes the probabilities

pi = ∑j αj Tr(PjPi).

Defining the matrix M as Mij = Tr(PjPi) allows us to calculate the set of alphas in terms of the initial probabilities.

αi = ∑j (M⁻¹)ij pj

In this way one can construct an initial density matrix in terms of an initial probability distribution. Since the density matrix can be used to represent the quantum state, this means that we have every right to think of a quantum state as a probability distribution and nothing more.
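As a concrete illustration of this reconstruction, here is a short numerical sketch (the state and the projector set are my own choices; the four tetrahedral projectors anticipate the SIC example discussed below):

    import numpy as np

    # Pauli matrices, and four pure-state projectors whose Bloch vectors point
    # to the corners of a tetrahedron. The scaled operators P_j = Pi_j / 2 are
    # non-orthogonal but sum to the identity.
    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
    Pi = [0.5 * (I2 + b[0]*sx + b[1]*sy + b[2]*sz) for b in bloch]
    P = [0.5 * Pij for Pij in Pi]                 # sum(P) is the identity

    rho = 0.5 * (I2 + 0.3*sx - 0.4*sy + 0.5*sz)   # an arbitrary qubit state

    p = np.array([np.trace(rho @ Pj).real for Pj in P])        # p_j = Tr(rho P_j)
    M = np.array([[np.trace(P[j] @ P[i]).real for j in range(4)]
                  for i in range(4)])                          # M_ij = Tr(P_j P_i)
    alpha = np.linalg.solve(M, p)                              # alpha = M^-1 p
    rho_rebuilt = sum(a * Pj for a, Pj in zip(alpha, P))
    print(np.allclose(rho, rho_rebuilt), p.sum())              # True 1.0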

The calculations would be most convenient if M were diagonal, but this is not possible. We can, however, rotate into a basis where it is as close to diagonal as possible. For a d dimensional Hilbert space, and thus d² projectors, this occurs if we define the projectors as

Pj = (1/d) Πj = (1/d) |ψj⟩⟨ψj|

for some linearly independent but not orthogonal states |ψj⟩ which satisfy

Tr(Pj) = 1/d (equivalently, Tr(Πj) = 1)

Tr( Πj Πk) = (dδjk+1)/(d+1).

The proof is given in Fuchs's paper.

There is no reason to express the system in this basis other than convenience and that it simplifies various calculations. It is just one way of parametrising the quantum state. This type of parametrisation (obeying the conditions in the two equations above) is widely used in quantum information theory, and has come to be known as a "symmetric informationally complete" or "SIC" quantum measurement. Generally speaking, in QBism the initial state is expressed in a SIC parametrisation. After applying the time evolution operator, the density operator will generally change, and can no longer be expressed in terms of these projectors if we stick to the same overall basis. The density matrix will still be a positive matrix, but not necessarily representable in a SIC basis.

The QBism papers tend to focus on two sets of probabilities. First of all, there are the probabilities pj used to construct the initial density matrix. This they call the sky system. Then there are the probabilities qj which represent the final measurement outcome. This is known as the ground system. There is a straightforward mapping between these two sets of probabilities, as I will discuss below, involving the time evolution operator. The measurement on the ground, to measure the qj, is some potential measurement that could be performed in the laboratory. The sky measurement represents a conjecture concerning the initial state, expressed as a SIC. The mapping between p and q can be represented in terms of a conditional probability r(j|i). The probability distributions p and r represent how an agent would gamble if a conditional lottery were performed based on the situation in the sky. The probability distribution q represents how the agent would gamble based on the ground measurement if he were unaware of what was happening in the sky. In pure Bayesian reasoning, there is no necessity that these are related, but in quantum physics, as I will discuss below, there is a relation connecting them.

SICs -- quantum physics without amplitudes

[Image: an example of a SIC basis for a 3 dimensional Hilbert space.]

The image above is shamelessly taken from this paper mainly because I can't be bothered to type it all out in HTML. The basis satisfies the fundamental trace conditions outlined above, and thus can be used as a basis for a density matrix in a 3 dimensional Hilbert space. It has not been proven that there is such a basis in every dimension (as far as I know), but it does not seem an unreasonable conjecture.
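In fact, the tetrahedron construction in the earlier sketch is exactly the d = 2 case. Continuing that sketch (reusing Pi and I2 from it), the trace conditions can be checked directly:

    # Check the SIC trace conditions for the qubit tetrahedron projectors.
    d = 2
    for j in range(d * d):
        for k in range(d * d):
            expected = (d * (j == k) + 1) / (d + 1)    # (d delta_jk + 1)/(d+1)
            assert np.isclose(np.trace(Pi[j] @ Pi[k]).real, expected)
    assert np.allclose(sum(Pi) / d, I2)    # completeness: (1/d) sum_j Pi_j = identity
    print("qubit SIC conditions verified")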

The advantage of using a SIC basis is that it greatly simplifies many expressions. For example, the density matrix (for valid probabilities pj) can be written as


ρ = ∑jΠj((d+1)pj - 1/d)

Equally, the inner product of two quantum states can be expressed simply when using SICs. If the density matrix ρ is mapped to probabilities pi and the density matrix σ is mapped to qi, then

Tr ρ σ = d(d+1) ∑ipiqi - 1
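Both identities are easy to verify numerically, continuing the qubit sketch above (rho, p, Pi and d as already defined; sigma is a second arbitrary state of my own choosing):

    # Rebuild rho from its SIC probabilities, and check the inner product rule.
    sigma = 0.5 * (I2 - 0.2*sx + 0.1*sy + 0.6*sz)
    q = np.array([np.trace(sigma @ Pij).real / d for Pij in Pi])  # q_j = Tr(sigma Pi_j)/d
    rho_rebuilt = sum(((d + 1) * pj - 1/d) * Pij for pj, Pij in zip(p, Pi))
    assert np.allclose(rho, rho_rebuilt)
    assert np.isclose(np.trace(rho @ sigma).real, d * (d + 1) * np.dot(p, q) - 1)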

The next question is how to write the dynamics in terms of SICs. If T is the (unitary) time evolution operator, then the density operator evolves according to ρ' = TρT†. The probabilities then evolve according to

p'j = (1/d)∑i((d+1)pi - 1/d)Tr(TΠiT†Πj)

We then define the following quantity

t(j|i) = Tr(TΠiT†Πj)/d

and

p'j = (d+1)∑i pi t(j|i) - 1/d

If we interpret t(j|i) as a conditional probability, then this is close to the mathematics of a simple stochastic evolution.
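This too can be checked in the qubit example (the rotation chosen for T is arbitrary):

    # A unitary T, the matrix t(j|i), and the update rule for SIC probabilities.
    theta = 0.7
    T = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sx   # a rotation about x
    rho_out = T @ rho @ T.conj().T
    p_out = np.array([np.trace(rho_out @ Pij).real / d for Pij in Pi])
    t = np.array([[np.trace(T @ Pi[i] @ T.conj().T @ Pi[j]).real / d
                   for i in range(d * d)] for j in range(d * d)])   # t(j|i)
    assert np.allclose(p_out, (d + 1) * t @ p - 1 / d)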

The Born rule

For a set of projectors Aj corresponding to measurement outcomes for an observable A, the probabilities derived from the Born rule are expressed as

qj = Tr ρ Aj

Writing r(j|i) = Tr Πi Aj, the Born Rule becomes

qj = (d+1)∑i pi r(j|i) - 1.

This has obvious similarities to the equation for the dynamical evolution of the quantum state.

If we suppose wavefunction collapse, so that the density matrix transforms to the projector Πi (i.e. one of the SIC projectors) when the outcome corresponding to that projector occurs, then r(j|i) would be the conditional probability for outcome j (on the ground, i.e. the actual measurement) conditional on state i (as the hypothetical internal state of the system). With this, Dutch book coherence would then demand an assignment sj for the outcome on the ground

sj = ∑i pi r(j|i)

which implies that the probability q is

qj = (d+1) sj - 1

This creates bounds on the valid values of sj (and consequently p), but provides an example of a functional mapping between probabilities.

If the measurement on the ground does not transform the system into one of the Π projectors, then the relationship between the probabilities is slightly more complicated

qj = (d+1)∑i pi r(j|i) - (1/d) ∑i r(j|i).

If r(j|i) summed over i comes to one, this has exactly the same mathematical form as the unitary time evolution.
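Continuing the qubit sketch once more, both forms can be checked against a direct Born rule calculation (my choice of ground measurement: the computational basis):

    # Ground measurement: rank-1 projectors A_j = |j><j|.
    A = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
    r = np.array([[np.trace(Pi[i] @ A[j]).real for i in range(d * d)]
                  for j in range(len(A))])                    # r(j|i) = Tr(Pi_i A_j)
    q_born = np.array([np.trace(rho @ Aj).real for Aj in A])  # q_j = Tr(rho A_j)
    assert np.allclose(q_born, (d + 1) * r @ p - (1 / d) * r.sum(axis=1))
    assert np.allclose(r.sum(axis=1), d)   # rank-1 A_j, so the last term is just 1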

The above expression of the Born rule allows one to think of the Born rule as an addition to Dutch-book coherence. But why is this advantageous over introducing objective quantum states or objective probability distributions?

The expression for the probability qj for the measurement outcomes gives a restriction on the range of the probabilities. But this does not undermine the subjectivism of the probability, since it depends on the subjective probability pi and conditional probability r(j|i).

An alternative to subjective probability is found in the Principal Principle of David Lewis, which basically states that if an event has an objective chance of happening, then the subjective probability an agent should ascribe to that event is the same as the chance. Applied to quantum physics, the objective chance is determined by the quantum state, and in particular its overlap with whatever state represents the event we are considering. Beliefs are one thing, but the quantum state is a fact of nature that powers a quantum version of the Principal Principle. In this view quantum states are not just part of our human outlook, arising from the mind of whoever is studying them.

But the advocate of QBism cannot accept this. Just as Bayesian probability (all strands of it, I think) accepts the principle that there is no such thing as a probability, QBists also state that there is no such thing as the quantum state. There are potentially as many states for a given quantum system as there are agents.

The reason given for this arises from statistical practice. Quantum states are assigned on the basis of Bayesian priors, based on measurement, updating, calibration, computation, and other work. They only have definite values in the textbook exercises, where one starts with a given initial state as an assumption. But outside the textbook, two agents looking at the same data but with different prior beliefs will assign distinct quantum states to the same system. The basis for the assignment is always outside the formal mathematics.

With no objective way of deciding which quantum state should be chosen, the Born rule is just treated as an extension of the Dutch book coherence of subjective Bayesianism. The Born rule is just a guide to how we should act in the face of unknown measurement results to avoid inconsistencies or undesirable consequences.

So the QBism project provides a new way of thinking of quantum interference. In particular it can be seen as an addition to Bayesian coherence. It reformulates quantum physics in a way that never requires an amplitude, which brings the idea of interference closer to the underlying philosophy of probability. The Born rule is viewed as a relationship between probabilities, rather than something that sets probabilities from something more real than the probability itself. The Born rule is just an extension to the requirement of coherence. This expresses the old idea that unperformed measurements have no outcomes.

In QBism, the underlying probabilities are simply an expression of subjective knowledge. But knowledge of what? The glib answer is measurement results. This answer, of course, brings in the old problem of defining what is meant by measurement. The traditional reason why this is seen as a problem is that the concept of measurement is vague. If an understanding of measurement is needed to express the axioms of quantum physics, and every physical process is ultimately described by quantum physics, then since measurement is an example of a physical process we are clearly arguing in a circle if we make measurement fundamental to the understanding of quantum physics. A similar argument can be used if we are to introduce agents into the axioms of quantum theory. Fuchs agrees that the use of the word is problematic, but for the different reason that it is not within the scope of quantum physics to explain such things. In particular, the word measurement takes the focus away from the agent and places it on the external world.

Bayesian probability theory concerns how agents update their beliefs on the basis of new data. But the notions of agent and data are not derived from the theory; their definitions are not part of the subject matter of the theory. Measurement in QBism is analogous to new data acquisition, and Fuchs argues that QBism no more needs to define these concepts than Bayesian probability theory does. Quantum physics is, in this interpretation, not a theory of the world but a theory for the use of agents immersed in and interacting with the world. There may or may not be an external world independent of human minds (Fuchs thinks that there is), but, according to QBism, while quantum physics may be conditioned by this world, it is not a theory of it.

Outcomes of measurements are subjective to the agent as well. When an agent writes down degrees of belief for the outcomes of a measurement, this involves personal experiences concerning the external world.

The axioms of the theory describe the agent, the systems external to the agent, the actions on those systems, and the consequences of those actions. The formal structure is a theory of how the agent should organise their subjective Bayesian probabilities for the consequences of all the things happening around the agent. When considering a system, it is placed in a Hilbert space. Actions on the system are captured by positive operator valued measures on the Hilbert space. Quantum physics organises the beliefs by trying to find a density operator such that the conditional probability for a consequence, given an action, is related to the density operator and the appropriate measure. Unitary time evolution and other operations do not represent the underlying dynamics, but instead address the way in which the ideal agent's beliefs change over time and in consequence of any actions. This can be extended to cover multiple aspects of the system, by enlarging the Hilbert space. The notion of the action on a single aspect can be isolated with a particular choice of the measure. Resolving the consequence on one of the aspects can lead to updates of the degrees of belief for the other aspect.

Non-locality

How does this fit in with the non-locality that follows from EPR type experiments? I will base this section on this paper by Fuchs, Mermin and Schack. As in the previous sections, this section is intended as an abridged summary of their position, which I do not personally agree with. I will leave my own commentary until later.

QBism postulates that the concept of experience is primitive and fundamental to an understanding of science. A measurement in QBism is any action an agent takes to elicit a set of possible experiences. The measurement outcome is a particular experience used to update prior probabilities for subsequent measurements. These measurements need not, of course, be limited to those done in the laboratory. Any sensation will be used in the same way. A measurement does not reveal a pre-existing state of affairs. It is an action that creates a new experience for the agent. The collapse of the wave-function is merely the agent updating the state assignment on the basis of experience. The only phenomenon not treated in this way is one's own internal awareness of private experience. This means, of course, that other researchers and their reports of their own experiments are treated just as additional quantum systems, providing new information with which the agent updates his calculations.

Reality thus differs from one agent to another, and rests on what that particular agent experiences. This is constrained only by the fact that different agents can partially communicate their experiences to each other, and through this communication the experiences of one agent can be indirectly based on those of another.

It is often claimed that quantum theory is non-local. However, it would be better to say that various interpretations of quantum physics are non-local. The consistent histories and many worlds interpretations are cited as examples of interpretations which violate premises of Bell's (and related) theorems other than locality. And many physicists are, in general, troubled by the idea that quantum physics is non-local.

QBism avoids non-locality in a different way. Its purpose is to enable an agent to organise beliefs based on personal experience. That experience is restricted to the past light cone. There can be no non-local experiences influencing the agent. When an agent uses quantum physics to calculate correlations between two different experiences, those experiences cannot be outside the light cone. Quantum physics cannot assign correlations to space-like separated events, because an agent cannot experience these events.

Take, for example, Bohm's classic example of entanglement leading to measurement results which seem to imply some non-local correlation. A spin zero particle decays into two spin-half Fermions, which travel in opposite directions until they encounter detectors which measure their spin. Those measurements might be space-like separated. Because the two spin-half Fermions come from the same source their spins are correlated. If the spins are measured along the same axis, then they will have opposite spin. If they are measured along different axes, then they might be measured one as spin up and the other spin down, or they might both be measured with the same spin (albeit in different directions). But when we repeat the experiment a large enough number of times, we will notice various correlations between the frequency distributions of the measured value of the spin at each detector. This might in principle be explained by a classical hidden variables theory, where the spins for each direction are set at the moment of decay, but under some fairly mild assumptions, including that there are no non-local interactions influencing the measurements, that hidden variables theory predicts certain bounds on the correlations which are violated by both theory and experiment. Thus at least one of the premises of the calculation of those bounds is incorrect, and the obvious one to pick is that there is no non-local influence between the two measurements.
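The bounds in question are the Bell/CHSH inequalities, and the size of the quantum violation is easy to compute; here is a short sketch of my own, using the standard singlet correlation E(a,b) = -cos(a-b) and the usual optimal detector angles:

    import numpy as np

    # CHSH: for detector angles a, a', b, b', any local hidden-variable theory
    # requires |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2. The quantum
    # prediction for the singlet state is E(a,b) = -cos(a - b).
    E = lambda x, y: -np.cos(x - y)
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
    print(abs(S))   # 2*sqrt(2) = 2.83..., exceeding the classical bound of 2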

But QBism is a theory of agents and their beliefs rather than about measurements. The two detectors are space-like separated, so there must be two different agents performing each experiment. Agent A performs his experiment on his detector, and updates his beliefs accordingly. These beliefs will be both about his own particle and its entangled partner. Agent B performs the other experiment. But agent A does not know what the result of that other experiment is until B communicates it to him. It is only at that point that the second result enters into A's personal understanding of the quantum state. The experiment is repeated a large number of times; individual results become a frequency distribution; and we compare correlations. But for agent A there are no non-local information updates, and therefore no non-local correlations between the quantum states of the two particles, because the quantum states are purely subjective and purely in A's mind or notebook. Equally, the correlations only exist in A's mind. The state is only updated, and the correlations only arise, when A receives B's report on the measurement of the other particle. The same is true for agent B.

Those who claim that quantum physics is non-local must deny one of three fundamental precepts of QBism.

  1. A measurement outcome does not preexist the measurement, and is only created for an agent when it enters the experience of that agent.
  2. A probability of 1 only expresses an agent's certainty concerning an event; it is still a judgement. It does not imply the existence of an objective mechanism that brings about the event.
  3. Parameters which do not appear in quantum theory and are not experienced play no role in the interpretation of quantum physics.

With regards to the first point, the outcomes of the EPR experiment are usually assumed to be created when each particle hits its respective detector. But in QBism, this is false. The outcomes are created when each agent learns the result of the interaction between the particle and the detector, either by reading a dial themselves, or from a colleague's communication of their own reading of the dial. Each outcome is only valid for the agent who experiences it, and different agents can experience those outcomes in different orders; in the interim they will have different perceptions of reality. And in QBism there is nothing problematic about that.

When the detectors are aligned in the same direction, agent A will make his own measurement, and assign a probability of 1 to the result of the other experiment. But this probability is still a personal judgement, which exists only in agent A's head or notebook. It does not imply anything about an objective state of the second particle. In QBism there is no objective state of the particle, merely the experience and judgement of agents. Obviously B will make their own measurement, and then communicate their result to A, and A will then update his beliefs accordingly, which in this case will have no effect on the probability, which will remain 1. But communicating the result of one subjective judgement to update another agent's subjective judgement does not make things suddenly pop into an objective reality.

The derivation of Bell's inequalities depends on the assumption that there are certain hidden variables which influence the measurement outcome. These hidden variables are unknown, so we can only treat them according to a probability distribution. But these hidden variables violate the third of the three principles. They are not part of quantum theory; neither can they be observed; therefore they don't exist. Only those things which can be experienced or which are directly implied by the theory can exist, and if they can't be experienced then they don't exist.

Criticisms

General Criticisms

There are several criticisms of QBism. The most obvious one, in my view, is that it doesn't actually explain what we would want a philosophy of quantum physics to explain. That is because it is a theory of the knowledge of agents concerning measurement results or other observations which update their data. What we want to understand is why those results take the values that they do in terms of the fundamental physical beables. The response might be that QBism is a philosophy of quantum physics, and if quantum physics is about knowledge more than physical reality, then we ought not expect it to tell us about any reality that exists behind the scenes. And that is fair enough, but it is also unsatisfying. Quantum physics is the most fundamental theory we have, and the most successful at making predictions. If this theory, as the QBists want to claim, does not ultimately tell us about reality, then we have no hope of understanding reality. At least, not unless a more fundamental theory is developed which goes beyond experimental knowledge and breaks the theoretical formalism of quantum physics.

Indeed, given that QBism is a theory of our knowledge rather than reality, I don't see why it is mutually exclusive with those interpretations of quantum physics which concern the underlying reality. Is it possible, for example, to accept both QBism and the Everett interpretation, one as a theory of our knowledge of reality, and the second as a theory of what is happening behind the scenes? I can't see why not, although I would have to be more of an expert on both interpretations for my opinion to be authoritative.

It is also unsatisfying because while quantum physics might make predictions concerning measurements, we want to understand why the universe behaves in conformity to those predictions. What is it about the universe that makes it work in this way? Most interpretations of quantum physics at least attempt to provide an answer to this, but QBism does not.

Indeed, the dependence of QBism on agents raises another difficulty. Aside from God, rational agents have only been around for a small fraction of the universe's history, and exist in only a small fraction of its volume, at least as far as we are aware. The universe existed before the emergence of such agents, and, we have good reason for supposing, evolved according to the same laws as we observe today. Thus agents (with the exception of God) are not necessary for the evolution of the universe. Making the theory that describes the evolution of the universe depend on agents thus seems problematic, as it would not explain the universe before there were any such agents (with the exception, again, of God). It does not seem unreasonable to exclude God from this discussion, since God's knowledge is complete, while the Bayesian system is a parametrisation of incomplete knowledge. God would not need QBism to understand the universe.

The subjectivism of QBism is also uncomfortable to me, and seems to come close to a philosophical relativism. I find this problematic because the universe has all the appearances of being objective, and that is an assumption we all make when proceeding in life. A very strong argument is needed to overturn that common experience, and QBism is not it, given the existence of other interpretations which preserve objectivity.

QBism does not prove that its fundamental premise -- that results drawn from quantum physics are merely a matter of individual agents' beliefs -- is correct. At best it shows that it is merely a valid possibility. To prove it correct would mean showing all other interpretations -- including those which nobody has yet thought of -- wrong, or at least worse than QBism. Such a task seems rather difficult. And we have a strong a priori belief against the pure subjectivity of quantum physics.

I usually think of predictive calculations in physics as the result of three processes. Firstly there is a map from physical reality to a (partial) abstract representation of that reality. That gives us an initial state for the calculations. This might involve some measurement of that initial state, or a careful experimental preparation. Without inputting the correct initial state, we will get the wrong predictions. Then we have the calculation itself, in the abstract representation. Finally a map back from the representation to reality, where we can compare against experiment. To be valid, the representation needs to share certain features in common with reality: geometry, topology, underlying symmetries, and so on. We know that if we construct a theory with the wrong symmetries, then it will make the wrong predictions. This does strongly imply that we need an objective reality in order to correctly construct our representation of that reality. That objective reality constrains the symmetries which a successful theory needs to respect, otherwise we could arbitrarily pick whatever we feel like. Equally, for us to claim that the theory makes successful predictions, there needs to be some connection between the model and reality. This is provided by the first and final steps of this 3-stage process. It seems to me that QBism is only a theory of that middle step, namely the calculation. It does not explain why we should use the particular time evolution operator that we do. It does not explain why we need to construct the theory which we do, with the various symmetries and so on to parametrise it. It does not explain why we should update our beliefs (parametrised by quantum states) using some particular mathematical form of the Schroedinger evolution, if that update only represents the agent's beliefs with no link to objective states in the physical world. It does not explain why we need to start with that particular initial state. It does not even explain what it is about the objective world that measurements measure. Of course, the QBists object to the word "measurement" precisely because it implies that it involves an interaction that tries to determine pre-existent properties of an observer-independent world, while they see it more as one side of a joint agent-object event. But without such an interaction, QBism is only a theory of our ideas, and ideas without testing or application are meaningless (and usually misleading and dangerous).

I am also concerned that QBism has a serious mind-body problem. In QBism, there is a duality between the agent and the object. In some other interpretations the agent becomes entangled with the object when a measurement is taken, but in QBism there is a chasm between them. Quantum physics takes place in the mind of the agent, with beliefs updated through experiences in the external world. This requires a division between the mind of the agent and the external world. In particular, the mind of the agent is not governed by the rules of QBism, but merely processes the information. But the normal expectation is that our mental activity arises from our brain activity, which is ultimately expressible in terms of quantum processes. So the agent is themselves a quantum being, and explicable by quantum processes. This seems to undermine the required duality between the agent and the world of quantum objects. I don't think the moderate dualism of Thomist philosophy is sufficient to overcome this problem, which means that the QBist would have to be committed either to Cartesian dualism, or to some form of idealism. Both of these philosophies have notable problems.

A further objection is that QBism states that quantum states have no physical existence, but serve only as a means to express subjective beliefs. However, it reintroduces those quantum states in order to update probabilities. Why then does it do so? If those states are linked to something in the real world, then the fundamental premise of QBism is false. If they are not, and are only an expression of the agent's beliefs, then the agent is only updating one subjective belief using another (which, since it only exists in the agent's head, could be anything that satisfies the bare requirements of consistency), to give something which can only be regarded as meaningless.

In QBism, there are as many quantum states as there are agents. However, quantum state preparation is usually said to produce a unique objective state. In the logical interpretation of probability, this is not an issue. That interpretation is not so radical as to say that there are no quantum states in reality; it says that there are such states, but that we do not always know what they are, and thus can only calculate conditional probabilities contingent on the initial state and the laws of physics. We then substitute in the actual initial state (or better, a probability distribution that mirrors the frequency of states that arises from a preparation method conditional on various assumptions about the preparation mechanism), and that gives us a prediction for the actual distribution of states we will get when we repeat the experiment a sufficiently large number of times. This presupposes that there is an actual, observer independent, state. It also presupposes various other assumptions, which can be tested and refined by comparing the results of the experiment against the predictions. But the radical Bayesianism that lies behind QBism forbids us this initial actual state. So then what does the state that we input into the calculation represent? Our subjective knowledge? But then our subjective knowledge of what?

The response to this is that the preparation device is itself a quantum device, and thus anything that spews out of it can only be subject to the subjective rules of quantum physics. But this is just arguing in a circle. It does not show anything. People who assert the objectivity of the prepared quantum state do not deny that the preparation device is ultimately subject to the laws of quantum physics. But at some point our ideas must confront something outside those ideas against which they can be applied. Suppose, for example, we want to prepare a stream of spin up particles along some particular axis. We would input the stream of those particles into a device that causes it to decohere into spin up and spin down states along the right axis, and then apply an appropriate magnetic field to separate the spin up and spin down particles. We discard the spin down particles, and we have our stream of spin up particles to use as the basis of the experiment. All the steps here are ultimately quantum processes, but that does not change that the end result is a stream of particles which are necessarily all in the same state. The only subjectivity is that an observer might not know which way the magnets are aligned -- but that is classical ignorance, not the sort of ignorance which is modelled by quantum physics and QBism. And the ignorance of someone outside the laboratory does not change the spin of the particles within it.

If quantum states are just subjective expressions of belief, then why would we want to update them in time according to the Schroedinger evolution? In the standard Bayesian picture, beliefs are only updated when new information is received; but no new information is involved in a temporal update. Again, in the logical interpretation, where there are objective states in the real world which have definite possibilities of evolving in particular ways over time, the quantum state in the notebook is updated according to the Schroedinger evolution because we want it to keep in line with the possible values of the objective states. But in QBism there is no underlying objective state, so this approach would not work. So what does the temporal evolution of the quantum state represent? The QBist approach would be to move the Schroedinger evolution into the realm of personal beliefs. It describes how we should update those beliefs over time. But equally the underlying physical particle also changes through time. So how are these changes related to those of the Schroedinger evolution?

Then there is the problem with entangled particles. The QBist response to this is to say that there is no non-locality in quantum physics because it is a theory of how agents should update their beliefs, and the information gathering mechanisms of particular agents are always local. But this only sidesteps rather than resolves the problem. The problem is why there are these correlations over a distance, given that we can rule out a classical hidden variables theory where every possible measurement outcome is determined at the moment the entangled particles separate. QBism is, at its heart, a mathematical reformulation of quantum physics which leads to a different natural interpretation compared to other re-formulations. This is similar to what is done in the Pilot-wave, Everett and Consistent Histories approaches. But my fear is that in side-stepping the question of locality, QBism simply argues in a circle. Indeed, the approach is worse than that of Griffiths for Consistent Histories. Griffiths found an additional assumption in the derivation of Bell's and related inequalities which is violated by quantum physics. I was not convinced that this explained how we get the non-local correlations. The QBism approach does not even provide such an analysis. Even if it just refers to the beliefs of an agent, that agent still needs an explanation of why the two entangled particles have a correlated spin. Even if they don't find out about the second measurement until a later time, it is still performed at a time when information could not have passed between the two detectors. There still needs to be an explanation of why there is that correlation. Measurement in QBism relates to the updating of the agent's beliefs when there is an object-agent interaction. In the case of the EPR experiment, we have agents A and B who perform two measurements on entangled particles. Agent A receives a communication from agent B, which states "at this time I performed this measurement and got this result." So there is an agent-object interaction between agent A and the communication; an agent-object interaction between the communication and agent B, and an object-agent interaction between agent B and his measurement device and ultimately the second entangled particle. We can collapse the intermediate steps of this chain (which we suppose to perfectly preserve the information), and treat this as an object-agent interaction between agent A and the second measurement. Saying that agent A will predict the correlation between the two results because that's what the rules of Dutch book coherence tell him to bet on simply states that there is the correlation because quantum physics predicts that there will be the correlation. It does not explain what it is about the entangled particles that makes the quantum physics prediction agree with the measurements.

Why a subjective Bayesianism

There are various different forms of a Bayesian interpretation of probability. All treat probability as an expression of knowledge of a system rather than something which exists in its own right. The maxim is that probability does not exist. The natural consequence of this maxim is that, since the wavefunction is directly related to the probability, the wavefunction does not exist. It is merely an expression of knowledge. This maxim strikes me as being obviously true. We do not observe probabilities subsisting in any physical being, either classical or quantum. Measurement outcomes take on one definite value or another. Probabilities can be compared against a frequency distribution in the limit of an infinite number of samples, but that frequency distribution also doesn't subsist in any being: it too is merely something scribbled in a notebook, this time belonging to an experimental rather than a theoretical scientist.

This strikes me as a major problem for almost all psi-ontic interpretations of quantum physics. The pilot wave interpretation avoids it, because in the pilot wave interpretation the uncertainty arises from the lack of knowledge of the hidden variables rather than the Born rule magically changing the wavefunction. Possibly the Everett interpretation also evades the objection, although that has other issues in the interpretation of probability. But a wavefunction collapse interpretation has to treat the wavefunction as an existent feature of the physical being that the wavefunction represents, and consequently the probability distribution must also be real.

But one can take Bayesianism in different directions, and I tend to focus on two of them. Firstly, there is subjective Bayesianism, which is the idea adopted in QBism, where probability belongs to each agent. Then, there is the logical interpretation, where all probability is conditional on various assumptions, and is used to predict frequency distributions. This is objective, because the final result just depends on those assumptions and not on any particular beliefs held by any agent. All can agree that should one particular set of assumptions be true (an objective statement made by comparison against reality), regardless of whether the agent doing the calculations believes them to be true absolutely or even if he knows them to be false, then the calculated frequency distribution would also be correct (another objective statement which can be confirmed by comparing against reality). The logical interpretation requires that there is an objective reality, and applied to quantum physics it will say that the physical theory tells us something about that reality. The subjective interpretation, on the other hand, is only about the experiences of one particular agent and does not make any statement about objective reality. Indeed it does not require that there is any objective reality at the basis of the experiences.

To my mind, the logical Bayesian interpretation is a natural fit to quantum physics. So why do the QBists reject it? Unfortunately, that's not something I have been able to find in my search through the literature. If someone knows the answer, I would be glad to hear it.

Pusey, Barrett and Rudolph

I also want to discuss a paper by Pusey, Barrett, and Rudolph (PBR), which attempts to show that any model where the quantum state merely represents information about an underlying physical state of the system contradicts the predictions of quantum theory. (See also this paper.) It references a QBism paper as one of those which adopt the view it is attacking, so it is intended as an attack against QBism among others. I think it also needs to be answered by those advocating a consistent histories approach, or relational quantum mechanics (among others).

The argument is based on three assumptions. The first is that the system has a real physical state, objective and independent of the observer. The second is that systems which are prepared independently have independent physical states. The third is that a measurement device only responds to the properties of the particular system being measured.

The state is completely specified by a number of parameters. Other physical properties are either fixed constants (which the paper isn't concerned with), or functions of these parameters. Sometimes the exact physical state of the particle might be uncertain, but there is a well-defined probability distribution, μ, over the parameters which represents a state of knowledge.

In an epistemic interpretation of the wavefunction, the wavefunction represents a state of knowledge of a physical state λ. The physical state need not be fixed uniquely by the preparation, so our knowledge of the physical state is parametrised by a probability distribution μψ(λ). So, for example, if our knowledge of the state is described by some wavefunction ψ, then μψ(λ) represents a distribution over all the physical states that are represented by the same wavefunction ψ. If there are two different wavefunctions ψ and ψ', then there will be two different probability distributions μψ(λ) and μψ'(λ). These distributions will be zero for some values of λ and non-zero for others. If, for every possible physical state λ, there is only one wavefunction for which the probability distribution is non-zero, then the wavefunctions can represent a physical property. In this case, they represent something which is physically real, so we have a psi-ontic interpretation of the wavefunction. If, on the other hand, there is a λ such that both μψ(λ) and μψ'(λ) are non-zero, then the wavefunctions are not physical properties as defined above, which implies a psi-epistemic interpretation.

PBR's result is that for any two distinct quantum states ψ and ψ', if the corresponding distributions μψ(λ) and μψ'(λ) overlap, then there is a contradiction with the predictions of quantum theory. They present several arguments of varying generality. I'll just present the simplest case, since it conveys the basic idea well enough.

Suppose that we prepare systems in one of the two quantum states


|ψ0⟩ = |0⟩

|ψ1⟩ = (|0⟩ + |1⟩)/√2 = |+⟩

And suppose that the corresponding distributions μψ0(λ) and μψ1(λ) overlap. Then there exists a q > 0 such that, for either preparation, the physical state λ lies in the overlap region with probability at least q.

We now prepare two systems whose physical states are uncorrelated. Each system can be prepared such that its quantum state is either |ψ0⟩ or |ψ1⟩. Since the preparations are independent, there is a probability of at least q × q = q² that both physical states lie in the overlap region. In that case, the joint physical state is compatible with any of the four possible quantum states,


|0⟩ ⊗ |0⟩

|0⟩ ⊗ |+⟩

|+⟩ ⊗ |0⟩

|+⟩ ⊗ |+⟩

The two systems are brought together and measured. The measurement projects onto the four orthogonal states


|s1⟩ = (|0⟩ ⊗ |1⟩ + |1⟩ ⊗ |0⟩)/√2

|s2⟩ = (|0⟩ ⊗ |-⟩ + |1⟩ ⊗ |+⟩)/√2

|s3⟩ = (|+⟩ ⊗ |1⟩ + |-⟩ ⊗ |0⟩)/√2

|s4⟩ = (|+⟩ ⊗ |-⟩ + |-⟩ ⊗ |+⟩)/√2

with


|-⟩ = (|0⟩ - |1⟩)/√2

Each of these states is orthogonal to one of the four possible input states, and so has probability zero for that preparation. This means that at least q² of the time, the physical state is compatible with all four preparation methods, and on these occasions the measuring device risks giving an outcome that quantum theory predicts should occur with probability zero. In other words, whichever outcome the measurement gives, it rules out one of the four possible preparations. But if the underlying parameters are in the overlap region, then there is a non-zero probability that the system was prepared in any of the four possible states, including the one ruled out by the measurement result. This is a clear contradiction, suggesting that one of the assumptions behind the argument must be false. Two assumptions were stated: firstly that the two systems could be prepared independently of each other, and secondly that the wavefunction is epistemic. The question, of course, is whether there are any further hidden assumptions.
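The zero-probability structure driving the argument is easy to verify numerically. Here is a short sketch (again my own illustration in Python, not code from the PBR paper) which constructs the four product preparations and the four entangled measurement states, and confirms that each measurement outcome has zero Born-rule probability for exactly one of the preparations:

```python
import numpy as np

# Single-qubit states in the computational basis (all real here,
# so plain dot products suffice for inner products).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2)   # |+>
ketm = (ket0 - ket1) / np.sqrt(2)   # |->

# The four possible joint preparations.
preps = {
    "|0>|0>": np.kron(ket0, ket0),
    "|0>|+>": np.kron(ket0, ketp),
    "|+>|0>": np.kron(ketp, ket0),
    "|+>|+>": np.kron(ketp, ketp),
}

# The entangled measurement basis |s1>..|s4>.
s = [
    (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    (np.kron(ket0, ketm) + np.kron(ket1, ketp)) / np.sqrt(2),
    (np.kron(ketp, ket1) + np.kron(ketm, ket0)) / np.sqrt(2),
    (np.kron(ketp, ketm) + np.kron(ketm, ketp)) / np.sqrt(2),
]

# Born-rule probabilities |<s_i|preparation>|^2 for each preparation.
for name, psi in preps.items():
    probs = [abs(np.dot(si, psi))**2 for si in s]
    print(name, np.round(probs, 3))
```

Each row of the output contains exactly one zero, so whichever outcome occurs rules out one of the four preparations, which is the contradiction described above.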

As stated, there are extensions to this argument which cover general overlapping states and also allow for experimental noise.

So the claim of this paper is that ψ-epistemic models are impossible.

So how might we avoid the conclusion? Firstly, one can take the easy solution and suppose that the two probability distributions do not overlap. This leads to a theory where the wavefunction represents a physical property of the system; the psi-ontic models obviously fit into this category. However, the requirement of not overlapping only concerns the initial state of the system as it is prepared. There are psi-epistemic models which affirm a one-to-one mapping between the physical parameters and the quantum state for the initial state (i.e. we start by either knowing the initial state with certainty, or assuming it as a premise for the calculated conditional probability), but as the system evolves in time, a divergence might open up between the quantum state (expressing our knowledge) and the actual physical state of the particle. So I think these would also escape the PBR theorem.

Secondly, one can suppose that the wavefunction does not provide knowledge of an underlying physical state, but only knowledge of, for example, possible measurement outcomes. This undermines one of the assumptions behind the argument. This is, I think, the approach that would be taken by QBism. But then the question of what underlying beables give rise to those measurement outcomes becomes more acute: there must be a reason why that particular frequency distribution for the measurement results is produced and not another.

In the consistent histories approach, most apparent paradoxes fail because at some point they violate the single framework rule, or because they treat probabilities as properties of single particles rather than as predictors for the results after the experiment has been repeated numerous times. I think the PBR argument fails on both counts.

Firstly, the quantum state in consistent histories does correspond to the possible states of the particle. So, for the initialisation of the PBR experiment, there is a one-to-one correspondence between the quantum state and the physical state of the particle. The uncertainty then arises when the system evolves in time, or a measurement is performed. For a single particle, we cannot predict what happens over time or after a measurement, except to rule out those histories which are forbidden by, for example, conservation rules. So when the PBR argument talks about a certain probability that the underlying parameters are in a certain region compatible with all four states, this is interpreted to mean that a certain proportion of the numerous runs of the experiment are in that range. Likewise, when (in the more sophisticated versions of the argument) they discuss the probability of the final measurement result being below a certain amount, they mean that a certain proportion of the numerous runs of the experiment are below that amount. But these refer to the ensemble as a whole, not to individual runs of the experiment. The consistent historian will have no difficulty in saying that if, for a particular run of the experiment, the measurement result forbids a particular initial state (i.e. there are no histories with a non-zero pre-probability containing both the initial state and the measurement result), then the particle on that particular run did not start in that initial state.

Also, the four different initial states are drawn from two incompatible bases, so to attempt to calculate a single probability from all of these states violates the single framework rule. To calculate probabilities, one would have to use two distinct calculations per particle, one in the (|0⟩,|1⟩) basis and the other in the (|+⟩,|-⟩) basis, and either expand the measurement operators and initial states into one basis or the other, or treat these as two entirely separate calculations of the probability, using whichever basis is appropriate for the particular initial state which was generated. The probabilities for the two calculations can then be combined at the end. I don't see that this approach would lead to consistent histories predicting a non-zero probability for a measurement result that cannot occur.
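To illustrate the point about incompatible bases, here is a minimal sketch (my own, using the same conventions as the previous snippet) showing that the projectors onto |0⟩ and |+⟩ do not commute, which is why the single framework rule forbids combining propositions about the two preparation bases in a single probability calculation:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)

# Projectors onto |0> and |+>.
P0 = np.outer(ket0, ket0)
Pp = np.outer(ketp, ketp)

# The commutator is non-zero, so propositions about the (|0>,|1>)
# basis and the (|+>,|->) basis belong to incompatible frameworks.
print(P0 @ Pp - Pp @ P0)
```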

Conclusion

So what are my own personal views on this interpretation?

Firstly, I think there are some good things about it. I agree (somewhat controversially) that a generally psi-epistemic approach is superior to a psi-ontic way of understanding the uncertainties in quantum physics. Treating the indeterminism of quantum physics as fundamental to the physical world, while making our parametrisation of it in the theory a reflection of our lack of knowledge rather than something built into the physical system, is more in line with the best interpretations of probability, and the measurement problem simply ceases to be a problem. (You still have problems from non-local influences between entangled particles, but these are no less of a problem in most psi-ontic interpretations.) The arguments which are usually cited to rule out psi-epistemic models -- I referenced the PBR paper above -- have enough loopholes that they can be evaded.

I was also quite intrigued by the mathematical reformulation of quantum physics used in QBism. It is always good to have a new way to look at the mathematics, and each one can open up new insights. I have wondered whether other psi-epistemic interpretations, such as consistent histories, could be expressed in this formulation. That would avoid the need to treat the amplitude, or pre-probability, as a statement of our uncertainty (a concept which has caused some discomfort), and instead one would just have conditional probabilities. In my own work, I will keep to the amplitude formulation, as it is what I am more familiar with, but that this alternative picture exists is certainly interesting. I am not sure how easily one could adapt this formulation for the various psi-ontic interpretations.

However, I find the subjectivism of QBism less appealing. To my mind, the logical interpretation of probability is far more intuitively appealing than radical subjective Bayesianism. In the logical interpretation, a probability is treated as a predictor for a frequency distribution in the face of unknown causes, and is always conditional on various assumptions. The assumptions can come from known experimental observation, or from a model that describes how to parametrise the unknown causes. If two people agree on the assumptions, they will come to the same frequency distribution, and thus this is an objective form of Bayesianism. If the assumptions correspond to reality, then the predicted frequency distribution will match the observations. The logical interpretation, used in consistent histories and related interpretations, strikes me as being in line with what we do in science. We form hypotheses concerning the laws of physics (i.e. the model parametrising the unknown causes), make a prediction based on those hypotheses, set up an initial state, let it run, measure the result, and compare it to our prediction to fine-tune the model. The initial measurements, final measurements, and predictions are all objective, since none of them rely on the subjective beliefs of a researcher. One can make predictions using a theory one believes to be false.
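As a toy illustration of this reading (my own sketch; the simulation simply stands in for repeated runs of a real experiment): the probability conditional on the assumptions predicts a frequency distribution, which anyone can check against the data regardless of what they personally believe about those assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Assumption (the model): a qubit prepared in |+> and measured in the
# computational basis, so the Born rule predicts P(0) = P(1) = 0.5.
predicted = np.array([0.5, 0.5])

# Repeated runs of the experiment (simulated here); the observed
# frequencies converge on the predicted distribution, whatever the
# experimenter happens to believe about the model.
outcomes = rng.choice([0, 1], size=10_000, p=predicted)
observed = np.bincount(outcomes, minlength=2) / outcomes.size
print("predicted:", predicted, "observed:", observed)
```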

On the other hand, the radical subjective interpretation does not correspond to this methodology. There is no objective initial state, because the initial state is a quantum state, and in this interpretation quantum states do not exist objectively. There is no prediction of a frequency distribution for final states, because those final states do not exist either. There is no known objective fact about the world, because quantum states are single-user, and depend on that user's beliefs.

And, of course, there is the question of what the knowledge is knowledge of. Measurement results? Then what is measured in a measurement? The answer given is that a measurement is merely an update of data for the agent. Fuchs believes that QBism need not go further than this, any more than the underlying Bayesian interpretation of probability does. I see this as an evasion rather than an answer: one cannot escape the question by appealing to the subjective Bayesian interpretation of probability if that interpretation itself does not resolve the problem.

And this leads to the most fundamental problem. What I want to understand in the philosophy of quantum physics is what is the nature of reality if quantum physics is correct. QBism doesn't answer that question. Indeed it states that it cannot be answered, as quantum physics is a theory that describes agents rather than reality. But ultimately there has to be something which describes why we have these experiences; there must be something explaining the data with which the agent updates his knowledge. The data should be able to give us insight into the underlying reality (even if we are not yet certain about the correct way to interpret that data). QBism strikes me as no better than a Humean empiricism, or a Hegelian idealism, and has the same problems as those philosophies, which evade rather than answer the questions of the philosophy of physics.

Reader Comments:

1. Michael Brazier
Posted at 05:59:20 Wednesday March 27 2024

Is QBism really Bayesian?

The Bayesian interpretation of probability does indeed describe it as the degree of belief an agent has, or should have, that a proposition is true. But while an agent's beliefs are subjective, existing only in his mind, the propositions he believes are about the outer world, and are assumed to be definitely true or false in mind-independent reality. The QBists seem to have lost sight of that.

Quantum theory is radically a theory of conditional probability - given that the particles of interest are in a certain state at an initial time, the math gives the probability that, upon measurement, they will be found to have the property of interest, at either that time or after time is allowed to pass. On the frequentist definition this probability approximates the fraction of ensembles like the one of interest that do turn out to have that property when measured (and the more ensembles you measure, the closer the approximation will be.) On the Bayesian definition the probability is the experimenter's expectation, before measuring, that the ensemble in front of him will have that property if he does measure it. Both definitions, however, assign a parameter "probability" to a proposition - this ensemble will have this property - about the actual particles, not the state of the experimenter's mind. Doubting that the particles exist is too subjective even for the most radical Bayesian.

QBism, I think, is perspectivist, following Nietzsche more than either Hume or Hegel. Nietzsche wasn't relativist enough to declare that objective facts don't exist, but he did say that we couldn't really learn them. Similarly, QBists don't quite deny that particles exist, but by placing the theory wholly within the minds of agents they make the real state of the particles unknowable.

2. Nigel Cundy
Posted at 18:20:26 Wednesday March 27 2024

Re: Is QBism really Bayesian?

I think it is, but only because the Bayesian interpretation of probability is very broad. The radical subjective wing which the QBists follow is certainly part of that movement, although not the only part. The logical/conditional view of probability is another wing, and I think more reasonable. But I agree with the rest of what you say, and don't think QBism is the way forward because it is too far from the real world.

3. Joe
Posted at 13:34:01 Friday March 29 2024



I was looking forward to the continuation of this series!

I believe you mentioned it briefly in the introduction to the first part, but I was hoping to know if you could expand some more on the idea that we would expect the correct philosophical interpretation of QM to be representative of reality. It isn't obvious to me that we should expect that to be the case. Since we know that QM is not a complete and perfect model, how do we know that the philosophical interpretation doesn't hinge on a part of the model that is not representative of the underlying reality? Or put another way, since we know QM isn't complete and perfect, how do we know that reality "looks enough like" the QM model to make its philosophical interpretation meaningful? Is there a principled reason why we can say that QM is a good enough model that we can draw meaningful philosophical conclusions that wouldn't potentially be overturned by the discovery of a better model in the same way that QM undermines some of the philosophical interpretations of classical mechanics?

4. Nigel Cundy
Posted at 23:48:20 Saturday March 30 2024

Overturning Quantum Physics

You ask whether it is possible that a theory of quantum gravity would undermine any philosophy of quantum physics. And the answer is that it is entirely possible, and a risk that all who work in this field take. If that is the case, then we would have to tear up all this work, and start again from scratch. However, there are a few mitigating factors that mean thinking about the philosophy of quantum physics is still worthwhile:

1) It is still important to think about the philosophy of physics. In part because it might give insights into quantum gravity -- after all, the first three revolutions in physics, Aristotle's, Newton's, and Einstein's relativity, were led by philosophy as well as by experiment. That wasn't the case with quantum physics, which was driven entirely by experiment, but we are now at a point where it is quite possible that we might not get clues from experiment. Certainly the energy scale where quantum gravity effects become unavoidable, the Planck scale, is way out of our reach. Maybe some clue will turn up at lower energy scales (or from astrophysics) -- I certainly hope so -- but maybe it won't. In part because insights from the philosophy of physics should inform the rest of philosophy. In part because the scientific method is underpinned by philosophical assumptions, so we always have to think about the philosophy of physics to protect us from wandering into some incoherent fantasy land.

We also (in my view obviously) need to base our philosophy of physics on the best established physics we currently have, which is a (minimally extended) standard model of particle physics, coupled with general relativity, the big bang model, and inflation. In particular, having a well developed philosophy of quantum physics would protect us from speculation from badly developed philosophies of quantum physics.

2) The move from special relativity to general relativity didn't significantly change the underlying philosophy. There are a few minor changes in how we think about space and time, but much smaller than the shift from Euclidean to Minkowski space time (the only important philosophical idea I can think of from GR being the proof in classical general relativity that the universe has a beginning in time). It is not unreasonable to suppose the change from quantum field theory in Minkowski space time to quantum gravity will also only require minor modifications to the philosophy. In which case, any work we do now will not be wasted.

3) Quantum gravity would reduce to QFT in Minkowski space time in the limit of low space time curvature. QFT would still be an interesting theory in its own right, so thinking about its philosophy would also be interesting. And, because it will have this limit, we do have constraints on what the quantum gravity theory could look like. It is very unlikely that the most philosophically interesting features of QFT -- its non-local correlations of events, (apparent) indeterminism, creation and annihilation of particles, the exclusion principle, and so on -- would disappear in the full theory of quantum gravity. And understanding QFT in Minkowski space time is still an interesting intellectual exercise, quite apart from any practical use.

4) It may well be that a workable philosophy of QFT will help inform a workable philosophy of quantum gravity, even if it has to be modified. If so, then the work we do today will not be wasted.

So, yes, there is a risk that we will have to tear all this up and start again. But, in my view, the risk that there will be nothing salvageable from the philosophy of QFT is very low. On the other hand, either abandoning the philosophy of physics, or letting the field be dominated by ideas based on a classical physics we know for certain is false, for a few decades until we have an established theory of quantum gravity would be an even greater error. Firstly, because philosophy of physics (with its grounding in reality) is needed to correct the many bad ideas circulating around the philosophy departments. It is a sub-discipline that is already not given enough attention, and weakening it further will just make that worse. Secondly, to be a good philosopher of physics requires a particular skillset, as you need to understand both philosophy, and the physics (which involves some high-level mathematics) to a high level. There aren't that many physicists interested in philosophy (or who have the time and patience to learn enough about it), and not that many philosophers who have the desire or capability to understand QFT. Those few physicists and philosophers who are able to research into it need to continue, if nothing else to pass down that skillset to the next generation of students, who will then be able to take on board the developments from quantum gravity. Otherwise the skillset would have to be relearnt from scratch, which will be much harder. Even if the ideas that come up in interpreting quantum physics need to be abandoned, the skills needed to come up with those ideas will still be transferable to the study of quantum gravity.

5. Matthew
Posted at 19:56:39 Tuesday June 25 2024

Stochastic-quantum correspondence

Hello Dr. Cundy,

I haven't commented on this latest post in the series yet, and I don't actually have anything to say about it at the moment. But I did want to drop you a line about a paper I came across recently, "The Stochastic-Quantum Correspondence" by Jacob Barandes. (Available on arxiv, link here: https://arxiv.org/abs/2302.10778)

I think this paper has some great potential for the field of quantum foundations and I hope that people take notice and discuss it. (I do think there is something conceptually puzzling about the key idea of *indivisible* stochastic dynamics, but I won't get into that now.) I would certainly be interested in your take on it; maybe in another post in this series?

Regards,

Matthew

6. Nigel Cundy
Posted at 17:44:20 Wednesday June 26 2024

The Stochastic-Quantum Correspondence

Dear Matthew,

Thanks for sharing that paper. It looks interesting. I'll read over it properly when I get the chance.


