The Quantum Thomist

Musings about quantum physics, classical philosophy, and the connection between the two.


The Philosophy of Quantum Physics 3: The Everett model
Last modified on Sun Sep 3 20:59:01 2023


Introduction

I am having a look at different philosophical interpretations of quantum physics. This is the third post in the series. The first post gave a general introduction to quantum wave mechanics, and presented the Copenhagen interpretations. I have subsequently looked at spontaneous collapse models. Today it is the turn of the Everett interpretation.

The Everett, or many worlds, interpretation is one of the more popular interpretations of quantum physics, at least among physicists (and writers of science fiction). Its advocates are certainly very vocal, and seem to be particularly insistent that it is the only natural way of interpreting the theory and that other approaches, if they have any merit, collapse into it. The interpretation was first proposed in the 1950s, but it started to become popular in the 1980s and 1990s. There are several different models of the theory. To keep this post manageable, I am not going to discuss all of them, but I will focus on the model of David Wallace, with commentary by Tim Maudlin (a philosopher of physics whom I rate highly), and some of my own thoughts. Wallace was a physicist who turned philosopher. Why pick on him? Firstly, because I happen to have several of his works available for reference (although I will primarily use just one of them, his contribution to the Oxford Handbook of Philosophy of Physics). Secondly, he is one of the leading advocates for this interpretation, so a reasonable example to study. Thirdly, there is a personal connection, as he was a couple of years above me at university (albeit that I haven't interacted with him since that time). Obviously I can't do his view full justice in a short post (let alone the full scope of the Everett interpretation), so I recommend reading his work in detail for more information (and possibly more accurate information). I should also say that I don't keep close tabs on the literature on this topic, so what I present might be out of date.

So what is the many worlds interpretation? It holds that a quantum superposition represents neither our uncertainty about the particle, nor a single "particle" spread out over space, but several different copies of the particle. These copies, after decoherence, do not interact with each other. Other particles which interact with the particle also branch into multiple copies, each of which only interacts with one copy of the particle. One way of thinking about this is to say that each copy exists in its own self-contained universe. Most interpretations suppose that when we take a quantum measurement we get only a single result, as naively implied by what we observe. In the Everett interpretation, every possible result occurs, but in different universes. (The term "universes" ought to be used with the caveat that there are divergent views of how the multiplicity occurs and manifests itself; some call for literally different universes, while others are somewhat more subtle. I will use the expression as a convenient shorthand.) The reason we appear to observe a single result is that we are also quantum objects in a superposition. There are numerous copies of ourselves in different universes. So I might observe the result of the experiment as spin up, but my counterpart, created when I become entangled with the superposition, observes the result as spin down. Thus each version of me observes only a single result, even though in practice both occur.

The Everett interpretation is not to be confused with the multi-verse, which is the belief that there are multiple universes with different physical constants (either from a string landscape, or different inflationary bubbles, or ...), which is sometimes used as a possible response to the anthropic principle. The two ideas are independent of each other.

The interpretation

The claim is made that the Everett interpretation is just quantum mechanics interpreted in a traditionally realist fashion. While there are philosophical puzzles in how classical (non-quantum) theories are to be interpreted, it is agreed that there are no paradoxes. The objects of those theories are mathematical objects, which in some way (and it is here that the disagreement arises) represent the physical world. Different states represented in the mathematics represent different ways in which the world could be.

Quantum physics, on the other hand, is usually held to be different. Here one can have a particle in a superposition: ψ = α ψ1 + β ψ2, where ψ1 and ψ2 correspond to different states representing different possible measurement outcomes. For example, they might refer to two different possible locations of a quantum particle, with ψ1 representing that it is here and ψ2 representing that it is over there. (Wallace uses the example of live and dead cats, following Schroedinger.) α and β are complex constants, whose modulus square gives the probability that the corresponding outcome is observed. The problem is that the superposition state, especially when applied to macroscopic objects, is difficult to interpret. Does it represent a cat that is both alive and dead at the same time? Neither alive nor dead? It is not something which makes much sense.
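To make the formalism concrete, here is a minimal sketch in Python (my own illustration; the numerical values of α and β are arbitrary):

    import numpy as np

    # A two-state superposition psi = alpha*psi1 + beta*psi2.
    # Illustrative amplitudes; in general they are complex numbers.
    alpha = np.sqrt(1/3)
    beta = np.sqrt(2/3) * np.exp(1j * np.pi / 4)

    # Born's rule: the probability of observing each outcome is the
    # modulus square of the corresponding amplitude. (The phase on beta
    # does not affect the modulus square.)
    p1 = abs(alpha) ** 2   # 0.333...
    p2 = abs(beta) ** 2    # 0.666...
    print(p1, p2, p1 + p2) # the probabilities sum to 1 (normalisation)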

So what are the options? One can try to change the philosophy of science. A more instrumentalist approach, such as the Copenhagen interpretation, is not attractive to philosophers, who want to know what the underlying beables are. An epistemic approach in effect denies that the wavefunction is a representation of reality, breaking a link that was crucial in the philosophy of pre-quantum physics. The alternative is to change the physics to make it something more palatable. Examples of this are the spontaneous collapse or pilot wave interpretations. These are liked by philosophers, but less so by physicists, who are aware of the difficulties in fine-tuning these so that they match empirical observation. (I discussed the spontaneous collapse models in the previous post; I will look at the pilot wave models presently. In summary -- paraphrasing Wallace here, although I agree with him -- the pilot wave model was developed out of non-relativistic wave mechanics, and it has not yet been shown that it can be adapted to reproduce the standard model of particle physics as a relativistic quantum field theory -- something which most other interpretations have no problems with.)

So what are the alternatives? Wallace suggests that in the Everett interpretation we need neither to change the philosophy from that used to understand pre-quantum physics, nor to change the physics away from the standard model. The flaw, he thinks, in the reasoning that leads to a "change the physics or change the philosophy" approach lies in supposing that indefinite states in the mathematical description imply indefinite states in reality. He uses the analogy of a classical field, which can be written as the sum of two appropriately weighted components. One wouldn't say that the field is in an indefinite state, a superposition between the two components. Instead, one says that it describes two pulses, in different locations and with different momenta. Superpositions in classical physics refer to some sort of multiplicity. So why not also in quantum physics?

Here I have to interrupt Wallace's presentation of the Everett interpretation, and make an observation. There is an important difference between the classical and quantum state. In classical physics, when we sum up two parts of the field, the weights given to them behave according to the rules of probabilities or frequencies. It is thus natural to interpret them as frequencies, and consequently in terms of a multiplicity of states. In quantum physics, one can have the same sort of addition of states as seen in classical physics -- for example, after particles have decohered, or when the wavefunction is mixed between different states representing different types of particle. Here one can interpret the wavefunction as representing different particles, but that's not controversial, as we observe multiple particles. However, in the sort of superposition Wallace is referring to, the weights are amplitudes. Although related to probabilities (via Born's rule), they do not obey the same mathematical rules as probabilities (or frequencies), and therefore it is rather rash to link them to multiplicities. The analogy he uses thus breaks down at the very point from which he draws his conclusion. I therefore think that this analogy is more misleading than useful.
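The point can be made concrete with a small calculation (my own illustration, not Wallace's). When two contributions lead to the same outcome, quantum mechanics adds the amplitudes before squaring, so the combined weight is not the sum of the individual weights, as it would be for probabilities or frequencies:

    import numpy as np

    # Two contributions to the same outcome, each with amplitude 1/sqrt(2),
    # but with a relative phase between them.
    a1 = 1 / np.sqrt(2)
    a2 = np.exp(1j * np.pi) / np.sqrt(2)  # phase pi: destructive interference

    # If the weights were probabilities (or frequencies), they would add:
    print(abs(a1)**2 + abs(a2)**2)  # 1.0

    # But amplitudes add first and are squared afterwards:
    print(abs(a1 + a2)**2)          # 0.0 (up to rounding) for this phase

No assignment of frequencies or universe-counts to the two contributions separately can reproduce this behaviour.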

So back to Wallace's presentation.

We say of a macroscopic object that it is described by a given Hilbert space, appropriate for that object. Some states in that Hilbert space are mathematically definite (i.e. correspond to things that we actually observe, such as a particle being in a given location), but others are indefinite, and so represent things we don't observe (such as a superposition between two different particle locations). To say that the Hilbert space represents the possible states of the object thus contradicts observation: it also contains states which we don't or even can't observe. There is also the problem that, for compound objects, the same Hilbert space will describe entirely different objects which we would normally consider distinct, such as cats and dogs. So it is misleading to say that the Hilbert space represents that particular object. It is also misleading to say that the indefinite state is a quantum particle in a superposed state of being here and being over there. Better to say that the state is a superposition of a quantum particle that is here and a quantum particle that is over there.

Do we observe such superpositions? No, but then the universe is a big place and it would be foolish to say that we observe all of it. A theory that claims that macroscopic objects are in indefinite states seems to make a mockery of our usual understanding. But saying that there are multiple such objects does not. When the superposition interacts with its surroundings, it rapidly becomes entangled with them, so we do not just have the superposition of the single particle, but an extended superposition of much more than that. That represents some worlds where the particle is here and other worlds where it is over there. These worlds are in superposition with each other, and if we follow through with the idea that superposition implies multiplicity, then there are multiple worlds.

So, the Everett interpretation is dependent on two assumptions: the physical postulate that the physical world is represented by a unitarily evolving quantum state, and a philosophical claim that if the quantum state is to be interpreted realistically, then a superposition must be understood as describing multiplicity.

There are no additional physical postulates describing a division into different worlds: just quantum mechanical wavefunctions (or Fock states) evolving under the Schroedinger equation. It is then claimed that it makes no sense to even think of the Everett interpretation as one of many interpretations: it is just quantum mechanics itself, interpreted as we have always interpreted physical theories. As such, the interpretation is tightly constrained. If there are any problems in it, they can be resolved by hard study of quantum physics itself.

Born's rule and probability

I will use the language of "branching into different universes" to describe what happens in the Everett interpretation when a superposition in the wavefunction is created. This terminology is perhaps misleading, since various supporters of this interpretation have different understandings of it, and while some incorporate the idea of a "branching" event, others are a bit more nuanced. However, all agree that there is some transition that changes a single eigenstate representing a single quantum particle into the multiplicity of particles implied (in this interpretation) by a superposition, and I will use "branching" to denote that change, whatever it is.

One of the most important criticisms of the Everett interpretation concerns how it reproduces the Born rule, and the notion of probability. The simplistic way of addressing this (and this is a straw man, as it is not what those who propose the Everett interpretation support, but I need to get this discussion out of the way first to dispel an illusion) is to suppose that the number of universes generated when a superposition is created is proportional to the standard quantum probability for that particular outcome. So, for example, if you measure the spin of a spin-half fermion, then in the standard calculation spin up is measured 50% of the time, and spin down the other 50% of the time. One would then suppose that half of the universes are spin up and the other half spin down. There is then a 50% chance that the "you" that encountered that particular measurement is in one set of universes rather than the other, and that explains the probability.

Except, this does not work, for various reasons. Firstly, many probabilities in quantum physics are irrational numbers. For example, measuring the spin of a spin-half particle along an axis at an angle θ to its preparation axis gives spin up with probability cos²(θ/2), which for a generic angle is irrational. No finite number of branched universes can yield an irrational relative frequency, so one would need an infinite number of them.

Secondly, there is the issue that the branching into universes is generated when the wavefunction enters into a superposition of states (under the assumption that superposition implies multiplicity), but distinct probabilities don't arise until there is decoherence. After decoherence, we can talk about probabilities for the various quantum states, and parametrising the uncertainty in quantum physics in terms of counting universes would make sense, barring the caveat of the third point below. Before decoherence, we parametrise the superposition in terms of amplitudes. Probabilities map to frequencies, and can be used to predict frequency distributions. Amplitudes don't and can't, at least not directly, and not without losing information crucial to the parametrisation of the quantum state. If the uncertainty in quantum physics arises from counting universes, then that can't be captured accurately in an amplitude. The different universes, recall, are a way of describing the multiplicities in the different physical beables. In reality these universes, and the particles in them, contain all the information contained within the superposition. But the amplitude contains additional information compared to the simple counting of universes. This information is important when predicting interference effects. When two wavefronts (each with their own superposition) coincide, the total probability is generated from the sum of the amplitudes. If the superposition contained within each wavefront implies a multiplicity of universes, then when there is interference these universes would have to be either multiplied or destroyed (depending on whether the interference is constructive or destructive) as the two wavefronts pass through each other. It just turns into a huge mess.

Thirdly, decoherence is only an approximate process. It largely picks out a preferred basis, removing superpositions in that basis, but not quite completely. So even after decoherence, there is still the problem that the superposition requires amplitudes rather than probabilities to parametrise it. The amplitudes for those states not in the preferred basis are, of course, so small that we normally would not bother with them; certainly they would never be detected in an experiment. But they still exist, and thus pose a problem for any philosophy of physics which requires that they do not exist.
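A toy model illustrates the point (my own sketch; the exponential suppression factor and timescale are assumptions of the toy model, not derived from any particular system):

    import numpy as np

    # Reduced density matrix for an equal two-state superposition.
    # Decoherence suppresses the off-diagonal (interference) terms,
    # modelled here by a factor exp(-t/tau).
    tau = 1.0
    for t in [0.0, 5.0, 50.0]:
        off_diag = 0.5 * np.exp(-t / tau)
        rho = np.array([[0.5, off_diag],
                        [off_diag, 0.5]])
        print(t, rho[0, 1])  # tiny at large t, but never exactly zero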

Thus we need something a bit more sophisticated to correctly account for Born's rule in the many worlds interpretation. Rather than counting branches, we need to assign an amplitude to each branch. Which is, of course, what happens in standard quantum formalism. So if there are only two states in the superposition, then there are only two branches, each with its own amplitude. But then, what does that amplitude physically represent? How do we relate it to experimental results?

In the Everett interpretation, there is nothing but the dynamics of the quantum state, and this dynamics is deterministic. This determinism is also true of the pilot wave interpretations, but there the uncertainty arises because there are underlying hidden variables, and we use probability to parametrise our lack of knowledge of those variables. In the Everett interpretation, however, everything is out in the open. There is only the wavefunction, and in principle that can be fully known (as long as we don't try to extract knowledge from non-commuting bases). Traditionally, probability is used to parametrise uncertainty. That uncertainty can arise either from our lack of knowledge of the actual physical state (as in the pilot wave interpretation, Quantum Bayesianism, and others), or because the dynamics of the system is indeterminate, and there are multiple possible outcomes from an initial state, only one of which becomes actual (as in collapse interpretations and others). Or one can combine these two sources of uncertainty. But in the Everett interpretation, there is only one outcome of the system as a whole (even if that involves multiple universes), and there are (in principle) no hidden variables. So how does the notion of probability arise?

One can, perhaps, argue that it is uncertain at the outset which branch of the universe we ourselves will end up in: there is a 50% chance that we would find ourselves in a spin up branch, and a 50% chance that we would find ourselves in a spin down branch. Except, this is not how the Everett interpretation works. We end up in both branches, with one copy of us measuring spin up, and the other copy measuring spin down. So what does it mean to say that there is a 50% chance that we would measure that particular outcome?

The purpose of Born's rule in quantum physics is to predict frequency distributions, in order to compare against experiment. It is a crucial part of the quantum recipe; without it we cannot map theory to experiment. But the standard understanding of the Born rule cannot be mapped directly to the Everett interpretation. So there are two problems: firstly, how do we derive the Born rule (needed to reproduce probabilities to compare against experimentally observed frequencies) from the principles behind the Everett interpretation (where the only ingredients are the Schroedinger evolution of the wavefunction and the postulate that a superposition implies a multiplicity of states); and secondly, what does the notion of probability itself mean in the context of the Everett interpretation?

Some have tried to interpret this through decision theory. Here we base our actions on maximising the expected utility of the action. So, suppose that we have two boxes, A and B, and we are forced to put our cat into one of them. There is a radioactive trigger, which 2/3 of the time would cause poison to be released into box A, and 1/3 of the time would cause it to be released into box B. Clearly the right thing to do is to put the cat into box B: there is a greater chance that it will survive. Even should the cat die, we can at least console ourselves that we made the right choice.
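In this pre-quantum setting the calculation is trivial (a minimal sketch; the survival probabilities follow from the set-up above):

    # Probability of poison being released into each box.
    p_poison = {"A": 2/3, "B": 1/3}

    # Expected chance of survival for each choice of box.
    for box in ("A", "B"):
        p_survive = 1 - p_poison[box]
        print(box, p_survive)  # box B gives the cat a 2/3 chance of survival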

In the many worlds interpretation, what this implies is that a superposition is generated, meaning that multiple cats come into being, one of which will survive while the other dies. This happens no matter which box you put the cat in. Obviously, the squared amplitude associated with each branch is larger in one case than the other. But why should that concern us? We still have one cat living and the other dying, no matter what we do. We ourselves would, of course, also become entangled with the quantum state when we look at the cat, and one version of ourselves would be happy and the other sad. After decoherence, the branches evolve independently, which means that the squared amplitude assigned to the version of us in each branch is meaningless to that version of ourselves. So why do we invoke it at all? The challenge for Everettians is to derive from their philosophy a principle of decision making that reproduces the straightforward understanding of other interpretations.

One solution is to suppose that instead of two branches, there are in fact three. In two of these the cat in box A dies, and in the other one the cat in box B dies. We are then justified in putting the cat in box B, because twice as many of its successors will survive. There are ways in which one can set up the experiment so that instead of a single quantum event or superposition determining the odds, there are multiple such events. One can, for example, have a system where there is a radioactive particle with an amplitude of √(2/3) of decaying in a particular period of time. If it does so, then its decay product is also radioactive, with an amplitude of √(1/2) of decaying in the given time window. The poison is only released into box B if this second particle decays. There are thus three branches, each with an amplitude √(1/3), two of which lead to the cat in box A being killed, and one leading to the cat in box B being killed. We now have a symmetry between the branches, and that can be used to count how many cats die when we make a particular choice.
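The arithmetic behind the three equal branches can be checked directly (my own verification of the numbers in the set-up above):

    import math

    # Branch amplitudes for the two-step experiment:
    no_first_decay       = math.sqrt(1 - 2/3)                   # poison into box A
    first_but_not_second = math.sqrt(2/3) * math.sqrt(1 - 1/2)  # poison into box A
    both_decay           = math.sqrt(2/3) * math.sqrt(1/2)      # poison into box B

    # All three equal sqrt(1/3), so each branch has squared amplitude 1/3.
    print(no_first_decay, first_but_not_second, both_decay)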

So the idea is that, when we have a decision to make which relies on branches with different squared amplitudes, we restate the problem so that there are multiple branches all with the same squared amplitude. The branches are still parametrised by an amplitude, so one doesn't lose the information required to describe interference effects, and one avoids the problems associated with parametrising multiplicity in terms of counting branches which I described above.

There are a few objections to this. Firstly, is it true that a system which splits the wavefunction in this way is functionally equivalent to one in which there are naturally just two branches, with amplitudes √(1/3) and √(2/3)? It is not so clear that it is. The approach relies on a principle of indifference between the two experimental set-ups. But is that justified?

Secondly, what about those amplitudes which don't nicely divide into a finite number of equal parts? Clearly this method would not then work.

Thirdly, the approach is based on decision theory. The probabilities simply affect how we are supposed to act in a given circumstance. There is, however, a jump between this and the underlying ontology. If we are supposed to pretend that there are three branches to the universe, when in practice there are only two, our actions might not lead to the best results. Consider the case where we split the √(1/3) branch of the wavefunction, so there are now three branches in reality: one with an amplitude of √(2/3) where the cat lives, and two with an amplitude of √(1/6) where the cat dies. The way this method would proceed would be to split the √(2/3) branch into four, so that all the branches have the same amplitude, and the living cats still outnumber the dead ones 2:1. But if this division is only in our heads rather than in reality (as a means to make informed decisions), then in practice acting this way would lead to two of the branch cats dying and one living. If the split of the √(2/3) branch is in reality rather than just in our heads, then you are back to the "counting branches" model of the Everett interpretation which I criticised above.

Indeed, that it is based on decision theory causes another problem (which I have taken from Maudlin). Suppose you are in a restaurant and cannot decide between two desserts. The solution, in the Everett interpretation, is simple: make your decision based on a quantum coin toss. Then you will, in a sense, have both desserts, as one branch of you chooses one and the other version takes the other. Of course, the quantum coin toss itself comes at a cost -- such experiments are expensive to perform -- but maybe the benefit of having two desserts makes up for it. But, of course, in other interpretations, where one can only eat one or other of the desserts, nobody would act in this way. It is thus not clear that trying to explain the probabilities in terms of decision theory will always lead to the same actions as those of someone who believes in the Copenhagen interpretation. Other factors differ equally: the Copenhagenist feels anxiety about whether or not his cat is going to die, while the Everett advocate who kills his cat can console himself that his counterparts still have a living cat.

So the idea that we can think of "probabilities" as guiding our decisions in the Everett interpretation fails both because it does not lead to the same actions as in the Copenhagen interpretation, and because it does not explain the underlying philosophical problem of what the Born rule probabilities actually represent, and why we need to use the Born rule (rather than some other mechanism) to reproduce experimental results. Nor does it address all the uses we put probability to, for example the use of Bayes' theorem to update our uncertainty about some unknown data.

Other criticisms

The first criticism concerns the motivation given for the Everett interpretation. It is claimed that it is just quantum physics interpreted as we interpret classical physics. It also relies on the postulate that a quantum superposition represents multiplicity. The problem I have with these two statements is that a quantum amplitude is something completely alien to classical physics. One cannot, then, interpret it in the way that one interprets classical physics.

There might be an objection to my last statement: what about waves in classical physics? These are certainly similar to the quantum mechanical wavefunction in certain respects. The closest analogue is perhaps a wave in a classical electromagnetic field. This too is controlled by amplitudes; there are interference effects, and so on. The mathematical wave equation for an electromagnetic field differs from the Schroedinger, Klein-Gordon or Dirac equations used in wave mechanics (and once we get to field theory, things are more complex still, as the evolution of the Fock state is based on the time-ordered exponential of an operator rather than a straightforward differential equation). However, the most important difference is that the classical electromagnetic field is a field and not a particle. It does not come in discrete lumps. One can thus have half of the intensity in one place, and the other half in another place, in a way that is impossible for particles. We also observe both of those wave-packets in the same universe at the same time, while in the Everett interpretation only one of them is observable for each particular version of ourselves. In the classical analogy, every version of ourselves observes every peak in the field intensity. The analogy between a wave in a classical field and a quantum particle thus breaks down. I cannot see how one can use that analogy to justify the idea that quantum superposition implies multiplicity.
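The difference can be seen in a toy simulation (my own illustration): a classical field split into two packets registers on both detectors in every run, while a quantum particle in an equal superposition gives a single click per run:

    import numpy as np

    rng = np.random.default_rng(0)

    # Classical field: the intensity divides between the two wavepackets,
    # and both detectors register energy on every run.
    print("classical intensities:", [0.5, 0.5])

    # Quantum particle: an equal superposition of "here" and "there".
    # Each run produces exactly one detection, distributed by Born's rule.
    amplitudes = np.array([1, 1]) / np.sqrt(2)
    probs = np.abs(amplitudes) ** 2
    print("quantum detections:",
          list(rng.choice(["here", "there"], size=10, p=probs)))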

The second part of the justification is that the Everett interpretation comes closest, in quantum physics, to mirroring the standard philosophy of classical physics: we have to change the least. Even if that is true, why would we necessarily feel the need to keep things the same? What is wrong with having an entirely different philosophical approach? We should not set the philosophical views of Galileo, Kepler, Boyle and Newton in stone as though they were divine writ. They could have been wrong, and if there is a chance that they were, it is right for us to explore alternatives.

The question is how quantum superpositions can be viewed as multiplicities. One way of answering this is to say that they cannot, which implies either that one needs to abandon the Everett interpretation, or that one must accept that it needs something in addition to the evolution of the wavefunction, such as a physical branching between universes. In other words, you need to expand the physics in order to save the interpretation. These modification strategies have fallen into two categories. The many-exact-worlds theories add the existence of worlds to the state vector: when there is a superposition, the world literally does split into different realities. In that case you need to suppose a branching mechanism that generates these realities in addition to the Schroedinger evolution of the wavefunction. The alternative is the many-minds theory, in which the multiplicity is illusory. The observer becomes entangled with the quantum superposition. Each part of that superposition only sees one outcome, but in reality they are all present as part of the one single wavefunction. So, instead of a basis spanning many exact worlds, we have a basis spanning many different consciousnesses. Both of these approaches have fallen out of favour. Many-minds approaches tend to be committed to some form of Cartesian dualism -- if there is a fundamental law that consciousness is associated with a given state, then there is no hope of a non-circular explanation of how consciousness arises from fundamental physics. Equally, the many-worlds scenario has similar difficulties related to how macroscopic objects fit into these different worlds. If physics presupposes the existence of these worlds, then the existence of such worlds cannot be derived from fundamental physics.

Wallace also states that both approaches undermine the reason for accepting the Everett interpretation: that it just describes quantum physics, without adding any additional structures. Wallace instead falls back on an appeal to decoherence, which he states allows a clear definition of the many different worlds. After decoherence, interference effects are suppressed, and there is a clear distinction between the worlds. The objections to this are, firstly, that decoherence is not an exact process, and secondly that it does not provide any answer to how we are to think of a quantum superposition before it has decohered. Either multiple worlds or multiple minds are part of our ontology (in which case decoherence is incapable of defining them), or they do not really exist (in which case decoherence is not an explanation for them). Wallace tries to circumvent this dilemma by saying that what is real is not necessarily fundamental, appealing to the ideas behind emergence.

There is also the problem of local beables. If all that exists is the quantum state (which is spread across different branches of the universe), then how does this relate to the macroscopic objects which we see around us? In Aristotelian terms, the quantum state can describe the form (albeit that in the Everett interpretation the idea that superposition implies multiplicity complicates this), but what about the matter, the stuff that makes existence concrete? The beables would be represented by the sum total of all the particles in all the branches. But clearly this framework does not lead to local beables. The existence of a particle in one branch implies the existence of the other parts of the superposition; the two cannot be separated. So a single branch of the universe is not localised, in the sense that it is not independent of what is happening elsewhere in the universe. On the other hand, the superposition as a whole is not localised either.

Wallace tries to consider macro-objects as patterns, where the existence of a pattern as a real thing depends on the usefulness -- in particular the explanatory power and predictive reliability -- of theories which admit that pattern in their ontology. He uses the examples of temperature and phonons as emergent objects which are useful in explaining phenomena. Are quasi-particles such as phonons real? They can be created and destroyed; they can be scattered; they can be detected. We have no more evidence than this that real particles exist. And yet quasi-particles consist only of a pattern within the constituents of the solid; they are usually invoked to describe the vibrational modes of the atoms.

Wallace claims that the branches which appear after decoherence are the same sort of things as quasi-particles. They are the sort of entities that in other areas of physics we take seriously. They are emergent, robust structures in the quantum state, and as such he states that we ought to take them as ontologically seriously as we do with quasi-particles or macroscopic objects.

Except, I am not sure that the analogy is valid. A phonon emerges from the process of constructing an effective field theory for a solid. Effectively it arises from changing the basis from that describing electrons, quarks, and so on, to something that is more suitable to capture the physics of the solid crystal. This is also the link between the microscopic world of quantum physics and the macroscopic world. However, the different "branches" after decoherence do not arise using this mechanism. The effects of quasi-particles can be observed, and they influence other observed particles. After decoherence, that is not true for the various terms in a quantum physics superposition. Quasi-particles can, of course, themselves be in superposition. Thus the branches in quantum physics are not emergent in the sense that we usually think about emergence.

I also have difficulties with the way that Wallace (following Dennett) describes macroscopic objects as merely patterns which have explanatory power. This seems to confuse the theory with reality. Obviously, as an Aristotelian, I believe that there is some sense in which the structures in the theory correspond to the form, or some of the potentia within the form, of the physical object, but that does not mean that we should confuse the theory with the physical object itself. After all, in a solid, the physical substance is the solid itself. Phonons are part of our representation of the form of the solid. There is a difference between the representation of the form and the form itself; but even if we neglect this, the thing that exists is the solid object, not the patterns that at best subsist in it. And, of course, the substance is the union between form and matter, with the matter not represented in or accessible to the theoretical structure. We can think of an isolated electron, or, in principle at least, an isolated quark (at least at those temperatures and chemical potentials where there is a quark-gluon plasma rather than confined quarks). But once these are bound into a larger substance they lose their identity in favour of the substance they subsist in. Phonons do not have an identity outside of the theoretical description of the solid substance. Do phonons exist? Not in the same sense that a free electron or a cat exists: those are substances in their own right, while phonons are part of the framework used to represent the internal dynamics of a substance. It is an error to confuse the mathematical description with reality, and Wallace's interpretation relies on this error.

Conclusion

Wallace claims that the Everett interpretation is the natural way to interpret quantum physics while preserving realism and as much as possible of the philosophy behind classical physics. This, of course, raises the question of why we would want, or feel the need, to preserve that philosophy.

Any viable interpretation of quantum physics needs to do two things. Firstly, it needs to reproduce the standard model of particle physics, as described through the instrumentalist version of the Copenhagen interpretation, along with an explanation of why that works so well. Or, if it predicts different physical outcomes, it needs experimental evidence to show that it, rather than the accepted theory, is correct. Secondly, it needs to leave no philosophical loose ends.

On the second point, the Everett interpretation has several issues. Wallace's presentation of it relies on two analogies. Firstly, there is an analogy with wavepackets in classical physics, used to justify the core assumption that superposition implies multiplicity. Secondly, there is an analogy with the idea of emergent physics, used to understand the local beables of the theory, and to argue that the unobserved branches of a decohered superposition should be regarded as ontologically real. In any case, an argument from analogy is invariably a weak argument. But that is particularly true in these two cases, where there are significant dissimilarities between the examples that Wallace points to and the quantum wavefunctions used in the Everett interpretation, and these differences lie in precisely the areas which allow us to draw the conclusions from the analogous examples. Wallace himself admits that the alternative approaches to extracting local beables, such as many minds and many exact worlds, have enough problems to make them non-viable, because they require an additional and unmotivated branching mechanism, which leaves the interpretation no better than the non-instrumentalist Copenhagen interpretation with its additional mechanism of wavefunction collapse.

As far as reproducing the physics goes, the Everett interpretation has a major issue in reproducing the Born rule, which is absolutely fundamental to how we compare the physical theory to experimental results. Wallace tries to get around this by appealing to decision theory. But, as I have argued, this approach has its problems. The simple method of counting branches to give a frequency distribution fails before decoherence, where the branches are associated with amplitudes rather than probabilities, which do not map neatly to a frequency distribution. Assigning an amplitude to a single branch for each term in the superposition, rather than having the number of branches represent the frequency distribution, resolves this problem, and more closely matches the physics, but instead leads to issues in how to interpret those amplitudes in terms of a probability, and consequently a prediction for an observed frequency distribution.

Then, of course, after decoherence, the alternative branches are entirely unobservable. Thus the Everett interpretation cannot, even if it overcomes its problems, be confirmed to be correct. Admittedly, this is also true of most other interpretations (excluding those which deviate from the physics), but it ought to make advocates of the interpretation considerably more humble than they often appear to be.

Acknowledgements

I have drawn on several sources for this post, but the two most important are Maudlin's work on the philosophy of quantum physics, and Wallace's contribution to the Oxford Handbook of the Philosophy of Physics.





Reader Comments:

1. Marc
Posted at 14:45:37 Wednesday June 28 2023



Very interesting! Thanks a lot for all your great posts!! I have read a few of them - lastly this one - and they are always very nourishing for mind and faith...

2. Will Worrock
Posted at 14:35:23 Friday June 30 2023

Question regarding an argument between a few commenters on Edward Feser’s blog

Hello there Dr. Cundy, I am sorry to bother you about another argument made by StardustyPsyche, but I feel like I have to get responses to his arguments. The thread in which he makes the argument is in the blog post "The Associationist Mindset", and he states that consciousness is a hallucination. When it's pointed out by other commenters how self-refuting this sounds, he immediately states there is nothing self-refuting about "his" materialism and that they are confusing hallucination with unreality. I'll let you read the rest of the thread over at the blog post, but I have to ask you: do any of these arguments make sense to you, or is it just pure bluster on his part, and he's just being a troll? If he is a troll and he comes over here to flood the comment section because he saw my comments, I am sorry.

3. Marc
Posted at 14:00:54 Saturday July 1 2023

@Will Worrock

I dare intervene - even though I am certainly not qualified in any way. However the point made by StardustyPsyche is obviously self-refuting: hallucination assumes a consciousness, a self, that hallucinates. This is absurd, as far as I can tell...

4. Nigel Cundy
Posted at 18:38:27 Saturday July 1 2023



I'm also not so qualified to discuss neuroscience -- I am afraid that the philosophy of (quantum) physics (and quantum physics itself) is more than enough for me without delving into the philosophy and physics of the mind. I usually read Professor Feser's posts, but rarely go into the comment section. From the times I have, I know that StardustyPsyche is a regular contrary voice on his blog, and not usually particularly coherent in his thoughts.

This is the quotation in question:

There absolutely must be some process of me considering me, else who or what would be denying me except me? So, yes, to deny any and all sorts of self awareness would be a self defeating assertion.

However, consciousness is an hallucination. The brain resides in darkness and silence inside the skull. All the brain get is pulse trains, electrochemical variations coming in along nerve cells.

The vivid and detailed sensory show we experience is an hallucination constructed by the brain based on innate faculties inherited through the process of biological evolution, learned throughout our lives, and the sense data stream we get from our senses.

...

The details of our perceptions include many distortions that have been studied and characterized by modern scientific perceptual studies.

To the extent that an illusion is a thing "wrongly perceived" then much of our consciousness is indeed illusory.

Common sense notions of the "will", "wants", "decisions", "choices", "memories" quickly break down and are shown to be at best highly incomplete and in some respects quite false.

The underlying mechanisms of our consciousness cannot be detected or arrived at through introspection. Only modern scientific studies can begin to identify and explain the mechanisms of the human brain.

He seems to subscribe to a strong version of idealism (or maybe Humean empiricism). The problem is that he supposes that, because the brain receives "pulse trains, electrochemical variations coming in along nerve cells", those impulses are nothing but nervous impulses. It is possible that they are both that and the means by which we get accurate information about the external world. After all, photons come from the sun, scatter off an object, and pass into our eye, where they are focussed by the lens onto the retina; there they trigger light-sensitive cells, and the signal is carried by the optic nerve into the brain. There is a clear sequence of causality, and the patterns we see do tell us something about the external object. The brain processes the data so that the image we get is a representation in a one-to-one mapping with the physical object, and thus captures details of the actual object. When we think rightly about the representation (whether the one in our head or the more abstract representations used in scientific theories), those thoughts also apply to the object. A hallucination, on the other hand, is when the brain misfires and presents an image without an external cause behind it. The images in our brain might be similar, but the causes of those images clearly distinguish between the hallucination and genuine sensory input. Since our consciousness processes both hallucinations and sensory inputs, together with our intellectual reasoning and our memory, and drives our will, causing us to purposely interact with the world, I cannot see how it can be reduced to just a matter of hallucination.

I agree with him that scientific studies of the brain are important. I agree with him that they will (hopefully) one day explain how our thoughts and memories correspond with brain structures. But I cannot agree that such studies will thereby make things like the "will", "wants", "decisions", "choices" and "memories" illusory. After all, Einstein's theory of gravity, at least to a sufficiently good approximation that it hasn't yet seen any counterexamples, describes how things fall to the ground or planets stay in orbit. But that doesn't make things falling to the ground, or planets staying in orbit, an illusion. It just means that we better understand the processes by which things happen. You can have both the scientific explanation (particles travel along geodesics in curved space-time, with space-time being curved according to the stress-energy tensor, which includes contributions from the mass of various other particles) and the more basic-level explanation (that apple just fell to the ground), and they don't contradict each other but complement each other. And, of course, the scientific "explanation" just describes, but still leaves gaps in terms of the full philosophical explanation (e.g. why do things follow these particular laws of physics in the first place). It is by its nature not everything.

A scientific exploration of the brain won't show that we don't have a will. It could, in principle, describe how the will arises in terms of more basic physical processes, and it will no doubt express everything in a more abstract language that might obscure the macroscopic picture. But having a theory that explains something does not show that that something is reducible to just that theory, or is just an illusion. The more detailed explanation and the general understanding of "the will" will complement each other, as two different ways of looking at the same thing. And a complete and accurate scientific model of the brain will find something akin to consciousness or the will (maybe emerging in a subtle way) -- otherwise it will contradict observation and just be another entry in the long list of disproved scientific models.

So I think StardustyPsyche has a major problem in his philosophy of science if he believes that a scientific description of the mechanisms behind something shows that that something doesn't exist. That the underlying mechanisms of our consciousness cannot be detected or arrived at through introspection (who claimed that they can be?) does not show that introspection has nothing useful to say on the macroscopic side of things.

I am also not quite sure what he means by "consciousness," but it probably isn't the same way that I understand it. Otherwise his statement that "consciousness is a hallucination" would be so obviously contradictory that it is difficult to see how anyone could write it.

Sorry, that's a bit of a rambling rant. In short, his posts are worth considering because they raise interesting questions which are worth pondering, even if they are somewhat lacking in interesting answers.

5. Will Worrock
Posted at 02:51:56 Sunday July 2 2023

Response to the replies

Thank you for the replies. Ever since he's been showing up in the comments section of Edward Feser's blog, StardustyPsyche doesn't seem to have a coherent philosophy. He seems to be a contrarian for contrarianism's sake. Maybe he is a troll.

6. Dominik Kowalski
Posted at 00:42:25 Saturday July 22 2023



Dr. Cundy, what exactly does the idea that fundamental physics doesn't include causation in its mathematics amount to (Papineau etc)?

I know that I have read about it multiple times, but I can no longer find the treatment of this idea. Where did you write about it in your book?

Also I'm wondering, in which way causation is supposed to be represented in macrophysical processes. I'm not sure I understand how the fundamental and the macrophysical mathematical descriptions are supposed to differ so that the latter seems to include causation, while the former does not. Is the reason an account of causation more specific than the Aristotelian one? And doesn't the proposal presuppose the idea that the mathematical equation is an exhaustive representation of the process, instead of an abstraction with a descriptive role? Because if I were prompted to explain my own position right now, I'd guess that the nature of causation is as much mathematically describable as the nature of properties; it's more fundamental and just not something math is concerned with.

What are your thoughts?

Thank you.

7. Nigel Cundy
Posted at 18:10:12 Saturday July 22 2023

Causation

I would disagree that fundamental physics does not teach us about causation. Of course, it depends in part on what sort of causation we are discussing, and certainly some forms of causation are not present in the mathematics. I personally like to distinguish between what I call event causality and substance causality. Event causality asks "What is the cause of this event?" This is indeterminate in quantum physics (at least in most interpretations; not in the Everett or pilot wave interpretations, which deal with the apparent indeterminacy in different ways). Although we can assign probabilities to certain events, we cannot predict which of these events will occur. That would suggest either that the notion of event causality is not useful, or that we should be looking for a cause that lies outside the physical representation.

Hume's idea of causality as a necessary connection is also difficult to reconcile with quantum physics, with the apparent indeterminacy undermining the "necessary" part of that definition.

But then there is also substance causality. This asks, for example, "Which being did this being emerge from?" (One can also generalise this to particular physical states.) Aristotelian efficient causality is an example of substance causality. The principle of substance causality -- that every effect has a cause -- is, to my mind, clearly present in the mathematics of quantum field theory. Here the time evolution operator is constructed from creation and annihilation operators -- I will discuss this in more detail in my next post, on the pilot wave interpretation -- and it basically states that every possible physical change involves the destruction (or corruption, in Aristotle's language) of one or more physical states and the creation (or generation, in Aristotle's language) of other physical states. Of course, there are disputes about how precisely to interpret this in terms of the beables of the system (which is what this series of posts is all about), and the issue becomes slightly clouded by renormalisation. But if we interpret this straightforwardly, it is a mathematical representation of efficient causation: one set of states being corrupted, and another being generated in its place.
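To give a schematic example (the details will have to wait for the next post): the time evolution operator takes the form U = T exp(-i ∫ H(t) dt), where T denotes time ordering, and the interaction part of H is a sum of products of creation and annihilation operators. In QED, for instance, the electron-photon interaction term expands into combinations which, among other things, annihilate an electron and a photon and create a new electron state. Every term in the expansion thus destroys one set of states and generates another -- which is just the pattern of corruption and generation described above.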

I think the issue is clouded by classical mechanics and wave mechanics, where the evolution of states is usually described by differential equations, and one doesn't usually use the language of states in the standard formulations. One can rewrite them in a language of creation and annihilation of states, but it is cumbersome and not very natural. As such, physicists don't traditionally make much of the language of causality, and I think this is why some philosophers have claimed there is no real room for causality in modern physics. But the concept is still there nonetheless.

I would also dispute that properties are absent from the contemporary mathematical representation of physics, although this does depend in part on what you mean by properties. But that's another story.

I discuss causality and quantum physics in chapter 11 in my book, from sections 11.4 to 11.6 in particular.

8. Will Worrock
Posted at 08:23:07 Tuesday July 25 2023

Similar Thomistic scientists?

Dr. Cundy, are there scientists who are similar to you in advocating Thomism in conjunction with modern science in the philosophical world?

9. Matthew
Posted at 18:32:07 Friday July 28 2023

Prospects of the MWI

Hello again Dr. Cundy! As expected, we are in agreement about the Everettian interpretation. But I do have some comments about the interpretation, and I'm curious what you think of them.

I think the best approach for explaining the Born rule within the many-worlds context is the "self-locating uncertainty" approach, an instance of which was developed by Sean Carroll and Charles Sebens in their 2018 paper. There are still difficulties - as a purely epistemological approach, it really only works by assuming that the "ontology" problem is already solved (which they admit). Even then, while it arguably succeeds at showing why there is the appearance of Born-rule probabilities as we live through the branching process (by showing that it makes sense to apportion our credences that way), it is still debatable whether it really solves all of the problems of probability. For example, it is unclear to me whether it can explain why we should expect to find ourselves on a branch where the Born rule has held in the distant past, instead of a low-amplitude branch with deviations from the Born rule. Both kinds of branches exist in the quantum state, there is no determinate ratio between the numbers of them to appeal to for a typicality argument, and no observers in the distant past with which to use the idea of self-locating uncertainty. (Though perhaps one could reference hypothetical observers?)

But to me the ontology problem is the more significant one. It occurs to me that in the Everettian model (where there is nothing other than the unitarily-evolving quantum state, and an appeal to emergence is made to explain what we observe), since decoherence is never exact and it seems branches can't be precisely defined, it is indeterminate how many "copies" of me exist. But that means it is indeterminate whether I exist, since I am just one of the copies. But surely this is absurd! In fact, maybe this criticism could extend to any reductive philosophy that treats macro-objects as "useful patterns".

Even if that is overcome, the best case scenario for MWI is one where you have a bunch of isolated, approximately localized wavepackets in configuration space or phase space, moving along approximately classical trajectories and occasionally splitting. Then you can coarse-grain, treat the wavepackets as points in the abstract space, and project each point to 3D space to get a recognizable representation of a physical universe. This is a reasonably clear way of deriving what is actually supposed to exist in the supposed many worlds contained in the quantum state. But the problem is that this best case scenario does not hold - instead of isolated wavepackets, it seems very likely (especially for chaotic systems!) that the wavepackets will be continuously smeared out all over the place, and there won't be anything like isolated worlds moving on approximately classical trajectories. With that in consideration, the project seems fairly hopeless.

I think there are a couple of ways the MWI can proceed, but all of them move away from being "just unitary quantum mechanics" and thus detract from the core appeal of MWI. One could add a precise branching structure (but then it is ontologically more parsimonious to suppose that only one of the branches is realized, turning it into a decoherence-based objective collapse theory). One could say there are a continuous infinity of real universes derived from the quantum state, so that rather than branching, they all evolve along trajectories that diverge as the quantum state splits (but then one could choose to reify only one of those trajectories instead of all infinitely many of them, turning it into a pilot-wave theory). Finally, one could posit a real multitude of universes that interact with each other in such a way that the quantum recipe effectively describes their behaviour (in a theory like this, it is the multitude of worlds and not the quantum state that is fundamental). This last option solves the ontology problem and (if there are only a finite number of universes) the probability problem as well, but is as much if not more in need of fleshing out than pilot-wave or spontaneous collapse theories.

Looking forward to your post on pilot-wave theory; best regards!

10. FM
Posted at 11:16:14 Tuesday August 15 2023

@ Will Worrock

Yes, Stardusty is simply the "latest" troll to come to the blog. Usually it's people who keep asking the same question that has been answered several times already and keep failing to understand the answers (or just keep misrepresenting them). Last time he was ranting about the First Way, IIRC, and I was thinking "he needs to go back and read the several posts Feser made that address these very comments".

I remember many through the years. I think Feser himself has mostly stopped engaging with them. Usually they give up and disappear after some time, although there was one, I forget the name, who actually eventually had his mind changed.

11. FM
Posted at 11:28:50 Tuesday August 15 2023

Re: Causation

Hi Dr. Cundy,

Frankly one of the most irritating things that seem to come from people like Sean Carroll and the atheist milieu quite recently, is an attempt to claim "causation is an emergent phenomenon, hence the idea of God (since He's the first cause) is impossible", which I admit really grinds my gears especially for the reasons you explained above.

Carroll & co. seem to reject time-directed efficient event causality (since the laws of physics are time-symmetric), which you also reject, but they don't seem to realize that causality is broader than the very narrow view they are trying to impose.

Frankly it feels more like they are trying to use the old tired trope that "Science^{TM}" has debunked God; as usual, such fallacious arguments only seem to debunk a very narrow view, or some particular ideas that some theists might have held, but not the views of theist thinkers in general (or Thomists in particular).

12. Will Worrock
Posted at 01:56:25 Saturday August 19 2023

@FM

Yes, I agree. Psyche makes it too toxic to hang around Feser's blog. I hope he doesn't come over here and make it bad here as well.



Post Comment:

Some html formatting is supported, such as <b> ... </b> for bold text, <em> ... </em> for italics, and <blockquote> ... </blockquote> for a quotation
All fields are optional
Comments are generally unmoderated, and only represent the views of the person who posted them.
I reserve the right to delete or edit spam messages, obscene language, or personal attacks.
However, that I do not delete such a message does not mean that I approve of the content.
It just means that I am a lazy little bugger who can't be bothered to police his own blog.
Weblinks are only published with moderator approval
Posts with links are only published with moderator approval (provide an email address to allow automatic approval)

Name:
Email:
Website:
Title:
Comment:
What is 4×8-1?