I am having a look at different philosophical interpretations of quantum physics. This is the third post in the series. The first post gave a general introduction to quantum wave mechanics and presented the Copenhagen interpretation. The second looked at spontaneous collapse models. Today it is the turn of the Everett interpretation.
The Everett, or many worlds, interpretation is one of the more popular interpretations of quantum physics, at least among physicists (and writers of science fiction). Its advocates are certainly very vocal, and seem to be particularly insistent that it is the only natural way of interpreting the theory and that other approaches, if they have any merit, collapse into it. The interpretation was first proposed in the 1950s, but it started to become popular in the 1980s and 1990s. There are several different models of the theory. To keep this post manageable, I am not going to discuss all of them, but I will focus on the model of David Wallace, with commentary by Tim Maudlin (a philosopher of physics whom I rate highly), and some of my own thoughts. Wallace was a physicist who turned philosopher. Why pick on him? Firstly, because I happen to have several of his works available for reference (although I will primarily use just one of them, his contribution to the Oxford Handbook of Philosophy of Physics). Secondly, he is one of the leading advocates for this interpretation, so a reasonable example to study. Thirdly, there is a personal connection, as he was a couple of years above me at university (albeit that I haven't interacted with him since that time). Obviously I can't do his view full justice in a short post (let alone the full scope of the Everett interpretation), so I recommend reading his work in detail for more information (and possibly more accurate information). I should also say that I don't keep close tabs on the literature on this topic, so what I present might be out of date.
So what is the many worlds interpretation? It holds that a quantum superposition does not represent our uncertain knowledge of the particle, nor a single "particle" spread out over space, but rather several different copies of the particle. These copies, after decoherence, do not interact with each other. Other particles which interact with the particle also branch into multiple copies, each of which only interacts with one copy of the particle. One way of thinking about this is to say that each copy exists in its own self-contained universe. So, when we take a quantum measurement, most interpretations suppose that we only get a single result, as naively implied by what we observe. In the Everett interpretation, every possible result occurs, but in different universes. (The term "universes" should carry the caveat that there are divergent views of how the multiplicity occurs and manifests itself; some call for literally different universes, while others are somewhat more subtle. I will use the expression as a convenient shorthand.) The reason we appear to observe a single result is that we are also quantum objects in a superposition. There are numerous copies of ourselves in different universes. So I might observe the result of the experiment as spin up, while my counterpart, created when I become entangled with the superposition, observes the result as spin down. Thus each version of me only observes a single result, even though both occur.
The Everett interpretation is not to be confused with the multiverse, the belief that there are multiple universes with different physical constants (arising from a string landscape, or different inflationary bubbles, or ...), which is sometimes offered as a response to the anthropic principle. The two ideas are independent of each other.
The claim is made that the Everett interpretation is just quantum mechanics interpreted in a traditionally realist fashion. While there are philosophical puzzles in how classical (non-quantum) theories are to be interpreted, it is agreed that there are no paradoxes. The objects of those theories are mathematical objects, which in some way (where the disagreement arises) represent the physical world. Different states represented in the mathematics represent different ways in which the world could be.
Quantum physics, on the other hand, is usually held to be different. Here one can have a particle in a superposition, ψ = α ψ1 + β ψ2, where ψ1 and ψ2 correspond to different states representing different possible measurement outcomes. For example, they might refer to two different possible locations of a quantum particle, with ψ1 representing that it is here and ψ2 representing that it is over there. (Wallace uses the example of live and dead cats, following Schroedinger.) α and β are constants whose modulus squared gives the probability that each outcome is observed. The problem is that the superposition state, especially when applied to macroscopic objects, is difficult to interpret. Does it represent a cat that is both alive and dead at the same time? Neither alive nor dead? It is not something which makes much sense.
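For reference, the relations just described are the standard textbook statement, written out explicitly:

```latex
% Two-component superposition and the Born rule
\psi = \alpha\,\psi_1 + \beta\,\psi_2 ,
\qquad
P(\psi_1) = |\alpha|^2 ,\quad
P(\psi_2) = |\beta|^2 ,\quad
|\alpha|^2 + |\beta|^2 = 1 .
```

The normalisation condition ensures that the two probabilities exhaust the possibilities.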
So what are the options? One can try to change the philosophy of science. A more instrumentalist approach, such as the Copenhagen interpretation, is not attractive to philosophers, who want to know what the underlying beables are. An epistemic approach in effect denies that the wavefunction is a representation of reality, breaking a link that was crucial in the philosophy of pre-quantum physics. The alternative is to change the physics to make it something more palatable. Examples of this are the spontaneous collapse or pilot wave interpretations. These are liked by philosophers, but less so by physicists, who are aware of the difficulties in fine-tuning them so that they match empirical observation. (I discussed the spontaneous collapse models in the previous post; I will look at the pilot wave models presently. In summary -- paraphrasing Wallace here, although I agree with him -- the pilot-wave model was developed out of non-relativistic wave mechanics, and it has not yet been shown that it can be adapted to reproduce the standard model of particle physics as a relativistic quantum field theory, which most other interpretations have no problem with.)
So what are the alternatives? Wallace suggests that in the Everett interpretation, we need neither change the philosophy from that used to understand pre-quantum physics, nor change the physics away from the standard model. The flaw, he thinks, in the reasoning that led us to a change-the-physics or change-the-philosophy approach was in supposing that indefinite states in the mathematical description imply indefinite states in reality. He uses the analogy of a classical field, which can be written as the sum of two appropriately weighted components. One wouldn't say that the field is in an indefinite state, a superposition between the two components. Instead, one argues that it describes two pulses, in different locations and with different momenta. Superpositions in classical physics refer to some sort of multiplicity. So why not also in quantum physics?
Here I have to interrupt Wallace's presentation of the Everett interpretation, and make an observation. There is an important difference between the classical and quantum state. In classical physics, when we sum up two parts of the field, the weights given to them behave according to the rules of probabilities or frequencies. It is thus natural to interpret them as frequencies, and consequently in terms of a multiplicity of states. In quantum physics, one can have the same sort of addition of states as seen in classical physics -- for example, after particles have decohered, or when the wavefunction is mixed between different states representing different types of particle. Here one can interpret the wavefunction as representing different particles, but that's not controversial, as we observe multiple particles. However, in the sort of superposition Wallace is referring to, the weights are amplitudes. Although related to probabilities (via Born's rule), they do not obey the same mathematical rules as probabilities (or frequencies), and therefore it is rather rash to link them to multiplicities. The analogy he uses thus breaks down at the very point on which he relies to draw his conclusion. I therefore think that this analogy is more misleading than useful.
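To make the disanalogy concrete: when the weights of two exclusive classical alternatives are combined, they simply add; but when two quantum amplitudes contribute to the same outcome, the Born rule applies to their sum, producing a cross term with no counterpart in probability theory:

```latex
% Classical weights add; quantum amplitudes interfere
P = |\alpha + \beta|^2
  = |\alpha|^2 + |\beta|^2 + 2\,\mathrm{Re}(\alpha \beta^*) .
```

The interference term 2 Re(αβ*) is precisely the information that a bare count of universes cannot carry.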
So back to Wallace's presentation.
We say of a macroscopic object that it is described by a given Hilbert space, appropriate for that object. Some states in that Hilbert space are mathematically definite (i.e. correspond to things that we actually observe, such as a particle being in a given location), but others are indefinite, and so represent things we don't observe (such as a superposition between two different particle locations). To say that the Hilbert space represents the possible states of the object thus contradicts observation: it also contains states which we don't, or even can't, observe. There is also the problem that, for compound objects, the same Hilbert space will describe entirely different objects which we would normally consider distinct, such as cats and dogs. So it is misleading to say that the Hilbert space represents that particular object. It is also misleading to say that the indefinite state is a quantum particle in a superposed state of being here and being over there. Better to say that the state is a superposition of a quantum particle that is here and a quantum particle that is over there.
Do we observe such superpositions? No, but then the universe is a big place and it would be foolish to say that we observe all of it. A theory that claims that microscopic objects are in indefinite states seems to make a mockery of our usual understanding. But saying that there are multiple such objects does not. When the superposition interacts with its surroundings, it rapidly becomes entangled with them, so we do not just have the superposition of the single particle, but an extended superposition of much more than that. That represents some worlds where the particle is here and other worlds where it is over there. These worlds are in superposition with each other, and if we follow through with the idea that superposition implies multiplicity, then there are multiple worlds.
So, the Everett interpretation is dependent on two assumptions: the physical postulate that the physical world is represented by a unitarily evolving quantum state, and a philosophical claim that if the quantum state is to be interpreted realistically, then a superposition must be understood as describing multiplicity.
There are no additional physical postulates describing a division into different worlds: just quantum mechanical wavefunctions (or Fock states) evolving under the Schroedinger equation. It is then claimed that it makes no sense to even think of the Everett interpretation as one of many interpretations: it is just quantum mechanics itself, interpreted as we have always interpreted physical theories. As such, the interpretation is tightly constrained. If there are any problems in it, they can be resolved by hard study of quantum physics itself.
Born's rule and probability
I will use the language of "branching into different universes" to describe what happens in the Everett interpretation when a superposition in the wavefunction is created. This terminology is perhaps misleading, since various supporters of this interpretation understand it differently: some incorporate the idea of a "branching" event, while others are a bit more nuanced. However, all agree that there is some transition that changes a single eigenstate, representing a single quantum particle, into the multiplicity of particles implied (in this interpretation) by a superposition, and I will use "branching" to denote that change, whatever it is.
One of the most important criticisms of the Everett interpretation concerns how it reproduces the Born rule, and the notion of probability. The simplistic way of addressing this (and this is a straw man, as it is not what those who propose the Everett interpretation support, but I need to get this discussion out of the way first to dispel an illusion) is to suppose that the number of universes generated when a superposition is created is proportional to the standard quantum probability for that particular outcome. So, for example, if you measure the spin of a spin-half fermion, then in the standard calculation spin up is measured 50% of the time, and spin down the other 50% of the time. One would then think that half of the universes are spin up and the other half spin down, and that explains that. There is then a 50% chance that the "you" that encountered that particular measurement is in one set of universes rather than the other, and that explains the probability.
Except, this does not work, for various reasons. Firstly, many probabilities in quantum physics are irrational numbers. One would need an infinite number of branched universes so that the relative frequencies would be precise fractions of the number of universes.
Secondly, there is the issue that the branching into universes is generated when the wavefunction enters a superposition of states (under the assumption that superposition implies multiplicity), but distinct probabilities don't arise until there is decoherence. After decoherence, we can talk about probabilities for the various quantum states, and parametrising the uncertainty in quantum physics in terms of counting universes would make sense, barring the caveat of the third point below. Before decoherence, we parametrise the superposition in terms of amplitudes. Probabilities map to frequencies, and can be used to predict frequency distributions. Amplitudes don't and can't, at least not directly, and not without losing information crucial to the parametrisation of the quantum state. If the uncertainty in quantum physics arises from counting universes, then that can't be captured accurately in an amplitude. The different universes, recall, are a way of describing the multiplicities in the different physical beables. In reality these universes, and the particles in them, contain all the information contained within the superposition. But the amplitude contains additional information beyond the simple counting of universes. This information is important when predicting interference effects. When two wavefronts (each with its own superposition) coincide, the total probability is generated from the sum of the amplitudes. If the superposition contained within each wavefront implies a multiplicity of universes, then when there is interference these universes would have to be either multiplied or destroyed (depending on whether the interference is constructive or destructive) as the two wavefronts pass through each other. It just turns into a huge mess.
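The interference point can be illustrated numerically. This is a toy sketch (the amplitude values are illustrative, not drawn from any particular experiment): two wavefronts each carry an amplitude of 1/√2 for the particle being found at their meeting point, and the combined result depends on their relative phase in a way that no frequency count could reproduce:

```python
import math

# Each wavefront carries amplitude 1/sqrt(2) for the particle being
# found at the meeting point (illustrative values only).
a1 = 1 / math.sqrt(2)
a2_in_phase = 1 / math.sqrt(2)       # constructive interference
a2_out_of_phase = -1 / math.sqrt(2)  # destructive interference

# If the weights were probabilities, they would just add, regardless of phase:
naive_sum = a1**2 + a2_in_phase**2   # ~1.0 either way

# Quantum mechanics adds the amplitudes first, then squares (Born rule),
# so the two phases give completely different relative intensities:
constructive = abs(a1 + a2_in_phase)**2      # ~2.0
destructive = abs(a1 + a2_out_of_phase)**2   # 0.0

print(naive_sum, constructive, destructive)
```

The constructive value exceeds 1 because these are relative intensities rather than normalised probabilities; the point is only that the phase information, which matters physically, is lost in any count of universes.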
Thirdly, decoherence is only an approximate process. It largely picks the basis, removing superpositions in that basis, but not quite completely. So even after decoherence, there is still the problem that the superposition requires amplitudes rather than probabilities to parametrise it. The amplitudes for those states not in the basis, of course, are so small we normally would not bother with them; certainly they would never be detected in an experiment. But they still exist, and thus pose a problem for any philosophy of physics which requires that they do not exist.
Thus we need something a bit more sophisticated to correctly account for Born's rule in the many worlds interpretation. Rather than counting branches, we need to assign an amplitude to each branch. Which is, of course, what happens in standard quantum formalism. So if there are only two states in the superposition, then there are only two branches, each with its own amplitude. But then, what does that amplitude physically represent? How do we relate it to experimental results?
In the Everett interpretation, there is nothing but the dynamics of the quantum state, and this dynamics is deterministic. This determinism is also true for the Pilot-Wave interpretations, but there the uncertainty arises because there are underlying hidden variables, and we use it to parametrise our lack of knowledge of those variables. In the Everett interpretation, however, everything is out in the open. There is only the wavefunction, and in principle, that can be fully known (as long as we don't try to extract knowledge from non-commuting bases). Traditionally, probability is used to parametrise uncertainty. That uncertainty can either arise due to our lack of knowledge of the actual physical state (as in the Pilot Wave interpretation, Quantum Bayesianism, and others); or because the dynamics of the system is indeterminate, and there are multiple possible outcomes from an initial state, only one of which would become actual (as in collapse interpretations and others). Or one can combine these two sources of uncertainty. But in the Everett interpretation, there is only one outcome of the system as a whole (even if that involves multiple universes), and there are (in principle) no hidden variables. So how does the notion of probability arise?
One can, perhaps, argue that it is uncertain at the outset which branch of the universe we ourselves will end up in. There is a 50% chance that we would find ourselves in a spin up branch, and a 50% chance that we would find ourselves in a spin down branch. Except, this is not how the Everett interpretation works. We end up in both branches, with one copy of us measuring spin up, and the other copy measuring spin down. So what does it mean to say that there is a 50% chance that we would measure one particular outcome?
The purpose of Born's rule in quantum physics is to predict frequency distributions, in order to compare against experiment. It is a crucial part of the quantum recipe. Without it we cannot map theory to experiment. But the standard understanding of the Born rule cannot be mapped directly to the Everett interpretation. So there are two problems: how do we derive the Born rule (needed to reproduce probabilities to compare against experimentally observed frequencies) from the principles behind the Everett interpretation (where the only things are the Schroedinger evolution of the wavefunction and the postulate that a superposition implies a multiplicity of states); and secondly what the notion of probability itself means in the context of the Everett interpretation.
Some have tried to interpret this through decision theory. Here we should base our actions on maximising the utility of the action. So, suppose that we have two boxes, A and B, and we are forced to put our cat into one of them. There is a radioactive trigger, which 2/3 of the time causes poison to be released into box A, and 1/3 of the time into box B. Clearly the right thing to do is to put the cat into box B: there is a greater chance that it will survive. Even should the cat die, we can at least console ourselves that we made the right choice.
In the many worlds interpretation, what this implies is that a superposition is generated, which means that multiple cats come into being, one of which will survive while the other dies. This happens no matter which box you put the cat in. Obviously, the squared amplitude associated with each branch is larger in one case than in the other. But why does that concern us? We still have one cat living and the other dying, no matter what we do. We ourselves would, of course, also become entangled with the quantum state when we look at the cat, and one version of ourselves would be happy and the other sad. After decoherence, the branches evolve independently, which means that the squared amplitude assigned to the version of us in each branch is meaningless to that version of ourselves. So why do we invoke it at all? The challenge for the Everettians is to derive from their philosophy a principle of decision making that reproduces the straightforward understanding of other interpretations.
One solution is to suppose that instead of two branches, there are in fact three. In two of these the cat in box A dies, and in the other one the cat in box B dies. We are then justified in putting the cat in box B, because twice as many of its successors will survive. There are ways in which one can set up the experiment so that, instead of a single quantum event or superposition determining the odds, there are multiple such events. One can, for example, have a system where there is a radioactive particle with an amplitude of √(2/3) of decaying in a particular period of time. If it does so, then its decay product is also radioactive, with an amplitude of √(1/2) of decaying in the given time window. The poison is only released into box B if this second particle decays. There are thus three branches, each with an amplitude of √(1/3), two of which lead to the cat in box A being killed, and one leading to the cat in box B being killed. We now have a symmetry between the branches, and that can be used to count how many cats die when we make a particular choice.
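The amplitudes in this two-stage set-up multiply along each history, and the arithmetic checks out (the assignment of the two non-B branches to box A follows from the set-up as described above):

```latex
% Amplitudes multiply along each history; each squared amplitude is 1/3
\sqrt{2/3}\cdot\sqrt{1/2} = \sqrt{1/3} \quad \text{(decay, then decay: poison in B)}
\sqrt{2/3}\cdot\sqrt{1/2} = \sqrt{1/3} \quad \text{(decay, then no decay: poison in A)}
\sqrt{1/3} \hphantom{{}=\sqrt{1/3}} \quad \text{(no decay: poison in A)}
```

Each branch thus carries squared amplitude 1/3, and two of the three poison box A, recovering the original 2/3 : 1/3 odds.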
So the idea is that, when we have a decision to make which relies on branches with different squared amplitudes, we restate the problem so that there are multiple branches all with the same squared amplitude. The branches are still parametrised by an amplitude, so one doesn't lose the information required to describe interference effects, and one avoids the problems associated with counting branches that I described above.
There are a few objections to this. Firstly, is a system which splits the wavefunction in this way functionally equivalent to one in which there are naturally just two branches, with amplitudes √(1/3) and √(2/3)? It is not so clear that it is. The approach relies on a principle of indifference between the two experimental set-ups. But is that justified?
Secondly, what about those amplitudes which don't nicely divide into a finite number of equal parts? Clearly this method would not then work.
Thirdly, the approach is based on decision theory: the probabilities simply affect how we are supposed to act in a given circumstance. There is, however, a jump between this and the underlying ontology. If we are supposed to pretend that there are three branches to the universe when in practice there are only two, our actions might not lead to the best results. Consider the case where we split the √(1/3) branch of the wavefunction, so there are now three branches in reality: one with an amplitude of √(2/3) where the cat lives, and two with an amplitude of √(1/6) where the cat dies. The method would proceed by splitting the √(2/3) branch into four, so that all the branches have the same amplitude and the living cats still outnumber the dead ones 2:1. But if this division is only in our heads rather than in reality (a means to make informed decisions), then in practice acting this way would lead to two of the branch cats dying and only one living. If the split of the √(2/3) branch is in reality rather than just in our heads, then we are back to the "counting branches" model of the Everett interpretation which I criticised above.
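As a consistency check on the arithmetic in this scenario: both the real branching and the imagined subdivision conserve total squared amplitude, yet they count branches very differently:

```latex
% Reality: three branches (one living, two dying)
(\sqrt{2/3})^2 + 2\,(\sqrt{1/6})^2 = \tfrac{2}{3} + \tfrac{1}{3} = 1
% Imagined subdivision: six equal branches (four living, two dying)
4\,(\sqrt{1/6})^2 + 2\,(\sqrt{1/6})^2 = \tfrac{4}{6} + \tfrac{2}{6} = 1
```

Counting real branches gives odds of 2 : 1 in favour of the cat dying; counting imagined branches gives 2 : 1 in favour of it living. The two counts cannot both correctly guide action.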
Indeed, that it is based on decision theory causes another problem (which I have taken from Maudlin). Suppose you are in a restaurant and cannot decide between two desserts. The solution, in the Everett interpretation, is simple: make your decision based on a quantum coin toss. Then you will, in a sense, have both desserts, as one branch of you chooses one and the other version takes the other. Of course, the quantum coin toss itself comes at a cost -- such experiments are expensive to perform -- but maybe the benefit of having both desserts makes up for it. In other interpretations, however, where one can only eat one or the other dessert, nobody would act in this way. It is thus not clear that explaining the probabilities in terms of decision theory will always lead to the same actions as those of someone who believes in the Copenhagen interpretation. Other factors differ as well -- the anxiety one feels about whether or not one's cat is going to die, for example, while the Everett advocate who kills his cat can still console himself that his counterparts have a living cat.
So the idea that we can think of "probabilities" as guiding our decisions in the Everett interpretation fails both because it does not lead to the same actions as in the Copenhagen interpretation, and because it does not explain the underlying philosophical problem of what the Born rule probabilities actually represent, and why we need to use the Born rule (rather than some other mechanism) to reproduce experimental results. Nor does it address all the uses to which we put probability, for example the use of Bayes' theorem to update our uncertainty about some unknown data.
My first criticism concerns the motivation given for the Everett interpretation. It is claimed that it is just quantum physics interpreted as we interpret classical physics. It also relies on the postulate that a quantum superposition represents multiplicity. The problem I have with these two statements is that a quantum amplitude is something completely alien to classical physics. One cannot then interpret it in the way that one interprets classical physics.
There might be an objection to my last statement: what about waves in classical physics? These are certainly similar to the quantum mechanical wavefunction in certain respects. The closest analogue is perhaps a wave in a classical electromagnetic field. This too is described by amplitudes; there are interference effects, and so on. The mathematical wave equation for an electromagnetic field differs from the Schroedinger, Klein-Gordon or Dirac equations used in wave mechanics (and once we get to field theory, things are more complex still, as the evolution of the Fock state is based on the time-ordered exponential of an operator rather than a straightforward differential equation). However, the most important difference is that the classical electromagnetic field is a field and not a particle. It does not come in discrete lumps. One can thus have half of the intensity in one place and the other half in another, in a way that is impossible for particles. We also observe both of those wave-packets in the same universe at the same time, while in the Everett interpretation only one of them is observable to each particular version of ourselves. In the classical analogy, every version of ourselves observes every peak in the field intensity. The analogy between a wave in a classical field and a quantum particle breaks down. I thus cannot see how one can use that analogy to justify the idea that quantum superposition implies multiplicity.
The second part of the justification is that the Everett interpretation allows us to come closest in quantum physics of mirroring the standard philosophy of classical physics; we have to change the least amount. Even if that is true, then why would we necessarily feel the need to keep things the same? What is wrong with having an entirely different philosophical approach? We should not set the philosophical views of Galileo, Kepler, Boyle and Newton in stone as though they were divine writ. They could have been wrong, and if there is a chance that they were, it is right for us to explore alternatives.
The question is how quantum superpositions can be viewed as multiplicities. One way of answering this is to say that they cannot, which implies either that one needs to abandon the Everett interpretation, or to accept that it needs something in addition to the evolution of the wavefunction, such as a physical branching between universes. In other words, one needs to expand the physics in order to save the interpretation. These modification strategies have fallen into two categories. The many-exact-worlds theories add the existence of worlds to the state vector: when there is a superposition, the world literally splits into different realities. In that case one needs to suppose a branching mechanism that generates these realities in addition to the Schroedinger evolution of the wavefunction. The alternative is the many-minds theory, in which the multiplicity is illusory. The observer becomes entangled with the quantum superposition; each part of that superposition only sees one outcome, but in reality they are all present as part of the one single wavefunction. So, instead of a basis spanning many exact worlds, we have a basis spanning many different consciousnesses. Both of these approaches have fallen out of favour. Many-minds approaches tend to be committed to some form of Cartesian dualism -- if there is a fundamental law that consciousness is associated with a given state, then there is no hope of a non-circular explanation of how consciousness arises from fundamental physics. Equally, the many-exact-worlds scenario has similar difficulties related to how macroscopic objects fit into these different worlds. If physics presupposes the existence of these worlds, then the existence of such worlds cannot be derived from fundamental physics.
Wallace also states that both approaches undermine the reason for accepting the Everett interpretation: that it just describes quantum physics, without adding any additional structures. Wallace instead falls back on an appeal to decoherence, which he states allows a clear definition of the many different worlds. After decoherence, interference effects are suppressed, and there is a clear distinction between the worlds. The objections to this are, firstly, that decoherence is not an exact process, and secondly, that it does not provide any answer as to how we are to think of a quantum superposition before it has decohered. Either multiple worlds or multiple minds are part of our ontology (in which case decoherence is incapable of defining them), or they do not really exist (in which case decoherence is not an explanation for them). Wallace tries to circumvent this dilemma by saying that what is real need not be fundamental, appealing to the ideas behind emergence.
There is also the problem of local beables. If all that exists is the quantum state (which is spread across different branches of the universe), then how does this relate to the macroscopic objects which we see around us? In Aristotelian terms, the quantum state can describe the form (albeit that in the Everett interpretation the idea that superposition implies multiplicity complicates this), but what about the matter, the stuff that makes existence concrete? The beables would be represented by the sum total of all the particles in all the branches. But clearly this framework does not lead to local beables. The existence of a particle in one branch implies the existence of the other part of the superposition; the two cannot be separated. So a single branch of the universe is not local in the sense of being independent of what is happening elsewhere in the universe. And the superposition as a whole is not localised either.
Wallace tries to consider macro-objects as patterns, and the existence of a pattern as a real thing depends on the usefulness -- in particular the explanatory power and predictive reliability -- of theories which admit that pattern in their ontology. He uses examples of temperature and phonons as emergent objects which are useful in explaining phenomena. Are quasi-particles such as phonons real? They can be created and destroyed, they can be scattered, they can be detected. We have no more evidence than this that real particles exist. And yet quasi-particles consist only of a pattern within the constituents of the solid; they are usually invoked to describe the vibrational modes of the atoms.
Wallace claims that the branches which appear after decoherence are the same sort of things as quasi-particles. They are the sort of entities that in other areas of physics we take seriously. They are emergent, robust structures in the quantum state, and as such he states that we ought to take them as ontologically seriously as we do quasi-particles or macroscopic objects.
Except that I am not sure the analogy is valid. A phonon emerges from the process of constructing an effective field theory for a solid. Effectively, it arises from changing the basis from one describing electrons, quarks, and so on, to one more suitable for capturing the physics of the solid crystal. This change of basis is also the link between the microscopic world of quantum physics and the macroscopic world. However, the different "branches" after decoherence do not arise through this mechanism. The effects of quasi-particles can be observed, and they influence other observed particles. After decoherence, that is not true of the various terms in a quantum superposition. Quasi-particles can, of course, themselves be in superposition. Thus the branches in quantum physics are not emergent in the sense in which we usually think about emergence.
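To make the contrast concrete, here is the standard textbook sketch of decoherence (my own illustration, not Wallace's notation). A system in superposition becomes entangled with environment states, and once those environment states are effectively orthogonal the interference terms are suppressed:

```latex
% System entangles with environment states |E_i>:
\bigl(\alpha|\uparrow\rangle + \beta|\downarrow\rangle\bigr)\,|E_0\rangle
  \;\longrightarrow\;
  \alpha\,|\uparrow\rangle|E_\uparrow\rangle
  + \beta\,|\downarrow\rangle|E_\downarrow\rangle

% With \langle E_\uparrow | E_\downarrow \rangle \approx 0, the reduced
% density matrix of the system loses its off-diagonal (interference) terms:
\rho_{\mathrm{sys}} = \operatorname{Tr}_E\,|\Psi\rangle\langle\Psi|
  \;\approx\; |\alpha|^2\,|\uparrow\rangle\langle\uparrow|
            + |\beta|^2\,|\downarrow\rangle\langle\downarrow|
```

The two terms then evolve independently for all practical purposes, which is what the Everettian calls branching. The point above is that this is a suppression of interference between terms in one state, not the change of basis by which quasi-particles such as phonons are constructed.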
I also have difficulties with the way that Wallace (following Dennett) describes macroscopic objects as merely patterns which have explanatory power. This seems to confuse the theory with reality. Obviously, as an Aristotelian, I believe that there is some sense in which the structures in the theory correspond to the form, or some of the potentia within the form, of the physical object, but that does not mean that we should confuse the theory with the physical object itself. After all, in a solid, the physical substance is the solid itself. Phonons are part of our representation of the form of the solid. There is a difference between the representation of the form and the form itself, but even if we neglect this, the thing that exists is the solid object, not the patterns that at best subsist in it. And, of course, the substance is the union of form and matter, with the matter neither represented in nor accessible to the theoretical structure. We can think of an isolated electron, or, in principle at least, an isolated quark (at least at those temperatures and chemical potentials where there is a quark-gluon plasma rather than confined quarks). But once these are bound into a larger substance, they lose their identity in favour of the substance they subsist in. Phonons have no identity outside the theoretical description of the solid substance. Do phonons exist? Not in the same sense that a free electron or a cat exists: those are substances in their own right, whereas phonons are part of the framework used to represent the internal dynamics of a substance. It is an error to confuse the mathematical description with reality, and Wallace's interpretation relies on this error.
Wallace claims that the Everett interpretation is the natural way to interpret quantum physics while preserving realism and as much as possible of the philosophy behind classical physics. This, of course, raises the question of why we would want, or feel the need, to preserve that philosophy.
Any viable interpretation of quantum physics needs to do two things. Firstly, it needs to reproduce the standard model of particle physics, as described through the instrumentalist version of the Copenhagen interpretation, along with an explanation of why that works so well. Or, if it predicts different physical outcomes, it needs experimental evidence to show that it, rather than the accepted theory, is correct. Secondly, it needs to leave no philosophical loose ends.
On the second point, the Everett interpretation has several issues. Wallace's presentation of it relies on two analogies. Firstly, there is an analogy with wavepackets in classical physics, to justify the core assumption that superposition implies multiplicity. Secondly, there is an analogy with the idea of emergent physics, to understand the local beables of the theory and to argue that the unobserved branches of a decohered superposition should be regarded as ontologically real. In any case, an argument from analogy is invariably a weak argument. But that is particularly true in these two cases, where there are significant dissimilarities between the examples that Wallace points to and the quantum wavefunctions used in the Everett interpretation, and these differences lie in precisely the areas which would allow us to draw conclusions from the analogous examples. Wallace himself admits that alternative approaches to extracting local beables, such as many minds and many worlds, have enough problems to make them non-viable, because they require an additional and unmotivated branching mechanism, which leaves the interpretation no better off than the non-instrumentalist Copenhagen interpretation with its additional mechanism of wavefunction collapse.
As far as reproducing the physics goes, the Everett interpretation has a major problem in reproducing the Born rule, which is absolutely fundamental to how we compare the physical theory to experimental results. Wallace tries to get around this by appealing to decision theory. But, as I have argued, this approach has its problems. The simple method of counting branches to give a frequency distribution fails, since the branches are associated with amplitudes rather than probabilities, and so do not map neatly onto a frequency distribution. Assigning an amplitude to a single branch for each term in the superposition, rather than having the number of branches represent the frequency distribution, resolves this problem and more closely matches the physics, but it instead leads to the question of how to interpret those amplitudes in terms of a probability, and consequently a prediction for an observed frequency distribution.
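As a concrete illustration of the counting problem (a standard example, not one taken from Wallace's text), consider a single spin measured in an unequally weighted superposition:

```latex
|\psi\rangle \;=\; \sqrt{\tfrac{1}{3}}\,|\uparrow\rangle
              \;+\; \sqrt{\tfrac{2}{3}}\,|\downarrow\rangle

% The Born rule, which experiment confirms, gives
P(\uparrow) = \Bigl|\sqrt{\tfrac{1}{3}}\Bigr|^2 = \tfrac{1}{3},
\qquad
P(\downarrow) = \Bigl|\sqrt{\tfrac{2}{3}}\Bigr|^2 = \tfrac{2}{3}

% Naive branch counting sees one spin-up branch and one spin-down
% branch after decoherence, and so assigns each probability 1/2,
% in conflict with the observed frequencies.
```

It is to bridge this gap between branch structure and observed frequencies that Wallace invokes decision theory in the first place.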
Then, of course, after decoherence, the alternative branches are entirely unobservable. Thus the Everett interpretation cannot, even if it overcomes its problems, be confirmed to be correct. Admittedly, this is also true of most other interpretations (excluding those which deviate from the physics), but it ought to make advocates of the interpretation considerably more humble than they often appear to be.
I have drawn on several sources for this post, but the two most important are Maudlin's work on the philosophy of quantum physics, and Wallace's contribution to the Oxford Handbook of the Philosophy of Physics.