The Quantum Thomist

Musings about quantum physics, classical philosophy, and the connection between the two.


The Philosophy of Quantum Physics 2: spontaneous collapse models
Last modified on Sun May 21 16:28:12 2023


Introduction

I am having a look at different philosophical interpretations of quantum physics. This is the second post in the series. The first post gave a general introduction to quantum wave mechanics, and presented the Copenhagen interpretation. In this post, I intend to look at spontaneous collapse models, and in particular the GRW model.

The GRW model is a development of some of the Copenhagen interpretations that attempts to resolve problems around wavefunction collapse. In the Copenhagen interpretation, wavefunction collapse occurs at measurement. The question is what is so special about the measurement process that it induces the collapse. Indeed, what exactly is meant by "measurement," and is there a definition that isn't just circular or ad hoc? The spontaneous collapse models circumvent this by saying that collapse happens automatically and spontaneously, regardless of any external influence on the quantum system.

Dynamics

The two questions to be answered concern the dynamics of the theory, and what the underlying physical states are. The collapse theories concentrate initially on the first question.

Like the Copenhagen interpretations, the spontaneous collapse models suppose that the fundamental quantum state, the wavefunction in single particle quantum mechanics, is informationally complete -- one can extract every possible physical fact from the wavefunction. Unlike the Copenhagen interpretations, however, they deny that the quantum state evolves (between measurements) solely according to a deterministic equation: sometimes it also obeys an indeterministic and non-linear collapse mechanism. Whereas for the Copenhagen interpretations this collapse is triggered by a particular event, for the spontaneous collapse models it just happens spontaneously.

In the GRW theory, collapses occur with a fixed probability per unit time for each particle in the universe. Since measurement is not invoked, there is no need to carefully define the concept.

Suppose, for example, we have a quantum particle which can either travel to the left or to the right. In terms of the wavefunction, this would be represented by two wave packets, one travelling over time in one direction and the other in the other. These wave packets will not be at precise locations at each moment of time: something about the momentum is determined, and therefore, by the uncertainty principle, there will be some indeterminacy in the exact location. Both the location and momentum wave-packets will be smeared out over a small region, but each wave packet will have a well defined average location and momentum. The wavefunction itself is a superposition of the left-travelling and right-travelling wave-packets. As long as the wavefunction evolves only under Schroedinger's equation, this superposition will endure, and any symmetry between the left and right travelling parts of the wave-packet in the initial state will be maintained.
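To make this concrete, here is a minimal numerical sketch (my own illustration in Python, not anything from the GRW literature) of such a superposition: two Gaussian wave packets with opposite average momenta on a one-dimensional grid. The grid, packet widths, separations and momenta are all arbitrary choices made purely for illustration.

import numpy as np

# One-dimensional grid in arbitrary units.
x = np.linspace(-50.0, 50.0, 1024)
dx = x[1] - x[0]

def packet(x, x0, k0, width=2.0):
    """A Gaussian wave packet centred at x0 with average momentum k0."""
    return np.exp(-(x - x0)**2 / (4 * width**2)) * np.exp(1j * k0 * x)

# Superposition of a right-travelling packet on the left and a left-travelling
# packet on the right: neither position nor momentum has a single definite value,
# but each packet has a well defined average position and momentum.
psi = packet(x, -10.0, +1.5) + packet(x, +10.0, -1.5)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalise to unit total probability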

Since the collapse theory is committed to the quantum state containing the complete information concerning the particle, one cannot give a definite answer to the question of whether the electron is on the left or the right. One can say that it is on both left and right, or neither, or that the question is nonsensical, but not that it is either left or right. We know, however, that if we were to measure the location of the particle, then we would always get a definite answer. If we perform a sufficiently large sample of experiments, then half of the time the particle will be found on the left and half of the time on the right; but on every occasion there is a definite result. Yet on this picture the measurement does not and cannot reveal a pre-existing fact about the particle. The measurement does not actually measure the relevant properties of the particle as it had been evolving under Schroedinger's equation; it just makes something up and applies it to the particle. If no measurement occurs, then the electron's wavefunction will just continue evolving indefinitely, with the particle never finding out whether it is in fact moving to the left or to the right.

In the spontaneous collapse theory, on the other hand, the above picture is false. Given long enough, it is inevitable that a wavefunction collapse will occur. Once it occurs, the electron will be found in a particular set of states and thus will be travelling in a particular direction. How long this takes on average is a free parameter of the model, and it needs to be tuned. Make it too short, and the collapses will be too frequent and (for example) destroy the interference effects seen in quantum mechanical experiments -- which depend on the wavefunction being spread out and not in a definite location. Make it too long, and there is effectively no collapse at all, and again that is inconsistent with experiment, since we do observe particles in definite locations. The time suggested in the original article was a collapse occurring on average about once every hundred million years for each particle.

Obviously, this is a very long time. But the point is that it is for each individual particle. When one has a large number of particles, it is inevitable that some collapse occurs within a short time period. If those particles are entangled together, then a collapse in one of them will trigger a collapse in all. For an isolated quantum particle, the collapse period is very long. But once it becomes entangled in a system with numerous particles, such as a measuring device (which in this case just means a large collection of particles attached to some macroscopic dial which we can observe), a collapse within the time period that the particle interacts with that device becomes almost inevitable. The collapse of any one particle will have the effect of localising the wavefunction and selecting one set of states in the superposition rather than the other.
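As a rough back-of-envelope sketch of why this matters (the particle numbers below are my own round figures; the rate is the one quoted above), the expected waiting time before the first collapse drops from cosmological to microscopic timescales once many particles are entangled together:

# Expected waiting time before the first spontaneous collapse in a system of N
# entangled particles, assuming each particle collapses independently at the
# per-particle rate quoted above (about once per hundred million years).
seconds_per_year = 3.15e7
rate_per_particle = 1.0 / (1e8 * seconds_per_year)     # roughly 3e-16 collapses per second

for n_particles in (1, 1e10, 1e23):                    # lone particle ... macroscopic device
    mean_wait = 1.0 / (rate_per_particle * n_particles)
    print(f"N = {n_particles:.0e}: mean wait of about {mean_wait:.1e} seconds")
# N = 1:     ~3e+15 s, i.e. about a hundred million years
# N = 1e23:  ~3e-08 s, effectively instantaneous on laboratory timescales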

The next question is which basis the system should collapse into. In the Copenhagen interpretations, this is determined by the measuring device itself. In the spontaneous collapse theory, a different answer is needed. The idea is that the wavefunction is multiplied by a Gaussian filter of small width. This enhances the wavefunction around the centre of the filter, and suppresses it away from that centre. The effect is to localise the wavefunction: not completely, which would cause problems for the wavefunction in the momentum basis, but to bunch it up into a small region. The location of the centre of the filter is chosen at random, with a probability determined by the square of the wavefunction, giving us Born's rule (this is over-simplified: there are a few mathematical details I have skipped over, but it is close enough to provide a general understanding). The effect of this is that if the wavefunction is already reasonably well localised (with a width much smaller than that of the Gaussian filter), then it will be largely unchanged by the collapse and Schroedinger evolution will continue essentially unchanged. If, on the other hand, it is not well localised, then the wavefunction will change dramatically. In the example of the either left-travelling or right-travelling particle, one of those solutions will be strongly suppressed, and the particle will be found either on the left or on the right.
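A minimal sketch of a single collapse along these lines, continuing the Python illustration above (the filter width, the discretised grid, and the simplified sampling rule for the centre are my own choices, not the exact GRW prescription):

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-50.0, 50.0, 1024)
dx = x[1] - x[0]
sigma = 1.0                                    # assumed filter width, arbitrary units

# The same left/right superposition as in the earlier sketch.
psi = (np.exp(-(x + 10.0)**2 / 16) * np.exp(+1.5j * x)
       + np.exp(-(x - 10.0)**2 / 16) * np.exp(-1.5j * x))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Choose the centre of the Gaussian filter at random.  The probability of each
# candidate centre is the squared norm of the filtered wavefunction, which is
# (roughly) the Born rule smeared over the filter width.
filters = np.exp(-(x[:, None] - x[None, :])**2 / (4 * sigma**2))   # one row per candidate centre
prob = np.sum(np.abs(psi[None, :] * filters)**2, axis=1) * dx
prob /= prob.sum()
centre = rng.choice(x, p=prob)

# Multiply the wavefunction by the chosen filter and renormalise.
psi_after = psi * np.exp(-(x - centre)**2 / (4 * sigma**2))
psi_after /= np.sqrt(np.sum(np.abs(psi_after)**2) * dx)
print(centre)    # close to -10 or +10: one packet survives, the other is heavily suppressed

A wavefunction already narrower than the filter is barely changed by this operation, while the spread-out superposition above is collapsed onto one of its two packets.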

The reason a Gaussian filter is used is that it has the same effect on both the location and momentum representations of the wavefunction simultaneously. Whether we apply it in the location basis or the momentum basis, the net result is the same, with certain states in the superposition suppressed and others enhanced in both the location and the momentum basis. So, for example, if we have a wavefunction in the location basis with one packet to the left and the other to the right of the origin, then transforming to the momentum basis will also give two distinct momentum wave-packets, one with positive momentum and the other with negative momentum. Apply the filter in either basis, and we suppress one of the wave-packets in that basis; but due to the magic of Gaussians and Fourier transforms, we also suppress the corresponding wave-packet in the other basis. But the filter has to be Gaussian for this whole idea to remain basis independent; nothing else will work.
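The mathematical fact being leaned on here is that the Fourier transform of a Gaussian is again a Gaussian (with inverse width). A quick numerical check of that property, with an arbitrary width chosen just for illustration:

import numpy as np

x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]
sigma = 2.0                                       # arbitrary position-space width
g = np.exp(-x**2 / (2 * sigma**2))                # Gaussian in the location basis

for k in (0.0, 0.5, 1.0, 2.0):
    numeric = abs(np.sum(g * np.exp(-1j * k * x)) * dx)                # numerical Fourier transform
    analytic = sigma * np.sqrt(2 * np.pi) * np.exp(-(k * sigma)**2 / 2)
    print(k, numeric, analytic)                   # the two columns agree: a Gaussian of width 1/sigma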

So there is a second free parameter of the theory: the width of the Gaussian filter. Make it too broad, and spontaneous collapse has no practical effect. Make it too narrow, and it has too much of an effect, and will change the underlying dynamics of the quantum system.

In other words, without any need to refer to measurement as part of the dynamical process, the spontaneous collapse theory both explains why superpositions endure for single particles for long enough to affect our experiments, and explains why we observe the Born rule and definite states in our experiments. It is, in short, consistent with the quantum mechanical experiments. There is an in-principle measurable difference, due to the spontaneous collapses, between a universe that obeys the GRW model and one which obeys a Copenhagen model. But that difference is so small that, in practice, there is little chance of us observing or measuring it.

So consider the Schroedinger's cat thought experiment. The radioactive sample is initially in a superposition between the decayed and not-decayed states. If it were isolated, then, as it consists of just a small number of particles (those that make up the radioactive atom), the chances of a spontaneous collapse are very small. The particle will remain in a superposition between the decayed and not-decayed states. But, in the branch where there is a decay, the poison is released and the cat dies. The radioactive atom is now entangled with the vial of poison and the cat. Very briefly, these will be in superposition, with the cat neither alive nor dead. But this will only last for an immeasurably short time. There will quickly be some spontaneous collapse and, since the system is entangled, one branch will be selected: if the wavefunction for the not-decayed state is heavily suppressed, the cat will be dead regardless of whether we have looked at it; if instead the decayed branch is suppressed, there is no entanglement with the cat, and the cat is alive. Thus for all practical purposes, the cat is either alive or dead and not in some weird superposition of the two.

Of course, the picture is not quite this clear-cut. Firstly, there will be that brief moment of time before the spontaneous collapse occurs. Secondly, the Gaussian filter only suppresses the other branch of the wavefunction; it does not eliminate it. The suppression will leave that branch of the wavefunction so small that, for the physicist, it will be lost in the usual measurement uncertainty and make no practical difference in any actual experiment (as physicists are too nice to really experiment with a live cat and a vial of poison). Technically, however, the cat would still remain in a superposition between alive and dead states, so it is not clear that the philosophical problem is truly answered. But, for all practical purposes -- which is what we are most interested in, surely -- the cat is either alive or dead.

Local beables

One problem with the spontaneous collapse theory regards the underlying ontology. The collapse is a change in the wavefunction of the system, while the wavefunction itself does not exist in physical space: it is just a mathematical function outlining possibilities, defined in a high dimensional configuration space. This configuration space can include indices for the spatial coordinates (dependent on the basis), but is not limited to that: there is also spin, and the phase associated with gauge symmetry. These parameters all influence each other. It makes no sense to ask for the amplitude or phase of the wavefunction at a particular point in ordinary space, as these refer to measurement results which are undefined in the wavefunction itself. The measurement device upon which the observations are recorded does exist in space, and has a clear and definite value. It measures some but not all of the underlying parameters of the quantum state. The question then becomes how we get from the vague quantum state to the clarity of the measurement outcome.

The term beable was coined by John Bell to denote what is fundamentally real in an interpretation. In the GRW interpretation, the quantum state is a beable. But it is not a local beable, as it is spread out over space. However, the results of our observations are always localised. There is thus, in this interpretation, a difference between the ontology of the quantum state and that of the more familiar macroscopic world.

The Gaussian filters responsible for the collapses, on the other hand, are localised. We could propose that these collapse events are the local beables of the theory: the mathematical counterparts, within the theory, of real events at definite places and times in the real world.

So, in this flash ontology, the local beables are the spontaneous collapse events. These are scattered quite sparsely through space and time. For example, Maudlin estimates that there are about 10 trillion electron flashes per second in the average human body, which would imply around 70 trillion overall once we include the quarks. This sounds like a lot, until we realise that there are about 40 trillion cells in the human body. If these flashes provide the local beables, then we might only be said to exist when and where a flash occurs. Only a very sparse outline of a human cell would actually exist in a defined state in any given second, with some cells almost certainly passing through several seconds without any local beables at all, just a collection of undefined quantum states. Human cells would be largely undefined. As soon as we observe a cell under a microscope, it becomes entangled with ourselves, and we get a much higher resolution. But this flash ontology certainly doesn't fit well with our natural intuition. I am also concerned about making events (rather than substances) the fundamentally existent things.
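As a back-of-envelope check on the order of magnitude of these figures (all the numbers below are my own rough round values: the standard GRW collapse rate and a crude count of the particles in a human body, not Maudlin's exact inputs):

# Very rough order-of-magnitude sketch of the flash rate in a human body.
collapse_rate = 1e-16            # assumed collapses per particle per second (standard GRW value)
n_particles = 1e29               # rough count of electrons and quarks in a ~70 kg body
n_cells = 4e13                   # roughly 40 trillion cells

flashes_per_second = collapse_rate * n_particles
print(flashes_per_second)                  # ~1e13: tens of trillions per second, as quoted above
print(flashes_per_second / n_cells)        # well under one flash per cell per second on these numbers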

The alternative is to try to construct a wave-like local beable. The wavefunction of a single particle is a complex function on physical space-time. This can serve as an underlying beable. At times it will be spread out over space, and it will then localise after a spontaneous collapse. Of course, this collapse takes place over configuration space. Some of this language is a little too vague, but maybe something can be built out of it.

The problem comes when we have a system of multiple particles. Here we have an entangled wavefunction. One cannot treat each particle in isolation; and because there are multiple particles, neither can one assign a single clear location to the system. The problem is how the structure of the wavefunction can be projected down from configuration space into physical space, so that a local beable can be assigned to that projection. The spontaneous collapses would not in this case function as local beables, since all they do is change the nature of the quantum states, which we are here proposing to be the actual beables of the system.

One alternative would be to regard the squared amplitude of the wavefunction as a matter density, and make this the underlying beable. Each particle's matter would always exist, and be distributed in space. In a double slit experiment, the electron's matter would literally pass through both slits.

The problem then comes in explaining why we observe a definite result for the location of the electron. The spontaneous collapses provide a Gaussian filter to the wavefunction, which strongly suppresses but does not completely eliminate the wavefunction away from the centre of the filter. If the wavefunction corresponds to a matter density, then that means that a tiny sliver of the electron will actually be a long distance from where we observe it to be. This is conceptually problematic, and there would be a non-zero probability that the next spontaneous collapse will be centred on that tiny sliver, causing the matter density to shift back towards an undetermined state. In the worst case, we could have the observed particle instantaneously teleporting itself to a distant location: an exceptionally unlikely event, but possible in the GRW interpretation (though not in the Copenhagen and other interpretations). Also, regarding the beable as a matter density which occasionally collapses into a single location is difficult to reconcile with special relativity. In any collapse, not just these extreme cases, part of the matter of the electron will instantaneously teleport itself to another location. But what is meant by "instantaneous" if there is no preferred reference frame?

However, if we accept this interpretation of an interpretation, then the underlying beable is the matter density. There are problems in asking where the electron is. But the problems lie not in the interpretation itself, but in the fact that the question is phrased in a way that assumes a different ontology. In practice, the electron is always smeared out over all space, rather than being at a particular location, so to ask "where it is" rests on various (in this view false) assumptions.

The next option is to ask whether we need local beables at all. Some have claimed that the macroscopic world we observe, with its localised physicality, emerges from the underlying quantum state. The problem then with the notion of local beables is that we are taking something which is necessary for the ontology of the macroscopic world, but incorrectly applying it to the microscopic or quantum world. "Emergence" in this technical sense is a description of how the properties of larger, composite objects can be very different from, and not obviously follow from, those of their microscopic constituents. So, for example, we have fluid water (not just the individual water molecules). Ultimately, this is made of various electrons, quarks, and gauge bosons, but the properties of water are not the same as the properties of a collection of those fundamental particles. The energy levels of the water molecule are not just those of the particles of which it consists superimposed on each other; and these are again modified when we consider a fluid body of water. The fundamental particles are absorbed into the water; one can no longer speak of them as having an independent existence. So which is more metaphysically fundamental: the fluid water, or the underlying particles? A case can be made either way (chiefly because the concept of "metaphysically fundamental" is badly defined).

I personally agree that emergence of this sort occurs, and that it has important philosophical consequences. There is a case to say that the electron ceases to exist (in the most direct meaning of the word "existence") after it is combined with a proton to form a hydrogen atom. One no longer has an isolated electron and an isolated proton. Nor is the hydrogen atom simply an electron and proton in close proximity to each other. It is its own thing. Obviously, in the mathematics, there is a relationship between the electron and proton and the hydrogen atom. But this is not a simple linear addition of the two; it is far more complex than that, and in particular it involves a change of basis, so that the Hamiltonian is no longer described in terms of electron and proton creation and annihilation operators, but in terms of those related to the bound states.

It is not clear (at least to me) how emergence of this sort can imply a change in the fundamental ontology: a move away from the need to have local, concrete things at every level of physics. It concerns discontinuous changes in properties, or in which substances we should concentrate on, but not a change in the notion of local substances itself.

Another form of emergence concerns the mathematical description of objects. A mathematical description can clearly represent a complex world containing macroscopic objects and their causal relations, even though the description is not itself the real macroscopic world. However, if we are to understand the real world functionally, as a web of causal interactions, then this web of causal relations exists just as well in the representation as in the real world. That is also true for any sentient observer in the description. In this sense, there is no real way to distinguish concrete reality from the mathematical representation, and we can say that reality emerges from the representation. In the same way, we might expect a macroscopic world to emerge from a purely abstract quantum state.

The obvious problem with this argument is that it assumes a purely functionalist ontology. I won't deny that functionalism (as defined here) has a role to play in our description of the world. Actual and possible causal relations between substances are important. But it is not the only thing that matters. There is also the concrete matter that makes us exist in reality. Only part of us is captured by the mathematical representation, and the part that isn't is what makes us different from a scientific model.

But does this solve the problem of jumping from a quantum state without local beables to a macroscopic ontology which relies on them? Clearly not, as both can be described by a mathematical representation. The transition we are interested in -- a difference in scale from the microscopic to macroscopic -- is different from the question of whether a mathematical model, if complex enough, can take on a reality of its own. So I don't think that appealing to emergence can answer the problem of local beables.

Both the matter density ontology and the flash ontology are, I think, internally consistent. But they are both weird, and contradict our common sense. The flash ontology states that the actual resolution of any matter is very low: outside entanglement with a larger system, a thing can only be said to exist when there is a rare flash. The matter density picture, on the other hand, implies that nothing is localised, and thus that it makes no sense to assign particles to a particular location. We cannot ask where something is and expect a single or simple answer. The issue with all interpretations of quantum physics is that there is some weirdness at some point in the picture they present. Something will defy our classical intuitions. The question is which of these bits of weirdness we are most willing to accept. Which of our classical axioms are we willing to let go?

Criticism

There are various criticisms of spontaneous collapse theories. I have discussed a few above, chiefly that they provide a very unintuitive ontology. There are other criticisms as well.

The first is that the theory does not precisely reproduce the results of the instrumentalist understanding of quantum physics. It does so to a very good approximation, but not exactly. It is thus possible in principle to rule out spontaneous collapse theories by experimental check. Such checks have been performed, and, as far as I am aware, none have come out in favour of the spontaneous collapse model. They have not yet ruled it out (just narrowed the ranges of possible rates of spontaneous collapse), but they are reducing the model's room for manoeuvre. It could have been confirmed by those experiments, and has not been, and that must count against it in comparison to other models which do reproduce the standard instrumentalist understanding.

Then there is the issue of motivation. The reason the spontaneous collapse model was proposed was to remove the reliance on dubious definitions of "measurement" or "observation" in some Copenhagen interpretations. But this is a problem that is already solved, through the greater understanding of decoherence that has developed since the spontaneous collapse models were proposed. Decoherence doesn't answer all the problems associated with "measurement" in quantum physics, but it does explain why the system jumps to a preferred and single basis. This removes interference effects and superpositions, meaning that a description in terms of amplitudes becomes mathematically equivalent to a description in terms of probabilities, and thus classical physics. A measurement is just one example of the sort of process which causes decoherence. The question of which state the system jumps to is not answered by decoherence, but neither is it answered in the spontaneous collapse models.
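A minimal sketch of what decoherence achieves at the level of the density matrix (a two-state toy example of my own, not a model of any particular system): the off-diagonal interference terms are driven towards zero, after which the diagonal entries can be read as ordinary probabilities.

import numpy as np

# A two-state superposition: amplitudes for, say, "left" and "right".
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())              # pure-state density matrix

def decohere(rho, damping):
    """Suppress the off-diagonal (interference) terms in the preferred basis,
    leaving the diagonal Born-rule probabilities untouched."""
    rho = rho.copy()
    rho[0, 1] *= np.exp(-damping)
    rho[1, 0] *= np.exp(-damping)
    return rho

print(np.round(rho, 3))                      # interference terms present
print(np.round(decohere(rho, 20.0), 3))      # effectively diagonal: a classical mixture (1/2, 1/2)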

The main measurement problem in quantum physics is why we need two very different processes to describe the physics: firstly, Schroedinger evolution of the wavefunction; and secondly, the Born rule, which both calculates probabilities of measurement outcomes from the wavefunction and describes a process which changes the wavefunction outside of Schroedinger evolution. The spontaneous collapse models still have two separate processes, with no obvious explanation of why we have these two processes, both of which affect the wavefunction. Why are there these flashes?
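To spell out the two processes in miniature (a toy two-state sketch of my own; the Hamiltonian and the evolution time are arbitrary): continuous, deterministic Schroedinger evolution, followed by a discontinuous Born-rule collapse applied by hand.

import numpy as np

# Process 1: deterministic Schroedinger evolution (hbar = 1).  For the toy
# Hamiltonian H = sigma_x, exp(-iHt) acts as cos(t) I - i sin(t) sigma_x.
t = 0.3                                          # arbitrary evolution time
psi = np.array([1.0, 0.0], dtype=complex)        # start in the first basis state
psi = np.cos(t) * psi - 1j * np.sin(t) * psi[::-1]

# Process 2: the Born rule.  Probabilities |<i|psi>|^2 for each outcome, then a
# discontinuous jump of the wavefunction onto the selected basis state.
rng = np.random.default_rng(0)
probs = np.abs(psi)**2
outcome = rng.choice(len(psi), p=probs)
psi_after = np.zeros_like(psi)
psi_after[outcome] = 1.0
print(probs, outcome, psi_after)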

Conclusion

The spontaneous collapse model does not seem particularly reasonable to me. It does not really solve any problems with the modern forms of the Copenhagen interpretation, and it creates new problems of its own. It does not provide answers to the fundamental problems of ontology or measurement (and the problem with regard to measurement that it does solve is answered in other ways which flow more naturally from the theory). Nor does it do away with the need for non-local jumps in the underlying physical object, in this case taken to be the quantum state. Like all these models, it has not been definitively ruled out (although, as it can be differentiated from the other interpretations by experimental test, it might be ruled out or ruled in when those experiments are performed). So one would hope that in this survey of quantum interpretations there are going to be better alternatives.

Acknowledgement

Finally, I should credit Tim Maudlin's book on the philosophy of quantum physics. In particular, I have adapted the sections on dynamics and local beables from his work, sometimes following his wording a little too closely, and sometimes rephrasing his points. Obviously any errors in that rephrasing are my own rather than his.





Reader Comments:

1. Michael Brazier
Posted at 17:38:09 Thursday May 25 2023

Penrose

The suggestion of Roger Penrose, that collapses happen when a superposition involves enough mass that gravitational effects become significant, belongs to this family of interpretations. But IMO it avoids one of the problems the flash ontology has: it does offer an explanation for wavefunction collapses, namely the need for an approximately consistent structure of spacetime (and hence a consistent gravitational field.) Of course the other objections remain. But it's better in some ways if a theory of physics can be checked by experiments, isn't it?

Also, I'm not sure that an interpretation not explaining what state a quantum system jumps to (as opposed to what alternatives it has, the system's basis) is a real issue. It would be if we needed physics to be strictly deterministic in principle, but as far as I know the only need for determinism comes from metaphysical naturalism - a philosophical or theological commitment that has nothing to do with any empirical observations.

3. Will Worrock
Posted at 00:26:11 Sunday June 4 2023

Questions as to what a layman can read

Dr. Cundy, I was wondering what would be good for a layman to read as an introduction to quantum mechanics. Do you have any recommendations?

P.S. Sorry for sending this if you have gotten my previous messages, I just haven’t gotten any notification that they were sent in.

4. Nigel Cundy
Posted at 17:43:45 Sunday June 4 2023

Quantum Physics for laymen

This is a good question, and I'm not sure that I have a good answer. It depends, in part, on how good your mathematics is. If you can handle calculus, geometry, and probability, to A-level or IB level, then I would probably recommend just jumping into some of the textbooks. I am a particular fan of the approach taken in Binney and Skinner; Townsend or Griffiths, although they might not be to everyone's taste.

For a non-mathematical approach, I think you can do worse than Feynman's QED. I also read his lectures before going to university, and they are a good summary of physics, and were at least understandable for me. Volume 3 covers quantum physics. Philosophy of Physics books also tend to cover the basics -- see the introduction by Maudlin I refer to in this post, and there are others.

But otherwise, I can't really advise too much, as I have never really studied quantum physics as a layman. I have just learnt it from the textbooks and research papers (and, of course, my tutors, lecturers, and colleagues). I could google to see what others recommend, but you are just as capable of doing that as me.

5. Will Worrock
Posted at 20:43:10 Sunday June 4 2023

Thanks

Ok, thank you Dr. Cundy.

6. Matthew
Posted at 04:24:19 Tuesday June 13 2023



What basis does the system collapse to?

It might be worth noting the reason that GRW theory has the wavefunction collapsing in the position basis: namely, we can get a reasonable image of the 3D world from the position data. The collapses aren't at all trying to do the same thing in the momentum basis as they are in the position basis, so your statement about why a Gaussian filter is chosen does not sound correct to me. But I might be missing something.

E.g., a superposition of a stationary particle between locations on the right and the left might have a wavefunction f(x+a) + f(x-a) where f(x) is a very narrow Gaussian, and after a collapse near x=a it just looks like f(x-a); the corresponding functions in the momentum basis are something like cos(ap)F(p) before the collapse and exp(-iap)F(p) after the collapse, where F(p) is a very broad Gaussian. More generally, I believe the momentum space representation of the collapse filter should be convolution by the Fourier transform of the Gaussian centered at x=a, i.e., convolution by the product of exp(-iap) with a Gaussian centered at p=0. Maybe this turns out to be equivalent to multiplication by a Gaussian? But that isn't the point of the collapse, pardon the pun, which is to localize the wavefunction around a certain position.

Criticisms of collapse models

I agree that the flash ontology is extremely weird. The matter density ontology is much less weird, though Maudlin points out that it is a bit strange in not affecting the dynamics at all, since it is just derived from the quantum state. (Contrast with pilot-wave theory: the particle positions don't affect the evolution of the quantum state, but they do influence their own evolution, through the guidance equation. So they play a role in the dynamics and not merely the ontology.) The emergence model with only a quantum state and no local beables does not seem tenable to me; I think Maudlin's argument against it (that mere mathematical isomorphism just isn't enough for existence) is solid.

An interesting criticism of spontaneous collapse theories that you did not touch on is related to the brief period of time after decoherence and before a collapse occurs; it afflicts the emergence and matter density versions. The problem is that if the decoherent state persists for long enough (say, because the collapse rate is low) then the theory briefly takes on a many-worlds character, before all but one of the branches are annihilated by a collapse. In this case, our continued survival against random annihilation actually becomes empirical evidence of a sort against the theory. Charles Sebens makes this argument in "Killer Collapse: Empirically Probing the Philosophically Unsatisfactory Region of GRW"; he suggests that GRW might already be ruled out (squeezed out between the lack of evidence for spontaneous collapses that would come from high collapse rates, and the brief many-worlds character from low collapse rates), but more detailed calculations of the timescales of decoherence vs. collapse would have to be made to make this case rigorous.

But I must disagree with you about collapse models being really no better than modern Copenhagen interpretations with decoherence. Decoherence only resolves what you might call the "practical" measurement problem: it tells you when you are safe to apply the Born rule to find the probability for the system to "actually" be in one of the various states. But it doesn't resolve the fact that it is inconsistent (at least with classical probability) to treat the wavefunction as epistemic in this way prior to decoherence, when interference effects (at least seem to) necessitate treating the wavefunction in that regime as ontic. It helps roughly to locate the "shifty split", but doesn't help the fact that there *is* a shifty split in the first place. Alternatively, if you try to do away with the shifty split, decoherence is no help in removing "Schrodinger's cat" states; the fact that the "alive" and "dead" branches of the wavefunction no longer interfere doesn't mean that one of them disappears. And Copenhagen interpretations provide no solution to the ontology problem at all: it remains rather vague and unclear what the theory is supposed to be about, how this vector in a high-dimensional Hilbert space is supposed to relate to the physical world we observe.

Collapse theories, despite their shortcomings, do solve these issues. There is (provided local beables are specified) a clear ontology and precise dynamics. There are two processes in the dynamics, to be sure - one of them continuous and deterministic, the other discrete and stochastic - but they both apply all the time, and (if you like) they can even be written together in one equation, with the density matrix formulation.

There is still the question of why there are these two processes, and there is still the question of what the wavefunction actually is, so it is true that collapse theories do not improve over Copenhagen on those questions. (I might add that pilot-wave theory seems more satisfactory on both counts.) But that doesn't negate the fact that collapse theories have genuine solutions to the measurement and ontology problems, while, from my perspective, Copenhagen does not. I would also agree with Michael that the intrinsic indeterminism of collapse theories should not be counted as a mark against them; I would not object to a clearly-formulated decoherence-based collapse on the grounds that it was indeterministic either.

Cheers!

7. Nigel Cundy
Posted at 18:35:19 Tuesday June 13 2023



Once again, thanks Matthew for your comments.

Sorry if I gave the impression that I think that indeterminism by itself should be a mark against an interpretation. I certainly don't believe that. My objection is that there are two independent processes affecting the wavefunction, one deterministic (wavefunction evolution) and the other indeterminate (collapse into a particular state), with no obvious way to unify them. I have no objections to an interpretation that has a single indeterminate dynamics of the beables, nor an objection to a model where these two processes each emerge from the same underlying principle, but I see a model where you have two conflicting and independent processes each modifying an ontic wavefunction as just adding complexity and confusion. (So this "objection" would not apply to the Pilot Wave interpretation, for example.) Of course, this is partly prejudice on my part -- I prefer an interpretation where the loose ends are nicely tied up.


