The Quantum Thomist

Musings about quantum physics, classical philosophy, and the connection between the two.

The Philosophy of Quantum Physics 8: Theistic epistemological quantum physics
Last modified on Mon Dec 23 10:25:00 2024


Introduction

I am having a look at different philosophical interpretations of quantum physics. This is the eighth post in the series. The first post gave a general introduction to quantum wave mechanics, and presented the Copenhagen interpretations. I have subsequently discussed the spontaneous collapse, Everett, Pilot Wave, consistent histories, and quantum Bayesian interpretations. In the previous post, I looked at hylomorphism as a way of understanding the structure of quantum materials, and the travelling forms interpretation.

In this post, I intend to look at the problem in a different way. It is different because I will make a key assumption that other interpretations either implicitly or directly reject, or at the very least ignore. That assumption is that theism is true. I will then ask what are the implications of that assumption for the physical world. I will have to make a few other assumptions along the way, but the result will be something similar to quantum physics.

First of all, what do I mean by theism? Theism obviously includes belief in God, but there are other philosophies which also incorporate a deity. For example, there is deism, which in its most extreme case has a God who creates the universe and then sits back and lets it unfold by itself with no further interference. At the other extreme, there is occasionalism, which states that everything is solely caused by the unpredictable act of God, and that material objects cannot themselves act as causes. Theism sits in the middle. God is actively involved in upholding and sustaining the universe. Every event in the universe has God among its immediate and direct causes. But the event has more than one cause, and the material objects also act as causes of a different sort. Physics, in this world view, concentrates on the study of the material causes for a particular event. This is an important part of the picture, but it is not the whole picture.

In an Aristotelian picture (and the most rigorous forms of theism are, at least in my view, built on an Aristotelian metaphysics), there are several types of causality. In the previous paragraph I discussed causes of events. One can also think of causes as relationships between different material beings, or different states of matter (to get ahead of myself, different quantum eigenstates), what I call substance causality. The states of matter correspond to Aristotelian potentia. A substance can exist in many different potentia. At any given moment of time, if the substance exists at that time, one of the potentia will be actual, while the others are potential. For example, when I am standing (in actuality), I am potentially sitting. When I am sitting, I am potentially standing. This is a way of describing how something can both change and yet remain the same being.

Substance causality is divided into efficient causality and final causality. Efficient causality looks at the past history of a state of matter. What states of matter in the past existed in order to bring about this particular state we are studying? Final causality looks towards the future. What possible states could arise from the particular state we are studying? If we assume the arrow of time (i.e. there is an objective flow of time from past to future), then only one set of physical states will correspond to the efficient cause. However, there can in principle be multiple sets of states corresponding to the final cause. Only one of these will be realised in practice, when we look into the future, but all of them are in principle possible when considering only the present state of the universe. This framework of both efficient and final causality is thus consistent with both a deterministic physics (where for every possible state of matter there is only one final cause) and an indeterministic physics (where there are some situations where there are many final causes, and we cannot predict based only on knowledge of the present and past which of them will be realised).

So substance causality asks from what beings this being emerged (efficient causality), or what beings could arise from this particular configuration of beings (final causality). So, for example, when an electron decays into a W- gauge Boson and a neutrino, the electron is the efficient cause of the W- Boson, and the W- gauge Boson and neutrino are together a possible final cause of the electron. There are other final causes of the electron, as there are other changes that can result from it (for example, it might emit a photon).

This can also be expressed in terms of active and passive powers. An active power is the capacity to induce change in another being. A passive power is the capacity for a being to be changed itself (i.e. to move from one potentia to another). Each of the final causes will describe a set of states, some of which will be related to an active power, and others to a passive power.

The sort of Theism I am assuming also supposes that God has various key attributes. God is unchanging in time and omnipresent in space (indeed, exists outside of space and time). Consequently, all other things, such as the precise configuration of matter, being equal, God will have the same probability of acting in the same way at any place or time in the material universe. God is immaterial. God is omnipotent (able to do anything logically possible and consistent with His nature). God has a free will (which I define by the properties of being unpredictable, and not in any way determined by anything outside God). God has an intellect. God has the ability to create out of nothing or annihilate into nothing. I also assume in the latter stages of the argument that God has the desire to create a universe which has rational living beings in it. I don't think these assumptions about God's nature are particularly controversial, as they follow from various classical arguments for God (e.g. as outlined in part 1 of the Summa Theologica).

I will also assume that the universe can be understood, and represented mathematically. I am not sure that this is much of a step beyond my assumption of theism, since God's intellect, and the fact that He directs everything in the universe, imply that the universe can be understood by a divine intellect. The assumption then merely concerns whether the more limited human intellect is also capable of at least some degree of understanding.

I will need a few additional assumptions which I can only justify through observation. For example, I will suppose that a material universe exists. I will suppose that it has three spatial and one temporal dimensions, which can be represented in terms of a Riemannian geometry with a Minkowski signature. I will suppose that all physical beings exist either at a point, along a line segment, or within a region in space-time. There are some regions in space where the being is, and other regions where it is not. Any representation of such beings would be parametrised in part with reference to their location in space and time. Immaterial beings (such as God) do not exist in space-time, and thus cannot be represented in terms of a space-time location.

So above I stated that theism implies that each event has multiple causes, some divine and some material. I also stated that physics is the study of material causes. This implies that physics offers an incomplete description of physical events. There are several ways in which this might manifest itself. The first could be to say that in addition to matter, there are also laws of physics which direct how that matter could change over time. Those laws of physics would then be a description of God's sustaining activity in the universe. This possibility could apply in either a deterministic or indeterministic physics. In a deterministic physics, the laws might explain the interactions in the universe, but we still need to explain the laws. Why those laws rather than some other, and why are they obeyed? A full first principles philosophy of physics should be able to answer these questions, and clearly the answer goes beyond a mere restatement of the laws.

However, in an indeterministic physics, there is another way in which we can distinguish between material and divine causes. In an indeterministic physics, a being in any particular state of matter has numerous possible final causes. There is no way from knowledge of the physical universe to predict which of those final causes would be realised. So what I propose is that God's role is to select which out of the set of possible final causes will actually be realised. Since God's will is free, we would have no means of predicting which option He will choose, giving the appearance that the events are random (i.e. not predictable from only knowledge of the material causes). This means of separating divine and material causes can sit instead of or alongside the idea that the laws of physics (which in this case would only lead to probabilities rather than certainties for particular outcomes) are a description of God's sustaining of the universe.

Is it a problem that I am explicitly assuming theism? I don't see why it would be any more of a problem than explicitly or implicitly assuming that theism is false, which is done by many philosophers of physics, who try to explain everything in terms of only material causes. These assumptions are only for the sake of the argument. I make some assumptions, deduce the consequences of those assumptions, and compare against what we observe. If it matches the observations, then that doesn't show that the assumptions are correct, but leaves them as a viable option. If it doesn't match the observations, then at least one of the assumptions must be in error. This is just the scientific method. The approach I am taking is somewhat different from other ways to uncover an alternative interpretation of quantum physics, which often start with the Schroedinger equation, cast it into a particular form, and then interpret the various elements of that form in certain ways. But I cannot see why such interpretations could not do as I have done, by setting out a list of assumptions, drawing conclusions from those assumptions, and comparing against observation. We would then have multiple interpretations all pointing to the same conclusion. We would then have to judge which interpretation contains the fewest ad-hoc assumptions.

Is this a God of the gaps approach? Saying we don't know why there is quantum indeterminacy, therefore God is behind it? I would disagree. Firstly, I am taking the approach of assuming that God is behind it, deducing quantum indeterminacy from that assumption, and comparing against what we observe. Secondly, if you did switch things around and convert this into an inductive argument, the approach would be to say that whatever lies behind quantum indeterminacy must have particular attributes. We can then compare those attributes against what is expected from theism, deism, atheism, and so on. The conclusion would be that there is a being with these particular attributes, and since God is taken to be a being with those same attributes, depending on how many of the divine attributes are implied by the physics, I don't think it unreasonable to equate the two. A God of the gaps approach, on the other hand, leaves open the possibility of an unknown physical cause. In other words, while we might be able to say something about the attributes the thing that fills the gap must have, what we can say is not sufficient to rule out material causes. It would thus be fallacious to conclude that the gap proves, or offers any indication for, a divine cause. There is obviously more to be said concerning this, but better to leave that discussion until later. A God of the gaps approach invokes God to explain that which is unknown. I invoke God, and from that attempt to deduce what is already known through other means. I just want to say that my approach does not necessarily fall into the God of the gaps fallacy.

Scope and Notation

The scope of this work is just to focus on one question. Given an initial state, what is the probability that the physical system would find itself in a given final state after a specified interval of time? Quantum physics allows us to calculate an answer to this question. This is not the only question answered by quantum physics, but it is an important one, and one which affects many, but not all, of the important philosophical questions. Other important questions include how compound substances emerge from simpler substances, and why particular forms of matter have the properties they do. With regards to the emergence of compound substances, in terms of the physics the answer is provided through effective field theory, and philosophically this pretty much forces us into some conception of hylomorphism. I reviewed Professor Koons's approach to quantum hylomorphism in the previous post, and while I have some differences in how I would express it, my conclusions are broadly similar. The question of physical properties is also well understood in the mathematical construction of quantum physics. This does not rely on any philosophical theory, but has implications for philosophy. I think, however, the answer falls out easily once you accept hylomorphism, and when we turn to this question it doesn't add anything to the philosophy beyond what is derived from the question of dynamics and compound substances.

So, onto the question of dynamics.

The first thing we need is a representation of various fundamental particles in the physical world, and also a means to describe physical processes in the representation. This representation is not the same as reality. It will not capture every detail of reality. But it will allow us to capture some of reality, and enough to answer the question of dynamics. The process involves a bijective map. We extract various physical attributes from observation. We then map from those attributes to our representation. We perform our calculations to update the representation over time, which simulates how the physical system updates over time. We then map back from the representation to reality, which allows us to make predictions for future observations. These predictions are then tested, and if the theory we use to make the calculations is correct, and we have a large enough sample of systems with the same initial states, then the predictions will match reality, at least within the theoretical and experimental precision.

Experimental imprecision arises from various different sources, such as the limited resolution of measuring devices, and limited statistics. Theoretical imprecision arises from various sources, such as not perfectly knowing the values of certain parameters, or controlled approximations used in the calculation. These sources of imprecision are all known, their extent can be estimated, and carried through the calculation. Imprecision does not seriously affect the process above; it just makes the question of whether the final test agrees with the theoretical calculation a bit more ambiguous. One of the most important goals of both experimental and theoretical research is to reduce the imprecision of the measurements and the calculations.

I mentioned probability above. What do I mean by this? I define probability as follows.

  1. A probability is a number assigned to each of a set of possible physical states. These physical states must be orthogonal (i.e. don't overlap), and complete (i.e. the final outcome must be in one of the states). If the states are parametrised by a discrete variable, they should also be either irreducible (i.e. can't be reduced into smaller substates) or can be expressed as the set of various irreducible states. If the states are parametrised by a continuous variable, one usually bins the data into various small intervals, and then, ideally, takes the limit as the interval size tends to zero.
  2. Each probability lies between zero and one.
  3. The sum of all probabilities is equal to one.
  4. The probability of the outcome being in state a or state b when there is no overlap between a and b is the probability of it being in state a plus the probability for it being in state b.

These four properties are one of the two standard and equivalent ways to define probability mathematically. There is nothing new or unique to me in what I am writing here, except perhaps for some of my notational conventions. I will write the probability that the outcome is in state a as P(a). (In practice, the expression P(a) is meaningless, but I will discuss that a bit later.) The set of all probabilities across all the outcomes is known as a probability distribution. One can also consider the probability of multiple outcomes of the system, i.e. you might have one measurement whose possible outcomes are parametrised by the set of states A and another measurement whose possible outcomes are parametrised by the set of states B. In this case we might be interested in the correlations between an outcome a in A and an outcome b in B.
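
To make these rules concrete, here is a minimal sketch in Python (the outcome labels and numbers are made up purely for illustration) that checks whether a proposed assignment of numbers satisfies them:

    # Check that numbers assigned to a set of mutually exclusive, exhaustive
    # outcomes satisfy the rules above.
    def is_probability_distribution(probs, tol=1e-9):
        # Rule 2: each number lies between zero and one.
        if any(p < -tol or p > 1 + tol for p in probs.values()):
            return False
        # Rule 3: the numbers sum to one.
        return abs(sum(probs.values()) - 1.0) < tol

    def prob_of_either(probs, states):
        # Rule 4: for non-overlapping states, the probabilities simply add.
        return sum(probs[s] for s in states)

    # A hypothetical measurement with three possible outcomes.
    dist = {"a": 0.5, "b": 0.3, "c": 0.2}
    print(is_probability_distribution(dist))   # True
    print(prob_of_either(dist, ["a", "b"]))    # 0.8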

So what is a probability? Well, ultimately it is just a defined set of numbers which follow various rules when we try to manipulate them. Like many abstract constructs it exists only in our heads and in our notebooks. It does not exist in the same sense that an individual being exists. Nor does it exist as a property of an individual being. You cannot say that a physical system or event has an inherent probability.

So why is the concept of probability useful? Because a normalised frequency distribution follows the same rules. This means that a probability distribution can be used to predict a frequency distribution. Probability is not the same as frequency. A probability is something you calculate. A frequency is something you measure. Not every probability distribution can be used to predict a frequency. For example, the set of numbers {0.112,0.223,0.331,0.334} together form a valid probability distribution. What do they represent? Nothing, because I just made them up. So while a probability can be used to represent or predict a frequency distribution, it will only do so if it is constructed in a particular way that reflects the physical system we are attempting to understand.

So every useful probability is constructed from various premises that in some way reflect the physical system. If we are trying to predict a physical outcome given an initial state (which is what I am trying to do in this post) these premises can be constructed from our knowledge of the set of possible physical states, the initial state of the system, various rules which describe how the system evolves in time, and the various sources of imprecision involved, and perhaps some others. So we cannot ask what is the probability of an outcome a. The expression P(a) is meaningless, because we haven't expressed the assumptions used in calculating it. For a physical system, it will at the very least depend on the initial state. We can only ask what is the probability of an outcome a given an initial state b and various other assumptions X concerning how the system evolves in time. This is expressed as P(a|bX). Obviously, this will have to be expanded to take into account imprecision in the measurement of a and b, and the various other sources of imprecision I alluded to above. But to keep this simple I will neglect those complications, as they don't really affect the main points I want to make.

So with this understanding, it is clear that probability is an extension of logic that allows us to make useful statements concerning situations where the outcome is not certain. The outcome might be uncertain because there is some unknown cause affecting the system, whose effects we can only estimate, or some more inherent indeterminism. A probability is thus an expression of our knowledge, based on the various known causes and also a model for how to represent the unknown causes. Ideally that model should be constrained by some symmetry principle. For example, when we model a balanced 6 sided dice, we assign a probability of 1/6 to each outcome to reflect the underlying symmetries of the dice. That is, of course, based on the assumption that the dice is fair and there is no bias induced by how it is thrown.

It does not make sense to ask "what is the frequency distribution for a single event?" A frequency distribution refers to what we observe for a large number of events. Indeed, the mathematics linking probability to frequency assumes that this number is infinite. Obviously we can't create an infinite sample of observations; just a very large one, and this adds to the imprecision of the measurement of the frequency distribution. Again, this is a complication I'll neglect, as correcting for it doesn't really affect the conclusions I want to draw, and I will just suppose we are comparing our probability against an infinite sample. Just as we cannot ask what is the frequency of a single event, neither can we state the probability of a single event. The probability cannot be applied to single particles. It is just a predictor for a frequency for a sample of events generated by repeating the same experiment an infinite number of times.
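
The distinction between a calculated probability and a measured frequency can be seen in a short simulation (a sketch only, assuming a fair six-sided dice, with the 1/6 assignment coming from the symmetry argument above): the probability is fixed in advance, while the frequency fluctuates around it and only settles down as the sample grows.

    import random

    random.seed(1)
    prob = {face: 1 / 6 for face in range(1, 7)}   # assigned from the symmetry of the dice

    for n in (60, 6000, 600000):
        rolls = [random.randint(1, 6) for _ in range(n)]
        freq = {face: rolls.count(face) / n for face in prob}
        # The frequency is measured; it fluctuates around the calculated probability,
        # and the agreement improves as the (necessarily finite) sample grows.
        worst = max(abs(freq[f] - prob[f]) for f in prob)
        print(n, round(worst, 4))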

There is an alternative and mathematically equivalent interpretation of probability, which is its use in decision theory. This is when there is a certain quantifiable risk/reward associated with each outcome which arises from a certain course of action. We multiply the reward (a risk is treated as a negative reward) for each outcome by the probability that the outcome arises, and add these products up to give a total expected reward for the action. We then select the action which has the highest expected reward. This understanding of probability is key to QBism, but I won't make use of it here. In this interpretation, probability is still an expression of our uncertainty constructed from various assumptions, but the way it is handled is different from what I need here. I do not take bets on physical outcomes, but try to predict an outcome expressed in terms of a frequency distribution. I find the subjectivity of this interpretation troubling.

But the logical interpretation is objective. The calculation of the probability ultimately only depends on its assumptions. The calculation is the same regardless of whether or not those assumptions are believed, or even whether or not they are true. There is no dependence on what an individual believes or knows, so the calculation of the probability itself is not subjective. Of course, to be useful, we would want to confirm whether or not the assumptions are correct by comparing (directly or indirectly) against experimental observations. But here we are extracting information from objective reality, in order to feed it into an objective calculation of the probability. Thus there is no subjectivity involved so far. Finally, we assume that the unknown causes are best constrained by certain symmetries and not other symmetries. This assumption is not based directly on any individual observation. But the calculated probability distribution will only correspond to the real-world frequency distribution if those assumptions are correct. So we can make predictions for the frequency distribution, test those predictions, and through a process of falsification home in on the symmetries that constrain the unknown causes which correspond to what is a true representation of the objective reality. So the logical interpretation is epistemic (it parametrises our predictions for the eventual frequency distribution, rather than corresponding to some mind-independent property of material particles), but it is also objective (in the sense that the calculation of the probability is the same for all observers, and is not dependent on the individual beliefs or knowledge of those observers). One cannot have two people with contradictory calculations of the probability. If the calculated probabilities are different, they would have been calculated from different premises and are thus not directly comparable.

If the probability is the predictor of a frequency distribution, and that is all we can calculate, then we cannot know anything about the motion of an individual particle, unless the probability is either 1 or 0. The statement that there is a certain probability that the particle will pass through this slit is meaningless. It is only meaningful if we discuss an ensemble of particles, when it means a certain fraction of them will go through this slit, and another fraction will go through a different slit, in proportion to the probabilities. When it comes to individual particles, we just have to say "we don't know, and we can't know unless we directly measure it."

Therefore the biggest error made by most interpretations of quantum physics is to make the wavefunction an expression of ontology. To treat it as a representation of the physical electron itself, rather than as an expression of our knowledge of the electron. The wavefunction can be used to compute a probability, and the probability is epistemic. Therefore the wavefunction also has to be epistemic. I think, of the psi-ontic interpretations, only the pilot wave interpretation avoids this error, because in that interpretation the unpredictability of single events arises from our uncertainty concerning the hidden variables (i.e. we do not know all the causes).

Locality

So why do uncertainty and indeterminism arise in quantum physics? It can only be because there is an unknown cause which does not arise from those things we can measure. The cause would also have to be unpredictable, in the sense that individual events from the same initial configuration could have different results, and yet also regular in the sense that one can still compute frequency distributions which remain constant no matter when or where the experiment is performed. This cause (or causes) must be able to influence every event in the universe, with the same probabilities no matter where or when the event occurs. It is not physical (otherwise we would be able to measure it and it would not, from the perspective of the physicist, be unknown), nor can it be controlled by physical substances. So this points to an omnipresent, eternal, immaterial, free will, which lies behind every event in the universe. Which event happens depends both on this (unknown) cause and the (known) configuration of matter. The unknown cause thus matches at least some of the attributes of the theistic God. As stated, I do not regard this interpretation as a God of the gaps model. If we proceed using an inductive approach (i.e. in the opposite direction to my main presentation), we would deduce the attributes that the unknown cause must have, and find that they match the attributes of the theistic God. I am deducing that only God, or a being exactly like Him, has the attributes required to explain the physics.

Efficient and final causes can be explained entirely in terms of physical beings (except possibly for an initial act of creation at the start of the universe, or a miracle, but I will not discuss those here). But what we cannot do is explain (in terms of the beings both observed and represented in physics) why the event of this particular decay occurred or why it occurred at that time. To explain this would be to invoke an event cause. An event cause is only partially explicable through matter, and the rest of it is explained by the free choices of God.

Because efficient and final causes only involve material beings, which exist at particular locations in space and time, they are constrained by the rules determined by the Minkowskian geometry of space and time. From the arrow of time, we know that events cannot be influenced by future arrangements of matter; only by past or present arrangements of matter. Now, if we had action at a distance for efficient or final causes, this would mean that matter at one point in the universe at a location x and time t could influence how a particle could decay at location y and time t. However, these coordinates depend on the reference frame, and unless x=y, we can always perform a transformation so that the coordinates of the influencing matter become x' and t', with t' later than the transformed time of the decay. This would imply that we have future causation, which by construction is impossible. Thus at least one of the following options must be true:

  1. There is a fixed reference frame, so the universe only appears to be Lorentz invariant, but is not so in practice.
  2. There is future causation, so the configuration of matter in the future can influence the final or efficient causes in the present.
  3. Efficient and final causality, i.e. the creation and annihilation of particles, or other movements of a particle, can only happen at specific points in space.

Like (almost?) all physicists, I strongly support option 3, the principle of locality. Option 1 is rejected because it does not explain why the Lagrangians of both general relativity and the standard model of particle physics satisfy Lorentz symmetry. If there were a fixed reference frame in the universe, and a universal notion of time, then the most natural fit would be that the laws of physics should satisfy Galilean relativity (and an E3⊗E1 Euclidean geometry and associated symmetries) rather than Einstein's relativity (with its SO(3,1) geometry and associated symmetries). That's because to propose a fixed space and objective and independent time (as implied by an objective fixed reference frame) is simply a different way of expressing an E3⊗E1 Euclidean geometry. Future causation can also be ruled out, because if it occurred we would expect to observe it, and we do not. And, of course, both legs of our best theories, general relativity and the standard model of particle physics, are described by Lagrangians which satisfy locality, so it is very probable that their union would also be local. So we are forced to accept option 3 anyway, and as such there is no need to also suppose option 1 or option 2.

However, this only applies to those types of causality which depend solely on matter, i.e. can be parametrised in terms of a location in space and moment in time. That's not the case for event causality, which (in the philosophy I am describing here) depends in part on an immaterial, i.e. outside space and time, cause. Thus we could in principle observe correlations between events which are spacelike separated. So suppose, for example, that we have two spin 1/2 Fermions and measure their spins along one particular axis. We know that the measurement of the spin of the first particle has to be either +1/2 or -1/2, but cannot predict which it would be because that arises from the free choice of God. Let us say for the sake of argument that there is a 50% chance that it is spin +1/2 and a 50% chance that it is spin -1/2; the 50% meaning that if we repeat the experiment a large number of times half the experimental runs will return spin +1/2 and the other half spin -1/2. There is no violation of locality when the particle collapses into one of those states; i.e. before the measurement the particle had an indeterminate spin along that axis, and after the measurement it had either the +1/2 or -1/2 spin, but there is no sudden jump in the particle's location when that happens. The same is true for the second particle; the result will be either +1/2 or -1/2. But God is not constrained by geometry, so it is possible that He would choose, in certain circumstances (and in the absence of a miracle), that the two results should be correlated. If they are correlated, then if the result of the first experiment is +1/2, then the probability of the second experiment returning +1/2 would not necessarily be 50%, but could be something else, depending on the precise orientation of the measuring equipment. (There would be the complementary probabilities if the first experiment returned -1/2.) Recall that we can only measure the frequencies (for comparison against the calculated probabilities) after we perform the experiment a large number of times. Unless the detectors are perfectly correlated or anti-correlated, the individual events are each unpredictable. Even if we know one of them, we cannot predict with certainty the result of the other. There does not appear to be deterministic action at a distance between the two particles. It is only when we repeat the experiment a large number of times that we see correlation in the frequency distributions. This is easily explicable in the theistic interpretation. God can freely choose any outcome, but there is no requirement that He should select each of the various combinations of outcomes with the same frequency.
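
A toy simulation can illustrate the point (a sketch only; the correlation rule sin^2(θ/2), for detectors at a relative angle θ, is the standard quantum prediction for a singlet state, taken here as an assumption rather than derived): each individual result looks like a fair coin toss, and the correlation only shows up in the frequencies accumulated over many runs.

    import math, random

    random.seed(2)

    def run_pair(theta):
        # Each detector on its own returns +1/2 or -1/2 half of the time;
        # which one is not predictable from the material configuration alone.
        first = random.choice([+0.5, -0.5])
        # The joint outcome is correlated: the second result agrees with the
        # first with probability sin^2(theta/2) (the assumed singlet-state rule).
        second = first if random.random() < math.sin(theta / 2) ** 2 else -first
        return first, second

    theta = math.pi / 3                     # relative orientation of the detectors
    results = [run_pair(theta) for _ in range(200000)]
    first_up = sum(1 for a, _ in results if a > 0) / len(results)
    same = sum(1 for a, b in results if a == b) / len(results)
    print(round(first_up, 3))               # ~0.5: single events look random
    print(round(same, 3), round(math.sin(theta / 2) ** 2, 3))   # the correlation appears only in the frequencies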

Efficient and final causes are local. Event causes can show non-local correlations in frequency distributions measured after the experiment has been repeated a large number of times.

Creation and annihilation of matter.

God can create the universe out of nothing. The universe consists of many parts, which ultimately reduce to various fundamental particles (we can discuss how these can combine into compound substances at a later stage of the argument). To create the universe means creating these particles. Thus we need an object that represents the vacuum, which contains no particles, |0>. This represents the nothing that exists in the absence of creation. (We do not observe nothing in practice, so this state does not correspond to anything in the physical world, but it is a helpful starting point as an abstract ideal when constructing the representation of the physical world.) We also need an operator that represents the creation of a particular type of particle, a†s. s here indicates the various parameters that describe the state of the particle, such as its location, spin and so on. There would also be b†s and so on to represent the creation of different types of particles. The opposite of this creation operator would be an annihilation operator, which I will denote as (i.e. without the dagger). Thus we can represent any given state of matter by applying the appropriate creation operators to the vacuum state. So, for example, we might write a state |A> to represent a†s1 a†s2 b†s3 …|0>. Any change of state can be represented through the product of creation and annihilation operators. We annihilate the current state, and create the new state.

The question we want to ask is given this initial state, what is the probability that we would finish in a given final state. There are two different formulations of quantum physics I could use here, which are mathematically equivalent but use different means to represent the uncertainty. The first would be to use density matrices, which just use probabilities throughout the calculation. This is the approach used in QBism, and I could reformulate the below to use the same method. However, I am more familiar with working with amplitudes rather than the density matrix, so that is what I will do.

An amplitude is a complex number (or collection of complex numbers) whose modulus square (or inner product) satisfies the rules that govern probabilities. So we have a set of possible outcomes, which we suppose to be orthogonal to each other, complete, and at the most fundamental layer irreducible. We assign these numbers to each possible state, or configuration of matter. We can express our knowledge of a physical system that could be in one of two states |A> or |B> as the wavestate

|ψ> = α |A> + β |B>

where |α|^2 represents the probability that the system is in state |A>, and |β|^2 represents the probability that the system is in state |B>. These probabilities are conditional on the initial state of the system and whatever assumptions we have concerning how the system evolves in time. Obviously this can be expanded to a system with any number of physical states.

Each of these states can be constructed from the vacuum state and an appropriate combination of creation operators.

We need a means to extract the amplitudes from a wavestate. After all, the question we want to answer is what is the probability that the system ends up in a particular state. To do this, we introduce the object <0|, which asks whether the system is in the vacuum state. We define this so that <0|0> = 1 returns the amplitude that a system in the vacuum state is in the vacuum state. We know that creating a particle and then annihilating it leaves you in the state you started from, so as a†s must be proportional to the identity operator, with the constant of proportionality a complex number of modulus 1 so that the probability of being in a vacuum state after you take a vacuum state, create a particle and then annihilate it, |<0|as a†s|0>|^2, is 1.

If, for example, we define |A> as the state a†s|0>, then we know that

|<0|as|A>|^2 = 1.

This means that the object <0|as, which I will define as <A|, when applied to a state extracts the amplitude (and, after taking the modulus squared, the probability) that the system is in a state |A>. Consequently we also know that |<A|B>|^2, the probability that a system in state |B> (which is different from and orthogonal to |A>) is in state |A>, is zero.

We can use this tool to extract the probability that the system, our knowledge of which we represent by the wavestate |ψ>, is in practice in the state |A>. This is |<A|ψ>|^2 = |α|^2.
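
As a minimal numerical sketch of this notation (the states |A> and |B> are represented as orthonormal vectors, and the values of α and β are made up), the probabilities are extracted exactly as described:

    import numpy as np

    # Orthonormal basis states |A> and |B>, represented as column vectors.
    A = np.array([1, 0], dtype=complex)
    B = np.array([0, 1], dtype=complex)

    # Amplitudes (made-up values, normalised so the probabilities sum to one).
    alpha, beta = 0.6, 0.8j
    psi = alpha * A + beta * B              # |psi> = alpha|A> + beta|B>

    # <A|psi> extracts the amplitude; its modulus squared is the probability.
    prob_A = abs(np.vdot(A, psi)) ** 2
    prob_B = abs(np.vdot(B, psi)) ** 2
    print(prob_A, prob_B, prob_A + prob_B)  # 0.36 0.64 1.0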

This is all just notation. It expresses the idea that

  1. There is a physical system which can exist in various states at a given moment in time.
  2. These states are a parametrisation of the potentia of a set of individual particles each with their own location and perhaps various other properties.
  3. Sometimes we know precisely what the actual state of the system is, for example if we have just prepared it in a given state.
  4. Sometimes we don't know precisely what the actual state of the system is. This might be because the evolution of the system in time is dependent on the free choices of God, which we cannot predict. We don't know which state the system will evolve into, only that it must evolve into some state.
  5. We do, however, know the possible states that God could move the system into, and we want to be able to calculate the probability (based on various assumptions) that the system is in one of those states.
  6. We need some means of representing the state with no particles, and creation and annihilation of physical states, as these can be used to represent God's actions.
  7. We need a means therefore of representing our knowledge of the system mathematically such that we can extract probabilities from it. Our knowledge of the system is expressed as a mathematical object that encapsulates the data in terms of the states themselves, and a number, assigned to each of the states, that can be converted into a probability.
  8. We need a means to extract probabilities from the object we use to represent our knowledge.
I don't want to claim that this notation is the only means to achieve these goals -- it certainly isn't. Why use this notation rather than something else? Whatever notation we use, it needs to be mathematically consistent, which limits us a bit. Some of the alternative ways we could represent our knowledge are mathematically equivalent to what I am proposing here; in these cases the choice is merely one of personal preference. Others are not mathematically equivalent, and here we need some reason to choose this means to model nature rather than some other. For example, we could make α and β probabilities rather than amplitudes. However, the amplitude formulation is more flexible, since it allows us to model both systems which exhibit interference effects and systems which don't. Making α and β probabilities doesn't allow us to model interference effects, since probabilities are always positive and consequently can't cancel each other out. On the other hand, if the laws governing how the system evolves are such that all the amplitudes keep the same phase (for example, they might be restricted to positive real numbers), then the amplitude representation can simulate a system with no interference and becomes equivalent to one which uses probabilities directly. Using amplitudes rather than probabilities is thus more flexible. We might not need that additional flexibility, but we don't know that when we are just starting out.
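
A short sketch of the difference (a toy two-path set-up, not any particular experiment): adding amplitudes before squaring allows the two contributions to cancel, whereas adding positive probabilities directly never can.

    import numpy as np

    # One outcome reachable by two paths, each contributing an amplitude of
    # magnitude 0.5 with a relative phase between them (made-up numbers).
    amp1 = 0.5
    for phase in (0.0, np.pi / 2, np.pi):
        amp2 = 0.5 * np.exp(1j * phase)
        p_amplitudes = abs(amp1 + amp2) ** 2                 # add amplitudes, then square
        p_probabilities = abs(amp1) ** 2 + abs(amp2) ** 2    # add probabilities directly
        print(round(phase, 2), round(p_amplitudes, 3), round(p_probabilities, 3))
    # At phase = pi the two contributions cancel completely; a sum of positive
    # probabilities can never reproduce that.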

In practice (and I am getting ahead of myself here) a probability is calculated from various known causes and a model which parametrises the unknown causes. That model ought to be based on some symmetry. For example, when we model the roll of a dice, we use the symmetries of the dice to constrain the probabilities we assign to each possible outcome. A physical system will also have various symmetries. When we model that physical system, we need to parametrise our uncertainties in a manner where we can represent those symmetries. The amplitude formulation does allow us to represent gauge symmetries (which correspond to a rotation of the amplitude by a complex phase, or by a unitary matrix if the amplitude is best represented by a vector of complex numbers). Using probabilities to parametrise our uncertainty of which state the system is in does not so easily allow us to represent those symmetries (unless we use a density matrix formulation, where the probability is expressed in terms of the eigenvalues of a Hermitian matrix). Our final result needs to be compared against a frequency distribution, so we need a means to convert amplitudes to probabilities. But we have that via the Born rule. But this conversion should only take place when we are ready to compare with a frequency distribution.

Now, the basis used to construct the wavestate is not unique. One can also identify states

|A'> = cos θ |A> + sin θ |B>
|B'> = -sin θ |A> + cos θ |B>

These primed states satisfy the properties of completeness, orthogonality and irreducibility, so mathematically they are just as valid as the original unprimed states. Like the unprimed states, they can also be expressed in terms of creation operators acting on the vacuum state, only the primed creation operators would be superpositions of the original operators. In practice, this freedom to change basis is very useful, and I will keep coming back to it.
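
A quick numerical check of this change of basis (θ is arbitrary, and the amplitudes are the same made-up values as before): the primed states remain orthonormal, and the total probability extracted from a wavestate is unchanged.

    import numpy as np

    theta = 0.3                                    # arbitrary rotation angle
    A = np.array([1, 0], dtype=complex)
    B = np.array([0, 1], dtype=complex)

    A_prime = np.cos(theta) * A + np.sin(theta) * B
    B_prime = -np.sin(theta) * A + np.cos(theta) * B

    # The primed basis is still orthonormal ...
    print(np.vdot(A_prime, B_prime))               # 0: orthogonal
    print(np.vdot(A_prime, A_prime).real)          # 1: normalised

    # ... and the total probability extracted from a wavestate is unchanged.
    psi = 0.6 * A + 0.8j * B
    total_unprimed = abs(np.vdot(A, psi)) ** 2 + abs(np.vdot(B, psi)) ** 2
    total_primed = abs(np.vdot(A_prime, psi)) ** 2 + abs(np.vdot(B_prime, psi)) ** 2
    print(total_unprimed, total_primed)            # both 1.0 (up to rounding)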

The first effect of this freedom is that it allows us to answer a key question. We know that if you take a vacuum state, and create particle s and then create particle p that should give you the same state as taking the vacuum state and creating particle p and then particle s. But the amplitude need not be the same. Thus

a†s a†p|0> = λ a†p a†s|0>

What is the constant of proportionality λ? I won't go into the proof here, but by using the freedom to rotate states we can show that it must be either 1 or -1. This indicates that there are two different types of particles, Bosons where λ = 1 and Fermions where λ = -1. The notable thing about Fermions is that you cannot have two Fermions in the same state at the same time: a†s a†s|0> = 0.

Similar constraints apply to creation and annihilation operators. For Fermions

(a†s ap + ap a†s)|X> = δp,s|X>.

For Bosons,

(b†s bp - bp b†s)|X> = -δp,s|X>.

δp,s here gives the value of 1 if p = s and 0 otherwise. If p and s are continuous variables (such as location), then it is replaced with a Dirac delta function, which is basically the same principle adapted to continuous variables.

This means that when you have a system involving Fermions, you have to be careful about the ordering of the creation and annihilation operators. You have to pick a convention, and stick to it.
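
These relations can be verified numerically for a toy system of two Fermionic states (a sketch only; the Jordan-Wigner construction used here to give the operators a concrete matrix form is a standard device, not anything specific to this interpretation):

    import numpy as np

    # Single-state Fermionic annihilation operator: a|1> = |0>, a|0> = 0.
    a = np.array([[0, 1], [0, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    Z = np.diag([1, -1]).astype(complex)

    # Jordan-Wigner construction of two Fermionic states s and p.
    a_s, a_p = np.kron(a, I2), np.kron(Z, a)
    a_s_dag, a_p_dag = a_s.conj().T, a_p.conj().T

    vac = np.array([1, 0, 0, 0], dtype=complex)    # the vacuum state |0>

    # (a†s ap + ap a†s) equals delta_{p,s} times the identity:
    print(np.allclose(a_s_dag @ a_p + a_p @ a_s_dag, np.zeros((4, 4))))  # True: vanishes for p != s
    print(np.allclose(a_s_dag @ a_s + a_s @ a_s_dag, np.eye(4)))         # True: identity for p = s

    # Swapping the order of two creation operators flips the sign (lambda = -1) ...
    print(np.allclose(a_s_dag @ a_p_dag @ vac, -(a_p_dag @ a_s_dag @ vac)))
    # ... and two Fermions cannot occupy the same state.
    print(np.allclose(a_s_dag @ a_s_dag @ vac, 0))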

Evolution of the system in time.

The question we want to ask is: if we start with a system in a given initial state, what is the probability that it will finish in a given final state? Obviously the probability is only defined when we repeat the experiment enough times to reliably measure a frequency distribution. We now have most of the tools we need to answer this question. We know how to construct the initial state, by applying various creation operators to the vacuum state. This initial state represents our knowledge of the system. We then need a time evolution operator which updates our knowledge of the system. This, when applied to the initial state, will change it into a superposition state, containing a sum over each of the possible outcomes, with each associated with the amplitude that the system is in that state. Finally, we know how to construct the comparison operator to extract the amplitude for each final state.

So the question is: what is the time evolution operator, which I will label U? Firstly, we need the conservation of probability, which forces it to be a unitary operator, so of the form e^(iH), where H is a Hermitian operator which has to be dimensionless. And we also know what form H must take. We first of all break the evolution into small time increments. For each time increment, H must consist of a sum (or integral, if the parameters are continuous), over various terms, with each term applicable to a particular initial state. This would consist of an annihilation operator to destroy that initial state, an operator to represent evolution in time, and then a sum over all the possible final states. But we know which operator represents evolution of a continuous function from time t to time t + δt. This comes from a standard Taylor expansion.

f(t + δt) = (1 + δt ∂/∂t + (1/2)(δt)^2 ∂^2/∂t^2 + …) f(t) = e^(δt ∂/∂t) f(t),

where ∂/∂t, which I shall write as ∂t, represents the partial derivative with respect to time (holding any other variables constant). The exponential of the operator is a shorthand for its Taylor series expansion. The derivative is anti-symmetric, so if we were to generalise this to a multi-variable system with operators it would be anti-Hermitian.
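
A quick numerical check of this identity (a sketch, using sin t as an arbitrary sample function and truncating the Taylor series at a finite order):

    import math

    def shift_by_taylor(derivs, t, dt, order=12):
        # Sum (dt^n / n!) d^n f/dt^n: the truncated exponential of dt * d/dt.
        return sum(dt ** n / math.factorial(n) * derivs(t, n) for n in range(order + 1))

    def sin_derivs(t, n):
        # n-th derivative of sin(t), cycling through sin, cos, -sin, -cos.
        return [math.sin, math.cos,
                lambda x: -math.sin(x), lambda x: -math.cos(x)][n % 4](t)

    t, dt = 0.7, 0.3
    print(shift_by_taylor(sin_derivs, t, dt))   # ~0.84147, i.e. sin(1.0)
    print(math.sin(t + dt))                     # the exact shifted value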

So the time evolution operator for an infinitesimal time slice must take a form similar to

U = e^((1/ℏ) δt ∑p,s c(p,s,t) a†p ∂t as + …)

c(p,s,t) is a weight, which recognises that the creation operators referring to different states could have different contributions to the sum. It is constrained by the need for U to be unitary (to preserve probabilities), but otherwise we need not suppose that every parameter (for example every location in space) is treated the same in the time evolution operator. The weights can also in principle vary in time. The term I listed acts on initial states which only have a single particle in them. I am here adopting a picture where the basis states in a space/time representation remain constant in time, as do the creation and annihilation operators themselves (after we fix a reference frame), but the amplitudes, which are an essential part of the superposition which describes our knowledge of which states are occupied, vary in time. The creation operators themselves (in a space/time basis) will depend on the location coordinates (as they create a particle in a particular location). However, this does depend on a choice of Lorentz basis, and after a Lorentz transformation things could get a bit more mixed up. I thus adopt the Schrödinger picture.

It also seems that we need terms with multiple creation and multiple annihilation operators, to deal with those states. However, in practice these additional terms don't contribute. If I were presenting this rigorously, I would need to wait a few more steps of the argument before I could formally get rid of them. But this is the simplified version and I don't want to keep writing that + … so I will neglect them for now and explain why I can neglect them later.

So, we have a time evolution operator that resembles

U = e^((1/ℏ) δt ∑p,s c(p,s,t) a†p ∂t as)

This is just for an infinitesimal time slice. To get to a longer duration, we simply multiply these together. We write this as a time ordered integral,

U = T[e^((1/ℏ) ∫dt ∑p,s c(p,s,t) a†p ∂t as)]

Time ordering just means that we break it down in infinitesimal sections, and apply them in order. It is required because we are dealing with non-commuting operators, which complicates the usual rules that apply to exponentials of functions.
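
A sketch of why the time ordering matters (the time-dependent Hermitian generator here is made up purely so that the operators at different times fail to commute): building the evolution as an ordered product of small slices gives a different answer from naively exponentiating the time integral, and it is the ordered product that corresponds to applying the slices one after another.

    import numpy as np
    from scipy.linalg import expm

    # A made-up time-dependent Hermitian generator; H(t1) and H(t2) do not commute.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    def H(t):
        return np.cos(t) * X + np.sin(t) * Z

    n_steps, T = 2000, 1.0
    dt = T / n_steps

    # Time-ordered product: apply the infinitesimal slices one after another,
    # with later times acting later (i.e. multiplied on the left).
    U_ordered = np.eye(2, dtype=complex)
    for k in range(n_steps):
        U_ordered = expm(1j * H(k * dt) * dt) @ U_ordered

    # Naive exponential of the integral, ignoring the ordering.
    integral = sum(H(k * dt) * dt for k in range(n_steps))
    U_naive = expm(1j * integral)

    print(np.allclose(U_ordered.conj().T @ U_ordered, np.eye(2)))   # still unitary
    print(round(np.abs(U_ordered - U_naive).max(), 4))              # nonzero: the two differ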

In this form, it isn't of much use. It just tells us that the amplitudes that parametrise the wave-state evolve in time. What we want is an expression in terms of spatial derivatives, and, perhaps, various other terms:

U = T[e^((1/ℏ) ∫dt ∑p,s,i c(p,s,t) a†p γi(x,t) ∂xi as)]

This would describe motion in space over time. But what should we choose for the parameters γ? It seems that anything will suffice. We cannot narrow this down further just from the requirement for mathematical consistency.

To progress, we need to remember what this represents. It is a mathematical representation that allows us to construct a superposition from which we can extract amplitudes corresponding to probabilities that God will evolve a given initial state into a given final state. It describes how God interacts with the universe (in the absence of miracles). We can therefore expect it to reflect some of the attributes of God.

The time evolution operator operates on every state of the universe, and for any state it can be used to compute amplitudes to tell us how that state evolves in time. It thus does not assume that any particular configuration of matter actually exists. The mathematical form of the operator depends only on God and what could exist in the universe, and not on any material beings which actually do exist. God does not depend in any way on any material being. The time evolution operator is a generalised description of the actions of God applicable to all circumstances. Its mathematical form therefore cannot vary depending on whether or not any particular material being exists.

For example, God is omnipresent, in the sense that He does not exist within space and time, but is equally causally connected to every point in space. We would expect this to be reflected in the time evolution operator as a translation symmetry. The mathematical form of H would not contain any mention of an absolute location in space, or prefer one location over another. This means that, if the parameters p and s represent in part spatial locations, the weight c can have no dependence on them. Likewise, God is eternal, which implies that the time evolution operator should have a time translation symmetry (constrained only because the initial and final states are at a particular moment in time). This implies that the weight c can have no dependence on time. Likewise, because God is not constrained in space, one length or duration in time is the same as any other to God (in the absence of material structure). We thus expect the time evolution operator to be symmetric under transformations of the scale. If God is outside space, then he also has no preferred direction, implying a rotational symmetry. If God is both outside space and time, then one velocity is the same as any other to Him, which, together with a Minkowski metric, implies a Lorentz symmetry. Finally, the phase of the amplitude does not make any difference to the probability (which is what we ultimately measure). There is no reason why God would prefer one phase over another. This again leads to a symmetry, in this case a gauge symmetry.
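
One concrete consequence of such a symmetry can be sketched numerically (a toy single-particle model on a small periodic lattice, with made-up nearest-neighbour weights): if the weight c depends only on the separation between two locations, and not on the locations themselves, the resulting operator commutes with translations, and plane waves are its eigenstates.

    import numpy as np

    N = 8
    # Translation-invariant weights: c depends only on the separation (p - s) mod N.
    c = {0: 2.0, 1: -1.0, N - 1: -1.0}        # made-up nearest-neighbour weights
    H = np.zeros((N, N))
    for p in range(N):
        for s in range(N):
            H[p, s] = c.get((p - s) % N, 0.0)

    # The operator that shifts every site one step along the lattice.
    shift = np.roll(np.eye(N), 1, axis=0)
    print(np.allclose(H @ shift, shift @ H))  # True: H commutes with translations

    # Consequently plane waves are eigenstates of H.
    k = 2 * np.pi * 3 / N                     # one of the allowed momenta
    plane_wave = np.exp(1j * k * np.arange(N)) / np.sqrt(N)
    Hpw = H @ plane_wave
    eigenvalue = Hpw[0] / plane_wave[0]
    print(np.allclose(Hpw, eigenvalue * plane_wave))   # True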

A symmetry can be either global or local. For example, a global symmetry means that whatever we are studying (in this case the mathematical form of the time evolution operator) remains unchanged when we shift everything one cm to the right. A local symmetry allows the shift to vary from one location to another. So a local translation symmetry would mean that the mathematical form of the time evolution operator is unchanged when we shift one location 1cm to the right, another location 2cm to the right, and so on. We can either think of this in terms of moving the particles themselves, or by changing whatever scale we use to represent the locations of the particles. For example, suppose that we construct our rulers out of elastic, and mark off 1cm marks when it is not under tension. We then place various objects at different marks along the ruler. Next, we stretch the elastic. The distance markers on the ruler will move in relation to the objects. There is no reason why the ruler should be stretched uniformly; some distance markers might move differently from others. So the distance between the 10cm and 11cm marks might become double that between the 9cm and 10cm marks, when previously they were the same. An object which was previously at the 10cm mark will no longer be at that mark, but at some other value. We say we are stretching the elastic in this illustration because all of these things, the elastic and the objects, are sitting on a table, and that table is unchanged. But suppose that our only means of visualising the location of the objects was through this ruler, and suppose the ruler is made of a material where we could not tell just by looking at it whether or not it is stretched. All we know is that a certain object was previously alongside the 10cm mark, and now it sits alongside the 8cm mark (for example). There is no way to distinguish whether the object was moved or the ruler was stretched. Both would have the same result.

So I have suggested that the time evolution operator should satisfy various symmetries, on account of the fact that it reflects God's nature and that God is outside space and time, and thus to God there ought to be no preferred location, time, distance scale, or anything else. But are these global symmetries or local symmetries? Is God's ruler made of some rigid material or elastic? (I speak analogously when discussing the ruler; the term merely denotes the means by which God parametrises distances and it is not really any physical object.) I think it clear that they should be local symmetries. In both global and local symmetries, distances are defined by marks on a ruler. We cannot judge whether or not the ruler is stretched, or whether or not it is stretched uniformly, without some independent measure which tells us that the distance between marks is different depending on where you are on the ruler. We exist in space and time, and are contained within a certain volume in space. Thus we can use our own bodies (or the bodies of those things around us) to judge whether or not a ruler is stretched. But this option is not open to God, since God is outside space and time, and the mathematical form of the time evolution operator does not depend on the actual existence of any material being. Thus there is no independent measure by which God can distinguish between a stretched ruler in a rigid space-time and a rigid ruler with stretched objects in space. To God, they are simply two different ways of expressing the same thing. Thus the symmetries should be local. A God's-eye view of the universe has no means to distinguish whether the gaps between the marks on the ruler are deformed in different ways in different places, because all it has is the ruler.

When we create a physical representation with its coordinate system, we in effect impose a set of rulers on the universe (including the ruler that parametrises the phase of the amplitude). But there are numerous different ways of doing this. Which should we choose? It is an entirely arbitrary choice, so we should get the same result however we do it. This means that there will be a symmetry: the final probability distribution will not be dependent on this arbitrary choice of coordinates. This is a weaker condition than stating that the Hamiltonian (or time evolution) operator should be subject to the symmetries. It is possible to have a symmetry in the results without a symmetry in the Hamiltonian. There are two different places where the symmetry could enter into our equations. There is the process where we map from reality to representation and representation to reality. Since the coordinate system is arbitrary, this should manifest the symmetry. But we can also ask if that symmetry also exists in reality, and in particular constrains how things move in time. If so, then we ought to also build the symmetry into the Hamiltonian to respect the symmetry in reality. But that is a separate question from point of view invariance, and only loosely related to our choice of coordinate systems.

But in this interpretation we recall that the Hamiltonian operator represents how God interacts with the universe, or how particles in the universe "perceive" God. Since God is outside of space and time, this cannot depend on where those particles are in the universe, or when they are, or how fast they are travelling, and so on. In other words, in this theistic model, there is a symmetry that describes how things relate to God in reality. Since God moves those things, this symmetry will also constrain how God moves those things. Now we map to the representation, create the coordinate system, and so on, but to properly model the physics we need to reflect this symmetry in our description of how things move. In the model, that description is parametrised by the Hamiltonian. So the Hamiltonian, if it is to correctly represent how God moves things in reality, should also reflect those symmetries. Obviously the Hamiltonian is expressed in terms of a coordinate system, so we will also represent the symmetry in terms of the coordinate system. But nonetheless, the attributes of God and the idea that the Hamiltonian represents a description of how God interacts with matter mean that the Hamiltonian should reflect the various local symmetries.

In particular, with gauge symmetry, one cannot directly compare the amplitude that a particle should be at one location with the amplitude that it should be at a different location. As such, there is no reason why we should need to place the zero point of the phase of the amplitude at the same point at two different locations. In fact, I don't think it is meaningful to compare the zero point of the phase of the amplitude at different locations. Certainly it would make no difference to God. We should therefore consider a local gauge symmetry.

I will start by focusing on a local gauge symmetry and global Lorentz symmetry. The local Lorentz symmetry and local translation symmetry are important -- they are the symmetries that lie behind general relativity, and thus are key if we want to convert the arguments of this post to a full theory of quantum gravity -- but lie beyond the scope of this post.

As an example, let me consider the amplitude for a two body scattering experiment. So we have two particles in the initial state at time t0, and we wish to calculate the amplitude that the two particles will be in a given final state at a later time t1. The expression for this amplitude is

A = <0| T[ a_{y1,t1} a_{y2,t1} U(t1,t0) a†_{x1,t0} a†_{x2,t0} ] |0>

I put a specific time index on the operators used to create and annihilate the particles because we start and end the experiment at a particular time, which is going to be important once we perform a Lorentz transformation.

We now perform a Lorentz transformation. This changes the coordinates of the various particles, and it might also modify the time evolution operator. But nothing has physically changed. All we are doing is reparametrising the coordinate system. It is still the same physical process, the same particles in the initial state, and the same particles in the final state. Consequently the amplitude should be invariant under a Lorentz transformation. In the new basis, the expression for the amplitude would read

A = <0| T[ a_{y'1,t'11} a_{y'2,t'12} U'(t'11,t'02) a†_{x'1,t'01} a†_{x'2,t'02} ] |0>

The spatial and temporal indices have all changed to the new basis, and, because this is special relativity and the two particles in the initial state are not at the same location, the creation operators of that initial state would no longer be parametrised by the same time in this reference frame. That these two amplitudes (and the amplitude corresponding to every other global Lorentz transformation) are identical places a strong constraint on the possible forms of the time evolution operator U, when expressed in terms of spatial derivatives. There is an even stronger constraint when we force the time evolution operator itself to be subject to those symmetries. There are then only a small number of possibilities, forming three families of operators (consistent with the need to renormalise). One of these families describes spin-1 Bosons. Another describes spin-0 Bosons. And the third describes spin-1/2 Fermions. The key constraint is that the Lagrangian from which the Hamiltonian or time evolution operator is constructed should satisfy Lorentz symmetry.
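
To illustrate why the initial-state operators pick up different times after the boost, here is a minimal numerical sketch (my own illustration, with an invented velocity and invented particle positions, not a calculation from this post). It applies a standard Lorentz boost to two events that are simultaneous in the original frame but at different locations, and shows that they are no longer simultaneous in the boosted frame.

```python
import numpy as np

c = 1.0        # natural units
v = 0.6        # boost velocity along x (illustrative)
gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)

def boost(t, x):
    """Lorentz boost along the x axis."""
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

t0 = 0.0
x1, x2 = 0.0, 1.0                  # the two initial particles at different locations
t01_prime, _ = boost(t0, x1)
t02_prime, _ = boost(t0, x2)
print(t01_prime, t02_prime)        # different times: equal-time operators in one frame
                                   # are not equal-time operators in another
```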

For spin-1/2 Fermions, we are forced by this symmetry to split the Fermion into a four-component spinor (the four components describe particle and anti-particle, each with two spin states), and use the Dirac formulation to write

a†_p t_{ps} a_s = ∑_j a†_p ( (γ^0 γ^j)_{ρσ} ∂_j + i γ^0 m δ_{ρσ} ) a_s.

where j runs over the spatial indices and the γ are the set of anti-commuting 4x4 Dirac matrices. ρ and σ between them parametrise the matrix components needed to keep the amplitude Lorentz invariant. m is a parameter which in practice represents the particle mass. Lorentz invariance also requires locality, i.e. p and s are restricted to parameters at the same location. The remaining parameters in p and s describe the spinor index, and again these are restricted (because we need this to be a scalar for Lorentz invariance to be satisfied) so that the "final" result is

∑_{p,s} a†_p t_{ps} a_s = ∫dx ∑_{i,ρσ} a†_{x,ρ} ( (γ^0 γ^i)_{ρσ} ∂_i + i γ^0 m δ_{ρσ} ) a_{x,σ}.
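
The anti-commutation relations of the Dirac matrices mentioned above are easy to verify numerically. The following is a small sketch of my own (the specific representation of the matrices is a standard textbook choice, not anything stated in this post); it constructs the four 4x4 Dirac matrices and checks that {γ^μ, γ^ν} = 2 η^{μν}.

```python
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]]).astype(complex)
gammas = [gamma0] + [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])                    # Minkowski metric

for mu in range(4):
    for nu in range(4):
        anticom = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticom, 2.0 * eta[mu, nu] * np.eye(4))
print("anti-commutation relations verified")
```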

If we combine this with local gauge invariance, then there is a problem. Local gauge invariance implies that the amplitude should be unchanged under a transformation such as a_s → exp(i α(x)) a_s. The expression above is not invariant under this transformation. The solution is to introduce a spin-1 Bosonic field, which (neglecting polarisation for simplicity) can be denoted by the four vector (A_0, A_i). (A denotes a sum over creation and annihilation operators for the spin-1 gauge field.) If this transforms in the right way under a gauge transformation, then the change in A will cancel out the change in the creation/annihilation operators a, leaving the whole expression invariant under a local gauge transformation. This leaves us with

∑_{p,s} a†_p t_{ps} a_s = ∫dx ∑_{i,ρσ} a†_{x,ρ} ( (γ^0 γ^i)_{ρσ} ( ∂_i + i e A_i ) + i γ^0 m δ_{ρσ} + i γ^0 e A_0 ) a_{x,σ}.

e is another parameter; it describes the strength of the interaction between the Fermion and the gauge field.
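
To see how the gauge field compensates for the local phase rotation, here is a small symbolic sketch (my own illustration; the one-dimensional fields ψ and A and the transformation rule for A are standard textbook assumptions rather than anything defined in this post). It checks that the covariant combination (∂_x + ieA)ψ picks up only the overall phase exp(iα(x)) when ψ → exp(iα(x))ψ and A → A − ∂_xα/e, so bilinears built from it remain invariant.

```python
import sympy as sp

x = sp.symbols('x', real=True)
e = sp.symbols('e', real=True, positive=True)
alpha = sp.Function('alpha')(x)      # local gauge parameter
psi = sp.Function('psi')(x)          # Fermion field (one dimension, for illustration)
A = sp.Function('A')(x)              # gauge field

D_psi = sp.diff(psi, x) + sp.I * e * A * psi          # covariant derivative of psi

psi_t = sp.exp(sp.I * alpha) * psi                    # gauge transformed Fermion
A_t = A - sp.diff(alpha, x) / e                       # gauge transformed gauge field
D_psi_t = sp.diff(psi_t, x) + sp.I * e * A_t * psi_t  # covariant derivative after the transformation

# the transformed covariant derivative equals exp(i alpha) times the original
print(sp.simplify(sp.expand(D_psi_t - sp.exp(sp.I * alpha) * D_psi)))   # -> 0
```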

I won't go into the details, but we can (and should) add to this a Maxwell term to describe the possible motion of the gauge field, and possibly a CP-violating term as well. And this gives us Quantum Electrodynamics, the theory which describes the interaction between electromagnetic radiation (or photons) and electrons.

We can expand this theory further by introducing additional gauge Bosons. We add additional indices to the Fermion and Boson creation and annihilation operators, which allows new gauge symmetries rotating between those indices. The gauge symmetry of electromagnetism is known as U(1). The relevant additional symmetries are parametrised as SU(2) (when we have an additional parameter on the creation operator which can take two values) and SU(3) (when we introduce another parameter which can take three values). In principle we can go further, consistent with the symmetries, but these seem to be all that are present in nature, and they give us the weak and strong nuclear forces. We can also introduce a set of spin-0 Bosons, and if we couple them to the gauge fields, then through electroweak spontaneous symmetry breaking we can generate terms that resemble masses for the gauge Bosons, terms which resemble masses for the Fermion fields, and a Higgs Boson which can in principle be observed. We can add additional Fermion fields, and the relative charges of these fields are fixed by the need to cancel out a potential breaking of another symmetry when we transform the integration measure. This gives us a family of Fermions, gauge Bosons, and spin-zero Bosons, all connected together. There is no reason why we are restricted to just one family, and it appears that nature gives us three. If we have more than one family, then it is possible to have some difference in how the mass terms are represented in the U(1) x SU(2) part of the time evolution operator and the SU(3) part of the time evolution operator. This leads to a bit more freedom in how we construct the time-evolution operator, which is parametrised by the CKM matrix.

And that's the standard model of particle physics (plus some additional right-handed neutrinos which only interact with the Higgs field). It requires around 20 parameters not constrained by the symmetries, which have to be measured by experiment. Otherwise, the symmetry requirements, mathematical consistency, and the fact that we are in 3+1 dimensions give us very little freedom. We can add or take away various gauge symmetries, or add and take away additional families of particles or some additional Boson fields, but that is about it.

This approach has numerous similarities to a consistent histories interpretation of quantum field theory. It treats the amplitudes in terms of a parametrisation of our uncertainty. It parametrises the time evolution of a system in terms of families of histories. And it uses pretty much the same mathematics to express it. I haven't discussed it, but the single framework rule still applies. However it is different from consistent histories in how it explains the indeterminacy of the time evolution. In consistent histories this is just supposed. Here it is due to the free action of God. Equally, it distinguishes between substance causes and event causes. In particular, event causes are described by the state of matter just before the event, and also the free choice of God. Since God is not constrained in space and time, there is no philosophical reason why we cannot have non-local correlations between events. Whether we would have such correlations in practice depends on the calculation of the amplitudes for each of the possible outcomes (both those which are correlated and those which are not), which depend on various conservation laws which ultimately are derived from the symmetries implied by the divine attributes. The weakness of consistent histories was in explaining these non-local correlations -- the interpretation basically didn't answer the underlying question. That's not a weakness here.

Renormalisation

There is one additional complication I ought to discuss.

I won't go into details about how to perform calculations of amplitudes. It is a tricky task. The non-interacting theory can be solved exactly and analytically, but that's not much use since it is not the correct theory. There are various ways to construct solutions using controlled approximations.

One of these is lattice QCD. It is possible to construct a quantum gauge field theory on a discrete space time grid. This breaks Lorentz symmetry, so it is not the correct theory, but it is mathematically consistent and we might hope that as the lattice spacing tends towards zero we recover continuum physics. There are, however, several complications with lattice gauge theory.

The first is determining the lattice spacing. This is usually done by taking one quantity, measured on the lattice in units of the lattice spacing, and comparing it to a physical observable. For example, in a confining theory such as QCD, one can measure the string tension that binds two quarks, and compare it to the value deduced from experimental results. This value of the lattice spacing is then used in every other dimensional measurement.
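
As a concrete (if simplified) illustration of that conversion, here is a short sketch of my own. The lattice number below is invented for the example; only the physical string tension is roughly realistic.

```python
import numpy as np

sigma_lat = 0.045        # string tension measured on the lattice, in units of 1/a^2 (invented value)
sigma_phys = 0.19        # physical string tension in GeV^2, roughly (440 MeV)^2
hbar_c = 0.1973          # conversion factor, GeV fm

a_inv = np.sqrt(sigma_phys / sigma_lat)   # inverse lattice spacing in GeV
a_fm = hbar_c / a_inv                     # lattice spacing in fm
print(a_inv, a_fm)

# every other dimensionful lattice result is converted with this same spacing,
# e.g. a mass measured as m*a = 0.4 corresponds to 0.4 * a_inv in GeV
```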

Secondly, we tend to simulate lattice QCD in a four dimensional Euclidean space time. There is a simple trick, known as a Wick rotation, to transform the physical Minkowski space time into a Euclidean space time: you effectively multiply the time variable by the square root of minus one. There are ways to convert results in the Euclidean space time back into Minkowski space time, and this is all fine in the abstract representation. But it moves the representation one step further away from reality. So we have to be even more careful when drawing an analogy from the representation to reality.

Thirdly, taking the Fourier transform of the derivative operator in the continuum gives the momentum operator with eigenvalues p. However, on the lattice the eigenvalues of the naive derivative operator are sin(pa)/a. At low momentum (in comparison to the lattice spacing), this makes no difference, but at high momentum it does, and in particular there is another point with a zero eigenvalue for the momentum operator at the edge of the Brillouin zone, p = π/a. This in practice manifests itself as an extra Fermion in the theory, known as a Fermion doubler. So, for example, one would not construct a theory with a single up quark, but two identical up quarks, which is obviously not the same as reality. This can be avoided by modifying the approximation used for the derivative operator in order to give the doublers a mass proportional to the inverse of the lattice spacing. The doublers are still there, but because it requires (in the continuum limit) so much energy to create them, they don't actually affect any physics.

However, this also has its drawbacks. There is another symmetry I haven't discussed, known as chiral symmetry, which is based on a decomposition of the Fermion spinor into what are (misleadingly) referred to as left and right handed components. This is particularly important in the weak interaction, where the left and right handed Fermions have different couplings to the gauge Bosons that mediate the weak force. Even without this, the spontaneous breaking of chiral symmetry explains why the pions have a low mass. On a more practical level, chiral symmetry is key to the effective field theory known as chiral perturbation theory, which is used to control extrapolations in lattice field theory. The problem is that the various modifications used to remove the Fermion doublers also break chiral symmetry (there is an explicit proof of this, the Nielsen-Ninomiya theorem).

There are work-arounds used for this in lattice QCD calculations. One can keep some doublers and take fourth roots of the determinant (which seems to work practically, but is a bit mathematically dodgy and not the correct solution at a foundational level). Or one can treat the practical effect of the breaking of chiral symmetry as though it were an addition to the Fermion mass, and just subtract that term from the Fermion mass. Or one can modify the chiral symmetry so that the transformations of the creation and annihilation operators under the symmetry are not Hermitian conjugates of each other; this approach is very computationally complex, and only valid in Euclidean space time (i.e. with an imaginary time axis), otherwise the time evolution operator is no longer unitary and probability is no longer conserved. These work-arounds are fine if you just want to use lattice field theory as a tool to calculate amplitudes. All three methods (known as Staggered Fermions, Wilson Fermions, and Ginsparg-Wilson Fermions) have been used to get high quality and high precision results. But the tricks needed to get these methods to work are not rigorous enough to construct the theory from its foundations. Lattice gauge theory without these tricks is a coherent quantum field theory, but differs from reality. With these workarounds, you can approximate reality in calculations, but it doesn't work as a foundational theory in Minkowski rather than four dimensional Euclidean space time (especially in the electro-weak sector, where you need chiral symmetry). We do not live on a lattice.
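
As a quick numerical sketch of the doubling problem (my own illustration, not part of any lattice calculation described here): the naive lattice dispersion sin(pa)/a has a second zero at the edge of the Brillouin zone, while a Wilson-style term (1 − cos(pa))/a vanishes at p = 0 but is of order 1/a at the zone edge, which is how the doubler is given a large mass.

```python
import numpy as np

a = 1.0                                      # lattice spacing (lattice units)
p = np.linspace(-np.pi / a, np.pi / a, 9)    # momenta across the Brillouin zone

naive = np.sin(p * a) / a                    # vanishes at p = 0 and at p = ±π/a (the doubler)
wilson = (1.0 - np.cos(p * a)) / a           # vanishes at p = 0 but is ~ 2/a at p = ±π/a

for pi_, n_, w_ in zip(p, naive, wilson):
    print(f"p = {pi_:+.3f}   sin(pa)/a = {n_:+.3f}   Wilson term = {w_:.3f}")
```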

The fourth problem in lattice gauge theory is how to understand the continuum limit. We talk about a lattice spacing, but treat it as a dimensional quantity, measured in some length scale. But there is no length scale in the underlying theory. If you have an infinite discrete lattice, then you have an infinite number of lattice sites in each direction with a gap between them. What do you have if you halve the lattice spacing, and so double the number of lattice sites? An infinite number of lattice sites in each direction, with a gap between them. In other words, since there is no fundamental scale which can be used to measure the gap, you have what seems to be exactly the same theory.

But you can be slightly more sophisticated than this. For example, suppose that you have got some field spread out over the lattice. It will take particular values at each lattice site. Suppose that you then want to increase the lattice spacing by a small amount. Clearly, your lattice sites move a bit, so you need to perform some sort of averaging procedure to adjust the values. This averaging procedure will also modify the couplings in the Hamiltonian operator. From this, we can derive an equation, known as a Callan-Symanzik equation, describing how the various couplings change with the lattice spacing. Effectively the lattice spacing is reflected in the value of these couplings. This is the only way you can distinguish between theories at two different lattice spacings on an infinite lattice. It is particularly obvious on a finite lattice that this averaging procedure is not reversible: you lose a certain amount of information every time you average. However, one can also modify the couplings in the same way through transformations of the various fields which are reversible. As stated, changes to the couplings are equivalent to changes in the lattice spacing, so these transformations of the fields also represent changes in the lattice spacing. But they are also just a change in basis of the Fermion fields. The advantage of a reversible procedure such as this is that it allows us to either increase or decrease the lattice spacing. One can think of a space of couplings, and the averaging procedure maps out a trajectory in this space of couplings. The precise path of this trajectory will depend on the averaging procedure.

The couplings change as we increase or decrease the lattice spacing by a multiplicative factor. But there are certain values of the set of couplings which do not change as we perform another step in the averaging procedure. These are known as fixed points. Fixed points come in three categories. Relevant fixed points are those where the couplings tend towards that value as you perform more of these transformations of the fields. Irrelevant fixed points are such that you move away from them as you perform more of these transformations. Then you have mixed fixed points, where some trajectories lead into the fixed point and others lead away from it. To have a lattice gauge theory with a defined continuum limit, you need there to be a relevant ultra-violet (high energy/short distance) fixed point in the parameter space. That it is relevant means that it will satisfy universality: no matter how precisely you formulate the lattice theory or the averaging procedure, you end up in the same place. The non-Abelian parts of the standard model do have such a fixed point (at zero gauge coupling), so we can take the continuum limit by increasing the constant in front of the Maxwell term in the Hamiltonian. QED (electromagnetism) does not have this fixed point, so a pure lattice QED is not defined. However, one can simulate QED and QCD together.
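
As a toy illustration of a flow towards the fixed point at zero coupling (my own sketch; the one-loop beta function coefficient is standard, but the starting value and step size are invented), the following integrates the one-loop running of a non-Abelian gauge coupling and shows it shrinking as the energy scale increases.

```python
import numpy as np

nf = 3                               # number of light quark flavours (illustrative)
b0 = 11.0 - 2.0 * nf / 3.0           # one-loop beta function coefficient for SU(3)

g = 2.0                              # coupling at some reference scale (invented)
dlnmu = 0.01                         # step in the log of the energy scale
for _ in range(2000):
    g += -b0 * g**3 / (16.0 * np.pi**2) * dlnmu   # dg/dln(mu) at one loop

print(g)   # smaller than the starting value: the coupling flows towards g = 0
           # at short distances, the fixed point that defines the continuum limit
```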

The other most common way of calculating amplitudes from the standard model is known as perturbation theory. This is commonly used in QED and the electroweak model, where it works very well, and it is also useful in high energy QCD. Perturbation theory provides a controlled approximation to the amplitude, which can be systematically improved. The theory without any interactions between the gauge Bosons and the Fermions can be solved exactly and analytically. If the parameter describing the strength of the interaction is small enough, then we can expand the equivalent of the time evolution operator in the momentum basis as a series expansion in this constant. We can then use the known commutation relations between the creation and annihilation operators to evaluate each term in this series expansion. There are an infinite number of these terms, so we can't analyse them all, which is why perturbation theory is an approximation. But we know which terms give the largest contribution, so we can calculate them first to get an approximate answer. Then we add in the terms with the next largest contribution, to get an improved approximation, and so on. We can calculate rough bounds for the remaining contributions which we haven't yet calculated. We know roughly what order of magnitude the remaining contributions are, and how many there are, so the question for each term is merely over the precise value and whether we need to add or subtract it. So this gives us a result within a certain error bound, and we can reduce the error bound by calculating additional terms. We can never get it to zero, but we don't need to, because we are comparing the result against an experimental value which also has its own imprecision. We just need the imprecision on the theoretical result to be similar in magnitude to, or smaller than, the experimental imprecision, and we would have something we can usefully compare against experiment.
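
The logic of a truncated expansion with a controlled error bound can be shown with a toy series that has nothing to do with field theory (my own illustration). Each partial sum comes with a bound on the neglected remainder, and adding more terms shrinks that bound, which is the sense in which perturbative results are "within a certain error bound".

```python
import math

x = 0.5
exact = math.exp(-x)                 # the quantity we pretend we cannot compute directly

partial = 0.0
for n in range(6):
    partial += (-x)**n / math.factorial(n)
    bound = x**(n + 1) / math.factorial(n + 1)   # for an alternating series the error
                                                 # is bounded by the first omitted term
    print(f"order {n}: estimate = {partial:.6f}, "
          f"error bound = {bound:.1e}, true error = {abs(partial - exact):.1e}")
```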

The problem with perturbation theory applied to the bare theory is that it produces infinite amplitudes. Basically, there is an integration over all momentum from zero to infinity of an integrand which does not decrease fast enough with momentum, so there are still contributions at infinite momentum. This can be avoided through a systematic process. The first step is to remove the infinities by regulating the theory: changing some parameter so that you no longer have the infinities, and then at the end of the calculation taking the limit in which the parameter approaches its actual value. A momentum cutoff (stopping the integration over momentum at some large finite value) is the simplest way of visualising this process. But this violates gauge symmetry, so it is not used in practice. The renormalisation process adds counter terms to the action to cancel out the infinities. Which counter terms can appear is restricted by the symmetries, so breaking an important symmetry such as gauge symmetry with the regulator complicates things immensely, and might make the whole process invalid. The lattice is another means of regulating the theory, which breaks Lorentz symmetry. This also complicates perturbation theory in lattice calculations, but not so much as the violation of gauge symmetry would. The preferred means of regularisation are either dimensional regularisation or Pauli-Villars regularisation. The infinities only appear in four or more dimensional space time, so in dimensional regularisation you switch to 4-ε dimensions for small ε. It is difficult to visualise a fractional dimension, but in an abstract mathematical space we can do it, as long as we do not confuse the representation, and especially our intermediate steps in the calculation, with the real world. In Pauli-Villars regularisation we introduce a fake particle with a large mass whose contributions cancel those of the genuine particle at momenta much larger than the fake particle mass. We recover the correct theory by sending the mass of the fake particle to infinity.
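
The momentum cutoff mentioned above as the simplest regulator can be illustrated with a toy logarithmically divergent integral (my own sketch; the integrand is a generic loop-like shape, not taken from any actual standard model diagram). The value grows with the cutoff, but only logarithmically, and the difference between two cutoffs is finite, which is the kind of behaviour a counterterm is designed to absorb.

```python
import numpy as np

m = 1.0

def loop_integral(cutoff, n=200000):
    """Trapezoidal estimate of a toy log-divergent integral, regulated by a momentum cutoff."""
    p = np.linspace(0.0, cutoff, n)
    f = p**3 / (p**2 + m**2)**2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))

for cutoff in (10.0, 100.0, 1000.0):
    print(f"cutoff = {cutoff:7.1f}   integral = {loop_integral(cutoff):.4f}")
# the result grows like log(cutoff): each factor of ten in the cutoff adds about
# log(10) ≈ 2.3, a divergence that can be subtracted with a single counterterm
```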

Having regulated the theory, we perform the calculation, and instead of infinities we have terms dependent on the parameter which controls the regularisation, terms which diverge when we take the limit that recovers the genuine theory. The next step is to adjust the parameters of the theory in order to subtract these divergent terms. For example, we multiply the constant in front of the wavefunction, the particle mass, and the interaction term by a function which depends on momentum, on this parameter, and also on a mass scale known as the renormalisation scale. This is known as introducing counterterms into the theory. There are numerous different ways we can do this, but the symmetries of the regulated theory cut down the options, and we use some experimental data to fix what flexibility remains. The theory can then be tested through comparison against other experimental results.

Whether we can renormalise depends on what counterterms get introduced into the theory. Not every bare theory consistent with the symmetries can be renormalised. In many possible Lagrangians, the number of counterterms increases to infinity as we move to higher order terms in perturbation theory. This would make the procedure impossible, and we would still be stuck with the mathematical nonsense of infinite amplitudes and thus infinite probabilities. There are a small number of Lagrangians where the number of counterterms remains stable as we advance in perturbation theory. These are the renormalisable Lagrangians. The simplest renormalisable Lagrangian is QED. One can then add the SU(2) and SU(3) gauge symmetries of the weak interaction and the strong interaction. This gives us the standard model of particle physics. There are a few extensions to the standard model, which generally add additional particles (such as supersymmetric partners), which are also renormalisable. But we are constrained to looking at theories such as these. This is why we can't have terms in the Lagrangian proportional to two, three or more additional creation operators, and why I felt free to ignore the +… in my first construction of the time evolution operator right at the start of this discussion.

For a long time, renormalisation left a bad taste in physicists' mouths, and even more so for mathematicians, who tend to advocate for a bit more rigour than physicists. But it works (i.e. leads to results consistent with experiment), so physicists reluctantly accepted it. In the time since then, theoretical work has improved our understanding of the renormalisation process. We recognise that the correlation functions at different renormalisation scales are connected together via a differential equation that maps a trajectory through parameter space. Wilson's work on block-spin transformations, which I alluded to above, originally in the context of condensed matter physics but also applicable to particle physics, also helps us gain understanding. These mean that the different renormalisation schemes are basically just different ways of expressing the same theory, related by a transformation of various variables. As a very crude analogy, if you measure lengths in inches or centimetres you describe the same physics, but your numbers will be different. But you can transform from one representation of the theory to another. The changes needed to the action when we renormalise are the same sort of changes you get when you transform the basis. We can think of renormalisation as a change in basis of the creation and annihilation operators.

In particular, we can think in abstract terms of bare electrons and bare photons, but in practice these are never observed. An electron will always interact with its own electromagnetic field. So what we observe is a mixture of electrons and photons. This observed particle will still have its creation and annihilation operators, but those operators need to be constructed from the creation and annihilation operators of the bare particles. Mathematically this would be via a change in basis. Obviously the transformed electron might also interact with its own transformed electromagnetic field, so we might have to perform the procedure again, but eventually we get a solution where the operators don't change when we consider the self-interaction. This is a fixed point of the renormalisation procedure. When we renormalise, we focus on the mathematical form of the time evolution operator, and add counterterms to this. But we still have to worry about the creation and annihilation operators that represent the initial and final states. These will be in that special fixed point basis. So, to be consistent, we need to also transform the operators in the time evolution operator into that basis. And this ultimately is the reason why we need to renormalise, and why we get infinite results (a sign that there is a mathematical inconsistency somewhere) when we don't.

So today physicists don't worry about renormalisation. We have an argument why it is not only present but is in fact necessary. The need to renormalise does, however, further restrict the possible terms that could be in the time evolution operator. Add more exciting interactions with (say) two Fermion creation operators in the same term and you can no longer renormalise. You lose mathematical consistency. Combined with the symmetries, this very much forces us to accept the standard model of particle physics. There might be additional particles we haven't observed yet, the interaction parameters and masses are not constrained by symmetry or renormalisation (which in turn leads to fine-tuning arguments), and there is a little bit of flexibility in which terms we do or don't include. But very little flexibility. Broadly speaking, our choices are between the standard model, different symmetries, or incoherence. And I have suggested that the symmetries (as well as the indeterminism which characterises quantum field theory) follow directly from the assumption that physics is a description of how a classical theist God sustains the universe.

In short, if you assume the existence of God together with a small number of other premises (including logical consistency), then you finish with the physics we have in this universe (at least, leaving gravity aside). God's free will means that the system is indeterminate. God's ability to create and destroy means that we need to represent changes in terms of creation and annihilation operators. Since things evolve in time, we need a time evolution operator. God's attributes and the need for mathematical consistency (renormalisability) constrain the form of the time evolution operator. We then need a means to parametrise our uncertainty over which state God will move the universe to. This can be done either by probabilities or by amplitudes/spinors. But given the previous constraints, only a spinor representation of reality allows us to have a gauge symmetry. And we need the freedom of a gauge transformation to construct an interacting theory where there is more than just a boring static universe (which we suppose that God didn't want).

Assume atheism, or some other form of divinity, and there is no reason to expect fine tuning, indeterminacy, or the underlying symmetries, and you have a far greater range of possible theories. This can, perhaps, be resolved by a multiverse (where we naturally find ourselves in one of the few universes capable of supporting life), but I don't think there are any other options open to the atheist if they want to try to derive the standard model from first principles.

Wavefunction collapse and decoherence

As in conventional consistent histories, wavefunction collapse in this model refers to our knowledge of the physical state, and does not imply any non-local change in the physical state (as is the case in most psi-ontic interpretations of quantum physics). This removes the obvious conceptual problems with regards to wavefunction collapse.

There are some "standard" arguments which claim to disprove epistemic theories of the wavefunction, such as the PBR no-go theorem. However, as I discussed in my post on QBism, these smuggle in assumptions drawn from psi-ontic theories which contradict the epistemic framework. And it should be obvious that such arguments must necessarily fail, as they are derived from quantum physics, and the psi-epistemic philosophy gives rise to the same set of equations. The psi-epistemic philosophy thus predicts the very phenomena which the no-go theorems incorrectly suppose rule it out.

So is there no change in state during a measurement? Almost invariably there will be, as the quantum system interacts with the measurement device. As the particle approaches the measurement device, we don't know which state it is in or in which basis (albeit it will be in a location rather than momentum basis, and in a definite location, but the basis for the spin state and so on is usually unknown). But then it becomes entangled with the macroscopic measuring system. As the mathematics behind decoherence shows, this forces it into a particular basis. If it approaches the measurement device in a state in a different basis (i.e. a superposition in the measurement device basis), there will have to be a change of state into one of the eigenstates of this new basis. This change, like all other changes of state, is ultimately determined by God's free choice and thus unpredictable to us, except stochastically. But there is nothing going on here except the same process I have been describing all along: the destruction of one quantum state by God and the creation of another.

What of the two slit experiment? You have a particle released from the source. God then moves it by whatever path he desires, which will pass through one of the slits, and hit the detector screen at some location. We cannot predict what that location will be (except that it will not be at a point where there is zero probability). We cannot say anything about the paths of single particles between measurements (barring ruling out anything inconsistent with those measurements). However, if we repeat the experiment enough times, then we will observe a frequency distribution. We can make predictions for that frequency distribution based on the mathematics of quantum physics, derived from the assumption that God is free to move particles as He wills but (in the absence of miracles) various symmetry constraints will make some outcomes more probable than others. The mistake of psi-ontic interpretations is to take something which relates to the ensemble of particles (the probability for an outcome, or behind that the density matrix or wavefunction, depending on which formulation of the theory we are using), and then assume that it describes an individual particle. There is also a misleading analogy with water or sound waves in classical physics. There the many particles exist at the same time, so it is easy to visualise the interference pattern as due to the interaction between one wavefront and another. Quantum physics, however, operates by different rules. There is no interaction that leads to the interference effects. They arise simply as a consequence of the symmetry constraints that allow us to predict the frequency distribution of outcomes of particles freely moved by God (in the absence of miracles). The peaks and troughs of an interference pattern are a direct consequence of the mathematics developed above.

So what if we observe one of the slits, to identify which slit the particle went through? Why does that destroy the interference pattern? In part it is because the observation of the particle itself changes its underlying quantum state, randomising the phase and removing, in the calculation of the amplitude (which, as stated, is only a predictor for a frequency distribution), any correlation with a potential path going through the other slit. We have to include the information that there is such a measurement in our calculation of the amplitude, and it affects the results. God knows that the observation is there, and it seems foolish to assume that He would move the particles in the same way both in the presence of and in the absence of the observation of one of the slits.
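
The following sketch (my own toy model, with arbitrary units and an invented slit geometry) makes the two points above concrete: the frequency distribution on the screen comes from summing the two path amplitudes and squaring, individual hits sampled from that distribution are unpredictable, and randomising the relative phase (which is what observing a slit does to the calculation) removes the cross term and with it the interference pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 201)       # position on the screen (arbitrary units)
k, d, L = 6.0, 2.0, 5.0               # wave number, slit separation, screen distance (invented)

phi = k * d * x / L                   # relative phase of the two paths (far-field approximation)
coherent = np.abs(1.0 + np.exp(1j * phi))**2      # |sum of amplitudes|^2 = 2 + 2 cos(phi)
p_coherent = coherent / coherent.sum()            # normalised frequency distribution: fringes

incoherent = np.full_like(coherent, 2.0)          # random relative phase: the cross term averages away
p_incoherent = incoherent / incoherent.sum()      # flat distribution: no fringes

print("coherent:   max/min bin probability =", p_coherent.max() / p_coherent.min())
print("incoherent: max/min bin probability =", p_incoherent.max() / p_incoherent.min())

hits = rng.choice(x, size=10000, p=p_coherent)    # individual hits are unpredictable...
print("fraction of hits in the central fringe:", np.mean(np.abs(hits) < 0.5))
```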

Entanglement

Let's consider the classic case of a spin zero particle decaying into two spin half Fermions, which then travel in opposite directions. We then measure the spin along a particular axis of the two Fermions, and find that they are always anti-correlated. If one of the spins is up, then the other one is down. And vice-versa.

So what is happening behind the scenes in this model? The spin-zero particle decays, and the two Fermions are released. These will be in a definite state in a particular basis. The same basis for each particle, but one will be spin up and the other spin down. We don't know (and will never know) what that basis is, and we don't know which particle is spin up and which is spin down. This basis might be the same as used on the detectors, but the chances are it won't be, and let us suppose that it isn't. So the particles reach the detectors, and they are forced into a spin up or spin down state in the basis of the detector. Which event occurs is determined by the free choice of God. We cannot predict what it will be. But we can predict frequency distributions after the experiment is repeated a large number of times. And we know from this calculation that (in the absence of a miracle) the probability of the same spin being recorded on both detectors is zero. Consequently this won't happen, even for an individual pair of particles. The result will always be anti-correlated.

This correlation is obviously non-local. Does that contravene special relativity? No. Because special relativity implies a symmetry which constrains the Hamiltonian which describes the creation and annihilation of particles. It tells us that there is no creation/annihilation at a space-separated distance. You can't have a particle decaying here and then the results of that decay appearing over there. And that's fine in this case, because such things aren't observed. When the spin zero particle decays, the two Fermions are created in the same location. When one of the Fermions reaches its detector, there is an annihilation of the initial state of the Fermion (whatever that happened to be), and a simultaneous creation of its new state in the basis of the detector. (Or perhaps a sequence of events, in each of which the annihilation and creation happen simultaneously, if the change takes place gradually over a period of time.) The non-local correlation concerns the decisions of God: whether He decides to move the particles into a spin up or spin down state in the given basis. God is obviously outside space and time, and there is no reason why He should be constrained by locality when He chooses how to flip the spins of distant particles. He is free to do so in such a way that they are always anti-correlated. And the prescription used above to calculate the probabilities suggests that He would always do so.

So what if we shift the alignment of the detectors and start working through the derivation of Bell's theorem? Does this rule out that the particles are in a fixed state after the initial decay? No, because, as discussed in the article on consistent histories, the derivation of Bell's inequalities violates the single framework rule. It is not true that the particles emerge from the decay with a fixed spin state in the basis of the detectors. We have to use the same basis to describe each of the two particles at each given moment of time. Bell's theorem assumes that it makes sense to say that particle A is spin up in this basis and simultaneously particle B is spin down in that basis, which is not allowed. If particle A is spin up in this basis, then particle B will be spin down in this basis, and its spin in that basis is undefined, and it is nonsensical to ask what it is (given the formalism of quantum physics). They will arrive at the detector in a superposition when expressed in each detector's basis. As such, God could move them into either a spin up or spin down state. We cannot predict what will happen on any individual run of the particles. Because the detectors are not perfectly aligned, there is no probability-zero combination of events, so we cannot even say that something is ruled out. There are four possible outcomes, and any of them might happen on any given individual run of the experiment.

So what happens on an individual run of the experiment? God causes the spin zero particle to decay at a time we can't predict. The two particles from the decay are placed into two spin states, one spin up and one spin down in a given basis, but we don't know what that basis is. He moves the two resultant spin half particles towards the detectors. Once they reach the detectors, He changes their spins and basis to give one of the four possible outcomes consistent with the basis of the detectors. And we can't make any prediction about which of those outcomes will occur.

We can, however, predict frequency distributions when the experiment is run a large number of times, using the framework developed above (and based on its assumptions). We do so by calculating amplitudes for each outcome, converting them to probabilities (or perhaps working in the density matrix formulation and extracting the probabilities from the density matrix), and then using these probabilities to make predictions for the frequency distribution. But it is meaningless to apply these probabilities to individual events. If we treat the probability distribution as a predictor for a frequency distribution, then, unless the probability is 0 or 1, an individual event can take any outcome. If we treat the probability distribution as a guide for making bets on outcomes, then for each individual event we might either win or lose the bet. It is only after a large number of such bets that we would expect to break even.
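
To make this concrete, here is a small sketch of the calculation described in the last few paragraphs (my own code; the formulas are the standard singlet-state probabilities). For each angle between the two detector axes it lists the four outcome probabilities and a sampled "frequency distribution" from many simulated runs: at zero angle the same-spin outcomes have probability zero and never occur, while at other angles all four outcomes are possible and no individual run can be predicted.

```python
import numpy as np

rng = np.random.default_rng(1)

def singlet_probs(theta):
    """Probabilities for outcomes (+,+), (+,-), (-,+), (-,-); theta is the angle between detector axes."""
    same = 0.5 * np.sin(theta / 2.0)**2
    diff = 0.5 * np.cos(theta / 2.0)**2
    return np.array([same, diff, diff, same])

for theta in (0.0, np.pi / 3.0, np.pi / 2.0):
    p = singlet_probs(theta)
    counts = rng.multinomial(10000, p)      # 10,000 simulated runs of the experiment
    print(f"theta = {theta:.2f}   probabilities = {np.round(p, 3)}   counts = {counts}")
```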

Causation

There are many different things people mean when they think of causation. Many are influenced by mechanical physics, where a cause is a necessary connection between two events. The cause refers to a certain set of circumstances, the effect to another set of circumstances. The laws of physics are then such that if the circumstances associated with the cause occur at one moment of time, the circumstances associated with the effect will inevitably follow. This type of causation is not present in quantum physics (in most interpretations), because of quantum physics' indeterminacy. Exceptions are the pilot wave interpretations, which are deterministic, and the many worlds interpretations, where the effects will occur because all possible outcomes occur in one world or another.

The other type of causation frequently referenced by contemporary physics is that particles or information cannot travel outside the positive light cone. With the forces (with the possible exception of gravity) mediated by particle exchange, that also means that information cannot be shared outside the light cone. There are standard calculations to show that relativistic quantum field theory satisfies this condition. The propagator describing a particle travelling outside the light cone (i.e. between space-separated points) is necessarily zero. (Indeed, Peskin and Schroeder, in what was the standard textbook back when I was learning all this, use this requirement to demonstrate that Fermion creation operators must anti-commute, as the condition is not satisfied if they commute.) So particle transmission satisfies causality in this sense.

But this leaves the problem of the space-separated correlations seen during entanglement. Bell's inequalities show that the properties of all possible measurements cannot be determined at the moment the entangled particles are created, unless there is some spooky non-local interaction, or one of a number of other assumptions behind the theorem is dropped. The Pilot wave interpretation allows for non-locality in its equations of motion. The many worlds interpretation denies that there is a single measurement outcome, and avoids the implications of Bell's theorem in that way. For most other interpretations of quantum physics this is a major problem. The standard consistent histories approach notes that the derivation of Bell's theorem violates the single framework assumption, so it is claimed that it also avoids the conclusion. As I remarked in my post on consistent histories, this caveat is indubitably correct, but I am not sure that it resolves the underlying philosophical problem. There are still non-local correlations.

There are two usual ways in which I think about causality. The first is in terms of Aristotle's four causes: material, formal, efficient and final. Material causality isn't really relevant to this discussion. Formal causality certainly is relevant to quantum physics, but since I largely agree with Professor Koons' analysis on this point, I will just refer back to my previous post. I need to discuss efficient and final causality here.

The other distinction I make is between event and substance causes. An event cause asks What is the cause of an event? An event is any change of state of a physical particle, and the question is asking What is sufficient to explain why it changed into this state rather than another? In classical physics, this would arise from the various configuration of particles and the forces between them. In quantum physics, with its (apparent in some interpretations) indeterminacy, this is a harder question to answer. Because in mechanistic physics all questions of causality ultimately reduce to event causality, it is not unreasonable to just think of causality as meaning event causes. But when this attitude is carried over to quantum physics the problem of indeterminacy creates a stumbling block.

Substance causality, on the other hand, bypasses the events. It links one set of particle states with another set of particle states. So the question is What particle state(s) did this particle state emerge from? Or, given that there is creation and annihilation, What particles did this particle emerge from? The answer to these questions is the Aristotelian efficient cause. Alternatively, we can ask What particle states might emerge from this configuration of particle states? The answer to this question is the Aristotelian final cause.

The efficient cause is asking about past history. Quantum indeterminacy concerns our ability to make predictions. But when looking at past history we are not making predictions. There is a simple and single fact of the matter. Thus the notion of efficient causality still stands regardless of whether the physics is deterministic or indeterminate. The only question is whether particles do in fact emerge from other particles, or whether they can pop into existence out of nothing. Leaving aside questions surrounding the initial singularity or big bang, the answer of quantum field theory is that they do not pop into existence out of nothing. This can be seen in various ways.

Here I usually turn to the Feynman rules used to calculate amplitudes in perturbation theory, in particular the conservation of energy and momentum, and the absence of disconnected diagrams. If energy and momentum are conserved, and energy is always positive, then to have a final state with positive energy you must have an initial state with positive energy. Since energy is a label for the quantum state of particles, you can't have energy without an initial particle to carry that energy. A disconnected part of a diagram contains propagators and vertices which are not connected to one of the initial states. These are what we would expect to see if particles did pop out of nothing; but they don't contribute to any amplitudes, so they may as well not be there. It could be argued that the Feynman rules are only useful for unrenormalised states in those regions where the perturbation series converges. But they are derived from and reflect the underlying Hamiltonian, and the features of the calculation which give rise to these conditions are valid whether we use perturbation theory or not, and whether we perform the calculation before or after renormalisation. Ultimately (because of the way that Fourier transforms work, and integrals over exponentials lead to delta functions which ensure the conservation of momentum) the conservation of momentum arises because of the locality of the Hamiltonian. This stands in the renormalised theory, and before we get into perturbation theory. One can also consider the expectation value of the object <0|a_x a_y|0>, representing the amplitude for a transition from a vacuum state to two particles with states x and y. This, of course, gives zero. To get something non-zero, you need to put some creation operators into the initial state -- but then you are not starting from nothing.
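
That last vacuum expectation value is easy to check in a toy model. The sketch below (my own illustration, using a single bosonic mode truncated to a finite Fock space rather than a full field theory) shows that the amplitude for two particles to appear from the vacuum vanishes, while the same combination of operators acting on a genuine two-particle initial state does not.

```python
import numpy as np

N = 6                                           # Fock space truncated at N particles
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T                               # creation operator
vac = np.zeros(N); vac[0] = 1.0                 # the vacuum state |0>

print(vac @ (a @ a) @ vac)                      # 0.0 : <0|a a|0>, nothing comes from nothing
print(vac @ (a @ a @ adag @ adag) @ vac)        # 2.0 : <0|a a a†a†|0>, non-zero once there is an initial state
```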

Various experiments are sometimes claimed to demonstrate particle creation from nothing, such as the dynamic Casimir effect. But they do not show this, because the experimental setups do not start with nothing. For example, in the Casimir effect, you have two metal plates. These generate an electric field and exchange photons; some of those photons decay into an electron-positron pair, which in turn gets absorbed into the plates. To experimentally measure particle creation from nothing you would need to start with nothing. A detector or other measuring device is most definitely something. So one would need to make a measurement without any means to make that measurement, which is obviously impossible.

The notion of final causality is also valid in either a deterministic or indeterminate physics. Once again, indeterminism concerns our ability to make predictions. There isn't necessarily a single final cause; it provides a list of options of what could happen in the future. It does not say which of those options will come to pass. There is nothing inconsistent between this and indeterminacy. Indeed, quantum indeterminacy is not anything goes. There are various conservation rules and selection rules which do restrict the possible decay channels or interactions of a given particle. This is wholly consistent with the notion of final causality.

Some physicists talk about particles being pulled out of the vacuum, when discussing particle creation and annihilation. I really dislike this language. It is not what the mathematics says, and the concept makes no philosophical sense. It seems to rest on a presumption, perhaps held unconsciously, taken from mechanistic physics, that the fundamental particles are indestructible. I see no reason why we need to maintain this assumption, and the most natural interpretation is that we should abandon it. When a photon decays into an electron-positron pair, the photon does not disappear into the vacuum, and nor does the electron appear from somewhere it was pre-existing in the vacuum. The photon was destroyed. In the same instant the electron and positron were created. Not from nothing; their efficient cause was the photon. Then this electron and positron annihilate each other, and a photon emerges. The electron and positron are destroyed, and at the same instant the photon is created. Its efficient causes are the electron and positron. This is not difficult to visualise; it just requires us to lay aside the assumption that fundamental particles are indestructible.

So substance causality, including efficient and final causality, is not only consistent with quantum physics, but is demanded by it.

What of event causality? Clearly an event in quantum physics is partially explained by the configuration of particle states that existed just before that event. But this is only a partial explanation, not a full or sufficient explanation. So what determines which event actually occurs? Quantum physics (at least outside deterministic interpretations such as the pilot wave or many worlds) gives no physical explanation. That leaves us with two options. The first option is that there is no explanation, i.e. the universe is fundamentally irrational. The second option is that there is a cause, but it is not something represented in physics, i.e. a non-physical or supernatural cause. This is basically shorthand for saying God did it, either directly or through other supernatural intermediaries.

Clearly, this solution is not going to be acceptable to everyone. One has to ask whether the God who did it is the same as the God of classical theism. That's an important question, but it would be too much for me to discuss here. The second objection is whether I am just making the mistake of saying I don't know, therefore God. I would disagree. My approach can be formulated in two ways. Firstly, as a deductive argument. One assumes the existence of a theistic God, and considers what that implies for physics. Obviously this does not lead to a single theory, but a range of theories -- although fine tuning arguments can reduce the imprecision. We then compare the expectation against what is known. This is no different from formulating a hypothesis and subsequently testing it, as in the standard scientific method. Alternatively, one can think in terms of an argument from induction. If we assume that a) the universe is rational (i.e. things don't happen without an explanation; if we need to express things probabilistically, that is because of a cause beyond those which we have included in the model); and b) that the universe is indeterminate (so the Pilot Wave, Everett and any other deterministic interpretations are false); and c) physical beings are localised (i.e. we cannot explain the correlations observed in entanglement just from laws arising from the quantum particles themselves), then we find that something is needed to explain why quantum particles behave as expected from quantum physics. We consider what the attributes of that something are, and discover that they match many of the traditional attributes of a theistic God.

Does this serve as an argument for God's existence? Not by itself. The deductive argument only shows that theism is consistent with physics as we best understand it. There might, of course, be other worldviews which explain the physics just as well. The inductive argument is based on various assumptions, which need to be fleshed out in full and then justified.

Conclusions

Most attempts to interpret quantum physics assume a methodological atheism. They assume that the laws of physics operate independently of God. It is easy to see why people make this assumption. Firstly, many of the philosophers and physicists responsible for these interpretations are atheist. Secondly, this is the basic assumption which has dominated the philosophy of science for several centuries. But I think it ought to be questioned. After all, if theism is true (and, in my view, there are good reasons for thinking theism true), then God actively upholds and sustains the universe. For example, Psalm 104 states that God makes the grass grow. Modern biology states that various processes which ultimately reduce to physical law make the grass grow. If we want to reconcile these, then we need to suppose that physical law is a description of God's actions, either directly or through various intermediaries. Either way, drill down far enough into the philosophy of physics, and you will eventually reach a point where you need to invoke God to correctly explain what you are observing. Obviously this would only be part of the explanation -- God would be acting on material beings, so the configuration of those beings would also be part of the explanation. We avoid occasionalism because of the need to include material causes as well as the divine cause.

Obviously, if theism is false then we should not expect God to be invoked in the correct philosophy of physics. And it may be that even if theism is true quantum physics is not sufficiently fundamental to find the direct imprint of God.

But it is still worthwhile to consider what physics we might expect if theism is true, and God actively sustains the universe. And equally, what physics we might expect if some form of atheism is true and the universe is a closed system (alongside various other assumptions). Maybe we wouldn't get very far with such an analysis, but I hope this post has outlined that it might be possible. If so, this might serve as a way to put various different foundational philosophies to a clear test. It might also provide insight as to how to construct a more fundamental physical theory.

The construction I have used is a psi-epistemic approach similar to consistent histories. I believe that it removes the main weakness of consistent histories concerning explaining the correlations at a distance of events involving entangled particles. I prefer the psi-epistemic approach because it fits far more comfortably with the logical interpretation of probability, which best matches what physicists actually do when they calculate amplitudes and compare them to experimental measurements. As de Finetti pointed out, probabilities do not exist. Nor do they exist as attributes of individual particles. I think both of those statements are obvious once one stops to think about it: we never measure or observe a probability when observing an individual particle. And we never directly observe anything except individual particles or ensembles of particles. Might there be something that influences physics which we don't directly observe? Obviously I would say that there is: God. An underlying Pilot wave could play the same role, or an alternative branch of a multiverse. But if we can't observe it, then we can't measure its properties, and so it becomes an unknown cause. We can use symmetry considerations to model the possible values it could take, and feed that into a calculation of the probability of various outcomes. But then that probability is epistemic, because it is merely a mathematical parametrisation of our uncertainty due to the unknown cause. If we want to avoid subjective probabilities -- which I think we want to do when constructing a theory of objective reality -- then we are left with the logical theory of probability. Here a probability is a mapping between a set of outcomes and a set of numbers, such that the set of numbers satisfies the axioms of probability, and is calculated based on various premises, including some known (or assumed) data, such as the observed initial state and the physical laws, and some unknown data which is modelled in accordance with some symmetry principle. Of course, we use the axioms of probability because we want eventually to compare with an experimentally measured frequency distribution. But there is no reason why we must parametrise our uncertainty directly in this way. We just need to be able to have a bijective map from a set of outcomes to a set of numbers such that those numbers can be mapped (perhaps by a surjective map) onto another set of numbers which are described by the probability axioms and which might thus be a suitable predictor for a frequency distribution. Why use this two step process? Because the additional degrees of freedom might be required to fully represent the internal properties of the quantum particles.

The approach I favour takes the path integral approach literally. It assumes that each event at each moment in time is selected by God out of a small number of options determined by the particle's final causes. We cannot predict which of these options God will choose, because we don't know the mind of God. But we can use symmetry considerations inspired by the divine attributes to model how likely each of these options is. Add in my interpretation of renormalisation, inspired by an extension of Wilson's block-spin approach, and you end up with a quantum field theory. There are numerous quantum field theories consistent with this framework, but they include the standard model of particle physics. Add in the anthropic principle, and you get an even closer match. This analysis obviously excludes quantum gravity, but the underlying symmetry that drives general relativity is also expected in this model, so I am hopeful in that regard (even if I am yet to work out the full details).
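As a very rough numerical illustration of what taking the path integral literally can look like (again a toy of my own, with made-up amplitude values, and in no way the actual construction sketched above): at each time step a particle on a one-dimensional lattice has a small number of options -- stay put, hop left, or hop right -- and the total amplitude for ending up anywhere is the sum over all the hop-by-hop histories that lead there. Squaring the final amplitudes then gives the epistemic probabilities for the outcomes.

import numpy as np

# Toy discretised sum-over-histories on a 1D lattice (illustrative values only).
n_sites, n_steps = 21, 10
stay = 1.0 + 0.0j   # assumed amplitude for remaining on the same site
hop = 0.2j          # assumed amplitude for hopping one site (same left and right, by symmetry)

psi = np.zeros(n_sites, dtype=complex)
psi[n_sites // 2] = 1.0          # start localised at the central site

for _ in range(n_steps):
    new = stay * psi
    new[1:] += hop * psi[:-1]    # histories that hop one site to the right
    new[:-1] += hop * psi[1:]    # histories that hop one site to the left
    psi = new / np.linalg.norm(new)

probabilities = np.abs(psi) ** 2  # epistemic probabilities over final positions
print(np.round(probabilities, 3))

Each entry of psi is, in effect, a sum over every lattice path reaching that site, weighted by the product of the per-step amplitudes; the symmetry between left and right hops stands in here for the symmetry considerations mentioned above.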

Is this approach reasonable? I am not aware of any issues with it. It is consistent with the standard model of particle physics (and extensions to incorporate neutrino masses), and so is consistent with experiment. Possibly there are some additional assumptions needed for the model which I have not acknowledged, or an error in my reasoning, so I would welcome comments. But as far as I can see this is a viable interpretation.



The Philosophy of Quantum Physics 9: Relational quantum mechanics.


Reader Comments:

1. Michael Brazier
Posted at 17:57:34 Monday December 23 2024



Is this the end of the series, since it's the interpretation you support yourself?

There's a strong resemblance between this interpretation and the transactional interpretation of John Cramer. The main difference is that Cramer's "advanced waves" are not interpreted as signals coming from the future directly; they come from the timeless providence of God guiding the quantum system towards the result He intends.

I think you're clear of the "God of the gaps" charge. A "God of the gaps" argument examines all the possible causes of something that are known to science, pronounces them insufficient to explain the thing, and concludes immediately that only a miracle can explain it, therefore God exists. That conclusion is invalid; all that follows is that there is a cause unknown to science. Your argument (in its inductive form) reaches that valid conclusion, then analyzes what sort of being that unknown cause would have to be to operate as it does - which turns out to be an omnipresent intellect, given the assumption that no causal factor works backward in time.

2. Nigel Cundy
Posted at 16:31:56 Monday December 30 2024

End of Series

This was originally planned to be my last post, but a few people highlighted some more interpretations in comments to previous posts, so I am thinking of going back to address them next.

3. Jek
Posted at 13:00:33 Wednesday January 22 2025



Hello Mr. Nigel, a book called Existential Inertia and Classical Theistic Proofs, by Joseph Schmid and Daniel Linford, recently came out. Do you think that physics favors existential inertia or divine conservation? Or is neither favored by physics?

4. Nigel Cundy
Posted at 18:47:16 Wednesday January 22 2025

Existential Inertia

Dear Jek,

I do have that book in the pile next to me right now. I wrote a post on this topic a little while ago:

http://www.quantum-thomist.co.uk/my-cgi/blog.cgi?first=-1&last=-2&name=ExistentialInertia

It doesn't respond to Schmid's book, since I published it before I read that book.

In short, I don't think that contemporary physics supports existential inertia. On the existential inertia thesis, things have the tendency, due to their own internal nature, to continue to exist unless acted on by an external force. But there is spontaneous decay in physics, i.e. some things don't continue to exist despite there being no external force. That strikes me as a major tension with the existential inertia thesis. I don't recall (although my memory might be misleading me) whether Schmid and Linford responded to this point.

5. Jek
Posted at 21:14:02 Wednesday January 22 2025



Thank you for the answer, but I have a question about the Second Way, concerning the order of efficient causes that Saint Thomas describes. There is a criticism from Jordan Howard Sobel about our not perceiving an essentially ordered chain of sustaining causes operating now. As far as I remember it is around pages 170 to 190 or 200. Could you clarify this issue?

6. Jek
Posted at 21:22:48 Wednesday January 22 2025



I managed to summarize the objection; if I can, I will put it here. The text discusses Thomas Aquinas's concept of efficient causes, focusing on his Second Way argument, which suggests that everything in the world has an efficient cause. Aquinas presents the idea that

7. Jek
Posted at 21:24:06 Wednesday January 22 2025



The text discusses Thomas Aquinas's concept of efficient causes, focusing on his Second Way argument, which suggests that everything in the world has an efficient cause. Aquinas presents the idea that sensible things (things we perceive with our senses) are connected by an order of efficient causes, such as the generation of offspring or the creation of objects by sculptors. However, the text raises doubts about sustaining causes, which are those necessary for a thing's continued existence at a given time. While we can observe generating causes (e.g., chickens laying eggs), it is unclear whether any sensible things are sustained by external causes in the way Aquinas suggests. The author questions whether the existence of a thing at a particular time depends on external causes, like oxygen or heat, and argues that these causes are more about perpetuation than direct sustenance. Finally, the text challenges the idea that there is a continuous order of sustaining causes leading back to a first cause, a concept that Aquinas implies but does not clearly demonstrate through empirical evidence.

8. Jek
Posted at 22:01:25 Wednesday January 22 2025



I made a response to Joe's argument, where he says that the first way only establishes that the first mover is pure act in the relevant respect, but that it could contain a potential in another respect. I wrote an answer that I would like you to analyze.

9. Jek
Posted at 22:02:59 Wednesday January 22 2025



1. Actuality is prior to potentiality.

2. Nothing is prior to God. If God were all potential, actuality would have to be prior to God.

3. Therefore, God cannot be all potential.

Given this premise, we need something actual before a potential that has been actualized, but we do not necessarily need actuality before potentiality that has not been actualized. Therefore, you can have a being or beings that are composed of potentiality and actuality without anything being prior to them. Thomas's claim is much broader: it is not just that those things that are actualized have to be actualized by something else; he says that actuality is prior to potentiality without qualification. Therefore, by this principle, unrealized potentials would have to be preceded ontologically by actuality proportional to the potential. But why would anyone believe this broader premise? He gives a reason (Thomas Aquinas): "so that everything that is in potential can be reduced to actuality, only by some being in actuality." It may not be clear how St. Thomas's premise follows from this causal premise, but the way I interpret it is that potentiality could not exist if there were no actuality to actualize it. Therefore, the existence of potentiality depends on actuality. Thus, there is a kind of sustaining of the relation between unactualized potentials and actuality at the moment when the actuality proportional to the unactualized potentiality ceases to exist. The same is true of unactualized potentiality. Therefore, actuality is absolutely prior to potentiality.

Joe challenges premise two of this argument, saying that there is an immutable first, but that does not mean that it is immovable. Therefore, there may be something ontologically prior to the immovable first mover of a given causal chain. I think we have to consider that the essential conclusion of the first way is not that there is a being that is actually not moving. If that were the conclusion, an immobile being could only be a fully actualized being that has just finished its movement. I don't think that's what St. Thomas Aquinas was trying to say. As explained earlier, the key here is the fact that reality is derived from another. Since this is indifferent to that reality itself, that's why Aquinas explains movement in terms of potentiality and actuality in the first way. All that the first way essentially says is: "These things do not have what they have independently; they derive from another." We know this because they have the capacity to change, and things that have the capacity to change are in potentiality to the relation of actuality. They do not have that reality in themselves.

So when we look at Joe's claim that the first way leads you to an unmoved being that could have been moved in some way or at some other time, what you have to ask yourself is: does this make sense? Does an unmoved being have all its actuality in itself, or is it derived from another? If the being is mobile, then the answer is no. Even if the mobile being is not currently moving, the actuality that underlies this currently unmoved potentiality does not exist in the being itself; it is derived from another, even though the potentiality is not currently being actualized. The being still participates in the actuality of something external to itself, because that actuality is the source of the potential. So we conclude that the first unmoved mover cannot contain any potentiality.

10. Jek
Posted at 01:03:55 Thursday January 23 2025



Hello again Mr. Nigel, I wrote a text about whether a compound has existential inertia. Can you analyze whether this argument holds against existential inertia?

11. Jek
Posted at 01:06:55 Thursday January 23 2025



Let's imagine a compound, like a car. The car is made of materials like iron, aluminum, among others. These materials are called matter, but matter, by itself, is not capable of originating itself. It depends on a form that configures it and grants its existence as something defined, like a car.

Now, the car exists, but can it continue to exist by itself? In order for the car to run continuously, it needs fuel. This fuel, however, is only provided by something external, like the driver who fills up the car at 10 am. Thus, the car only continues to operate because the gasoline is present. What does this demonstrate? That a compound, like the car, cannot exist or operate alone, because its internal composition is insufficient to guarantee its continued existence.

For example, if the gasoline runs out, the car will stop working at the next instant (moment t1). This means that every compound depends on something external to continue existing or functioning. This dependence reflects an aspect of support: the car, as a composite, does not have an intrinsic power to sustain its operation or continued existence. Therefore, it depends on an extrinsic power to sustain it.

Thus, we conclude that every composite that depends on another to exist demonstrates a relationship of dependence. This means that its existence is subordinate to something greater or external, which gives it support.

12. Nigel Cundy
Posted at 19:39:05 Thursday January 23 2025

Responses to 9 and 11

Your response in 9 looks mostly reasonable to me. I would perhaps quibble with the statement "potentiality could not exist if there were no actuality to actualize it." I sort of see what you mean here -- 1) if potentiality represents a possible change, 2) and every change requires something actual to actualise it, 3) then without anything actual no change is possible; 4) therefore without some actuality there can be no potentiality. If this is what you had in mind, then my concern is that there might be an equivocation in the meaning of "possible" between points 1 and 3. Point 1 means possible given the form of the being (regardless of whether or not the circumstances ever arise to allow it to happen), i.e. a possibility internal to the form, while point 3 means possible in the sense that the external circumstances might arise to let it happen, i.e. a possibility dependent on factors external to the form. Or do you mean it in the sense that only an actual being can have potentia? Then we might struggle with spontaneous generation (depending on how you handle that). Also, I think the question you are trying to answer is whether God has any potentiality, while what you answer is whether God can be all potentiality. These are different things.

I don't really follow your argument for 11. 1) It is an argument from analogy, which is often weak. 2) The car continues to exist as a composite being regardless of whether or not it has any fuel. Without fuel, it cannot exhibit one of its final causes, but it still exhibits others (such as to reflect light, or maintain its shape in the absence of a large force, etc.).

I'll get to your objection in point 7 later.

13. Jek
Posted at 22:56:49 Thursday January 23 2025



Hello Mr. Nigel, on the question of the car analogy, I wanted to say that a compound in itself cannot have total persistence by itself. I do not fully accept divine conservation; I'm kind of a minimalist on that issue. If you know of him, a Thomist named Daniel Shields released a book about the first way. The argument about potentiality is about whether God contains any potential; in the argument I assume that the first being cannot contain potentiality.

14. Jek
Posted at 23:00:28 Thursday January 23 2025



We can clearly see now that in the First Way, Aquinas cannot be arguing that an unmoved mover is necessary to keep everything in motion at every instant. This is because Aquinas does not believe the universe would freeze instantly in place if its mover stopped moving it. In fact, Aquinas states in his response to the twenty-sixth article that divine power is required to prevent the human body from corrupting (and thus undergoing the motion of dissolution) if the heavens were to cease rotating; the reasoning used applies to all composite bodies. To the extent that the conventional interpretation holds that natural beings must be continuously moved by an external mover while they are in motion, it clashes with Aquinas's understanding of physics and cosmology. Here is the excerpt from the Thomist author.

15. Jek
Posted at 23:05:18 Thursday January 23 2025



Yes, the compound can continue to exist without gasoline, but what I assume is that it will still have an external dependency even without being fully sustained by itself. I don't understand much about physics like you do; could you mention physical considerations, other than the radioactive decay you presented, against existential inertia?

16. Jek
Posted at 23:11:35 Thursday January 23 2025



Nature and Nature's God: A Philosophical and Scientific Defense of Aquinas's Unmoved Mover Argument. Here is the name of the book, if you are interested.

17. Jek
Posted at 23:24:48 Thursday January 23 2025



Could you clarify this argument further?

Regarding my argument, it is based on the assumption of the car's movement, but not entirely on what the car will be beyond that. It’s about stating that for the car to continue running, it depends on gasoline. I interpret this as gasoline being something distinct from the car itself—an existence separate from the car’s. The car's existence is "X," and the gasoline’s existence is "Y." The reason the car continues to run depends on "Y," but not on its own existence.

This argument is meant to draw an analogy to the idea of a being existing, but not by its own existence. Apologies if there’s any mistake in my reasoning.

18. Nigel Cundy
Posted at 16:55:21 Sunday January 26 2025

First way and existential inertia

Dear Jek,

Sorry for taking time to respond, and not responding to everything. I have numerous things on my mind at the moment, and can't always respond quickly.

I agree that some versions of the cosmological argument do not depend on or imply divine conservation. Many claim that Aristotle's own predecessor to the first way is among these. I am not convinced that Aquinas' formulations of the argument are like this, due to his reliance on essentially ordered series. In particular, I would recommend Aquinas' argument from De Ente et Essentia, which I think strongly implies divine conservation. Gaven Kerr has written some good articles on this topic -- I would start with his essays in his Collected Articles on the Existence of God. Edward Feser has also written on this topic; see (for example) the works referenced in this post:

http://edwardfeser.blogspot.com/2020/02/agere-sequitur-esse-and-first-way.html, and he has other posts on the subject which you can find by searching. Schmid and Oppy seem to be the main defenders of existential inertia.

But this is a complex issue which I am not going to resolve in a quick comment.

I'm not sure that the quotation you give in 14 is against existential inertia. "Divine power is required to prevent the human body from corrupting." Corrupting in Aristotle's terminology is equivalent to annihilation in contemporary physics. That statement is an expression of divine conservation. The quote itself states that the absence of continual divine action wouldn't leave things frozen in place; but that is a discussion concerning motion rather than existence.

Thanks for recommending that book. I have added it to my reading list.

As far as physical arguments against existential inertia: I cited radioactive decay as possibly the best known example, but anything small enough to be governed by quantum physics can undergo spontaneous change and even spontaneous corruption (or decay into something else), as long as it has enough energy to produce the things it decays into. This is true of fundamental particles; for example a photon can spontaneously decay into an electron/positron pair (or any other fermion/anti-fermion pair).

19. Nigel Cundy
Posted at 17:51:23 Sunday January 26 2025

Response to 7

Not knowing the paper you took this from, it is difficult to respond in detail. Aquinas distinguishes between two types of causal series: accidentally ordered series, and essentially ordered series.

In an accidentally ordered series, the power to propagate the next member of the series is present essentially in each object (i.e. due to its nature). The classical example of this is begetting offspring.

In an essentially ordered series, the power to propagate the next member of the series is only accidental to the members of the series, i.e. they need not possess it. The classic example is a stick pushing a stone pushing another stone and so on. Here each member of the series is dependent on all previous members of the series in order to bring about its effect. The stone can only push another stone if it is itself in motion. We know that it will eventually stop being in motion unless it is continuously provided with energy to compensate for that lost to friction. However, a stone is not capable of generating that motion by itself. It needs something other than a stone in order to do so. An infinite series of only moving stones is thus not possible (I don't see why there can't be an infinite series of stones, beyond the physical impracticality, but it has to contain something which is not a stone in order to explain the continued motion despite energy being lost to friction). There has to be something that is not a stone providing the force. The key element in question here is that the property being passed down the series is not essential to the being's nature. Each member of the series derives this property from those prior to it, i.e. it is dependent on them.

An accidental series could continue forever with only that type of object, because the power to generate the property in question is explained by the nature of the being itself. An essential series requires a prime mover different from the objects we observe in that series, because the existence of those objects cannot explain why that property is present; only something which has that property, and the power to pass it on, essentially (i.e. as part of its nature) can explain it.

The second way discusses efficient causality, so the property in question is existence. For most things (everything except God, in fact, although I would need further argumentation to make this point), existence is distinct from essence, i.e. the thing can come into or out of existence. (Obviously Kant argued that existence was not a predicate, but he was wrong. Saying "this unicorn exists" or "this particular horse exists" -- i.e. is capable of interacting with other beings -- adds greatly to our understanding of the unicorn or the horse; it tells us it is not merely an idea in our head.) Thus a thing cannot in itself explain why it exists. It depends on the previous members of the series for that explanation. And ultimately the series itself depends on something which exists essentially. The traditional definition of God is the uncausable cause, i.e. the first member of the chain of causes, so God is this being which exists essentially.

That's basically the argument of the second way expressed in my own words.

So it is difficult to see what the objection you raised is discussing. I have not mentioned the existence of sustaining causes as a premise of the argument -- possibly that's a misunderstanding by the author concerning the nature of essentially ordered series. The distinction between essence and existence does mean that something cannot in itself explain the reason for its continued existence, so it needs something outside itself to explain its continued existence, which would ultimately be (to avoid a vicious infinite regress) a sustaining cause. Obviously we are getting back to the topic of existential inertia here. But I'm not sure quite why the article you cite is using the terminology of sustaining causes with respect to the second way.

"Finally, the text challenges the idea that there is a continuous order of sustaining causes leading back to a first cause, a concept that Aquinas implies but does not clearly demonstrate through empirical evidence." Why is empirical evidence the standard here? Aquinas's argument is a philosophical one, not a purely empirical one. It takes its premises -- that something exists, that in some of the things which exist essence is distinct from existence, and the principle of causality -- and then draws conclusions from them. It does not empirically demonstrate that there is a first cause, but it demonstrates through a combination of empirical evidence and reason that there must be a first cause.

For reference, here is your summary of the objection that I have been discussing:

"Aquinas presents the idea that sensible things (things we perceive with our senses) are connected by an order of efficient causes, such as the generation of offspring or the creation of objects by sculptors. However, the text raises doubts about sustaining causes, which are those necessary for a thing's continued existence at a given time. While we can observe generating causes (e.g., chickens laying eggs), it is unclear whether any sensible things are sustained by external causes in the way Aquinas suggests. The author questions whether the existence of a thing at a particular time depends on external causes, like oxygen or heat, and argues that these causes are more about perpetuation than direct sustenance. Finally, the text challenges the idea that there is a continuous order of sustaining causes leading back to a first cause, a concept that Aquinas implies but does not clearly demonstrate through empirical evidence."

20. Jek
Posted at 19:27:22 Sunday January 26 2025



Thank you very much for the answer, regarding one more question, regarding the first copy, do you think it has backgrounds?

21. Jek
Posted at 20:07:04 Sunday January 26 2025



Hello Mr. Nigel, there is an article that makes an argument against creation ex nihilo. I would like you to look at it, and perhaps you could counter-argue against the objection.

22. Jek
Posted at 20:07:42 Sunday January 26 2025



The problem of creation ex nihilo can be expressed in terms of the following argument:

1. All concrete objects that have an originating or sustaining efficient cause have an originating or sustaining material cause, respectively.

2. If classical theism is true, then the universe is a concrete object that has an originating or sustaining efficient cause with neither an originating nor a sustaining material cause.

3. Therefore, classical theism is false.

The argument is valid, and so the conclusion follows from the premises of necessity. What, then, can be said on behalf of the premises?

Premise 1 expresses a causal principle, which I shall call the Principle of Material Causality, or PMC for short. In simple terms, PMC says that all made things are made from other things. A bit more carefully, it says that concrete objects (and aggregates of such) have an originating or sustaining material cause whenever they have an originating or sustaining efficient cause, respectively. Before I defend the premise, some preliminary remarks about terminology are in order.

First, concrete object denotes at least the sorts of entities classically individuated by the ontological category of substance, and is meant to distinguish the entities at issue from those of other ontological categories (e.g., properties, relations, events, tropes, and the like). Examples of substances or individuals thus include atoms, stars, rocks, planets, trees, animals, people, and (if such there be) angels, Cartesian souls, and gods. They are thus to be distinguished from concrete entities in other ontological categories (shapes, surfaces, events, and the like) and abstract objects (propositions, numbers, sets, and the like).

23. Jek
Posted at 20:09:57 Sunday January 26 2025



The next two key terms in premise 1 are those of originating cause and sustaining cause. By the former, I mean a cause of the temporal beginning of a thing's existence (if it should have such), and by the latter, I mean a cause of a thing's continued existence. So, for example, matches and lighter fluid are at least partial originating causes of the existence of a flame, and the oxygen that surrounds it is at least a partial sustaining cause of the flame's existence.

Finally, material cause aims to capture (roughly) Aristotle's notion of the term, and to individuate the type of cause in play from the other three sorts of causes distinguished by Aristotle, viz., formal, efficient, and final causes. In particular, by material cause, I mean the temporally or ontologically prior things or stuff from which (though not necessarily of which) a thing is made. So, for example, the originating material cause of a shiny new penny is the parcel of copper from which it was made; the originating material causes of a new water molecule are the hydrogen and oxygen atoms from which it was made; and the sustaining material causes of a flame are the reacting gases and solids from which it is made.

Two points about the causal premise merit special emphasis. First, PMC is restricted to concrete objects as we've defined them. As such, it is neutral as to whether entities in other ontological categories require a material cause. Second, the requirement of a material cause is restricted further to just those concrete objects that have an originating or sustaining efficient cause. It therefore allows for the possibility of concrete objects that lack a material cause, namely, those that lack an originating or sustaining efficient cause.

24. Jek
Posted at 20:14:31 Sunday January 26 2025



So, for example, the premise allows that the universe may lack a material cause of its existence if it is both beginningless and also lacks a sustaining efficient cause. It also allows that a universe with a temporal beginning may lack a material cause if it also lacks an originating and sustaining efficient cause. An example of the latter sort of case might be a temporally finite, four-dimensional "block" universe. As such, the causal premise is neutral as to whether all concrete objects begin to exist, and as to whether all concrete objects that begin to exist have a material cause. The causal premise only rules out concrete objects that have an originating or sustaining efficient cause but lack a material cause.

Is PMC plausible? It certainly seems so. First, PMC enjoys abundant empirical support. This is perhaps most clearly seen in the case of the extremely well-confirmed law of the conservation of mass/energy. The law states that if there is a given quantity of mass/energy at a given time, then it must have been caused by exactly the same quantity of mass/energy at any earlier time. In general, though, our uniform experience is such that whenever we find a concrete object with an originating or sustaining efficient cause, we also find it to have an originating or sustaining material cause, respectively. Furthermore, there seem to be no clear counterexamples to the principle in our experience. What explains this? PMC is a simple, conservative hypothesis with wide explanatory scope, which, if true, would best explain this data. Experience thus provides significant abductive support for PMC.

Second, consider a version of PMC with stronger modal force:

Strong PMC: Necessarily, all concrete objects that have an originating or sustaining efficient cause have an originating or sustaining material cause, respectively.

If you want more information, the author's name is Felipe Leon; there is an article where he discusses this.

25. Dominik Kowalski
Posted at 12:53:16 Saturday April 26 2025

Response to Jek

Please consult Josh Rasmussen's response to this question in his co-authored work with Leon. You will notice that the ill-begotten definition of the principle poses no issue. Plus, "material" is exceedingly hard to define, and the principle already doesn't make sense on an Aristotelian interpretation.

One quick response is from Brandon Watson. I think that should be enough for you, though Rasmussen is more thorough and interesting:

https://branemrys.blogspot.com/2023/08/creation-ex-nihilo.html?m=1

26. Nigel Cundy
Posted at 21:21:21 Saturday April 26 2025

Response to Jek

Sorry, Jek, I somehow missed your comment when it first came out.

I would object to premise 1 of the argument: "All concrete objects that have an originating or sustaining efficient cause have an originating or sustaining material cause, respectively." I think that some concrete objects have both an originating material and efficient cause, including those we observe. But it is not valid to argue from this by induction that all concrete objects with an efficient cause also have a material cause. Induction cannot show that there are no exceptions. And arguments like the second way suggest that there must have been an exception. And with regard to the sustaining cause being material, I would reject that entirely. I regard God as (either wholly or in part) the sustaining cause for everything.

"First, PMC enjoys abundant empirical support. This is perhaps most clearly seen in the case of the extremely well-confirmed law of the conservation of mass/energy. The law states that if there is a given quantity of mass/energy at a given time, then it must have been caused by exactly the same quantity of mass/energy at any earlier time."

This, I think, is a good example of where the atheist/deist and theist views of physics differ. The atheist/deist regards physics as operating independently of God. The theist regards physics as a description of God's sustaining of the universe in the absence of any miracle, i.e. any special circumstances where God has a specific reason to act differently.

The conservation of energy and momentum arises from two sources. In classical physics, it arises from the translation symmetry of the laws of physics: the interactions only depend on the relative distances between objects. In quantum physics, Fourier transforming the creation and annihilation operators into the momentum basis converts all spatial dependence into exponentials e^{ip_i x} for annihilation operators or e^{-ip_j x} for creation operators, and then integrating over the location of the interaction converts this into a Dirac delta function forcing the sum of the momenta of the annihilated particles to equal the sum of the momenta of the created particles. This requires that the operators (in the location basis) interact through point-like or differential terms, i.e. it requires the locality of the interactions.
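As a sketch of that last step (schematic, with a two-in, two-out interaction chosen purely for illustration and overall normalisation factors suppressed), integrating the product of these exponentials over the interaction point x gives

\int d^3x \, e^{i(p_1 + p_2) x} \, e^{-i(p_3 + p_4) x} = (2\pi)^3 \, \delta^3(p_1 + p_2 - p_3 - p_4),

so any term in which the incoming momenta p_1 + p_2 differ from the outgoing momenta p_3 + p_4 contributes nothing, which is just the statement of momentum conservation at the vertex.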

But when God has a special interest in producing a special outcome, the principles of translation invariance and locality are thrown out. The outcome God wants is at a specific place and time, not necessarily at the place and time where the interaction occurs. Thus there is no reason for energy/momentum to be conserved when God performs a miracle.

In other words, this argument assumes that God cannot perform a miracle. The second argument also assumes that we can extrapolate from non-miraculous events to all events, including the miraculous, which again in effect assumes that there are no miracles. Given God's omnipotence, that seems a rather bold assumption to make without begging the question against theism.


