The Quantum Thomist

Musings about quantum physics, classical philosophy, and the connection between the two.

A Universe from Nothing? Part 3: Fine Tuning
Last modified on Sun Jun 16 00:23:25 2019

This is the third post in a series discussing Professor Lawrence Krauss' work A Universe From Nothing. In the first post, I gave an introduction to the underlying cosmology, and a quick overview of the rest of his work. Since his discussion of the cosmology was broadly correct, and very well presented, I am (largely) skipping over those chapters and concentrating on those parts of the book which I find problematic.

Last time I had a look at chapter 4, which is where things start to go awry. The chapter was about particle physics, and Krauss' claim was that the uncertainty principle allowed matter to spontaneously emerge from the vacuum, with nothing before it, albeit only for very short periods. I suggested that his arguments for this were poor, and that he made a number of mistakes with regards to the physics.

In his fifth to seventh chapters, he again turns to cosmology. Once more, he does a generally good job here, and I agree with most of what he has to say. Not everything, as I will come to in a moment, but most things. The eighth chapter, however, comes to the issue of fine tuning or the anthropic principle. This is one of the hot topics in the contemporary science and religion debate. I want to spend most of my time in this post discussing this chapter.

Firstly, though, I will summarise chapters five to seven.


Chapter five resumes the story where it left off in chapter 3, with the discussion of the flat universe, and the reception that he and his colleagues received for proposing it. The main problem was that he needed a value of the cosmological constant which was incredibly small, but not zero. Zero in physics is usually easy to explain: you assert some symmetry which forbids the term in question. The problem with this is that the cosmological constant is a dimensional quantity (so has units -- it has the dimensions of an inverse area), so can only be compared with other quantities with the same units. For example, we can easily compare a one meter rod with a two meter rod, and say that the first is half the size of the second. But we can't compare a one meter rod with π. The number we assign to the length of the meter rod depends on whether we measure it in inches or furlongs, while π is fixed and constant. If the cosmological constant arose from standard physical processes, one would expect its value to be comparable to the sort of values we see associated with the standard model particles. Physicists like to multiply by appropriate factors of Planck's constant and the speed of light to convert inverse lengths into masses, and if we do so, we find that the square root of the cosmological constant corresponds to a mass about 35 orders of magnitude smaller than the electron. Krauss' own calculation assuming vacuum production was even worse, and he got an answer about 60 orders of magnitude wrong. The cosmological constant would then arise from an almost complete cancellation between two large numbers, which miss each other by only a tiny amount. This has no known explanation.
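That conversion can be sketched with a few lines of arithmetic. This is my own back-of-envelope calculation, not Krauss'; the inputs are the standard measured values, and the exact count of orders of magnitude depends on the conventions used:

```python
import math

# Back-of-envelope conversion of the cosmological constant to a mass scale.
# Illustrative only: the observed Lambda is roughly 1.1e-52 per square metre,
# and the precise order-of-magnitude count depends on conventions.
hbar = 1.054571817e-34    # J s, reduced Planck constant
c    = 2.99792458e8       # m/s, speed of light
Lam  = 1.1e-52            # 1/m^2, approximate observed cosmological constant
m_e  = 9.1093837e-31      # kg, electron mass

# sqrt(Lambda) is an inverse length; hbar/c converts an inverse length to a mass.
m_lambda = hbar * math.sqrt(Lam) / c
orders = math.log10(m_e / m_lambda)
print(f"Lambda mass scale: {m_lambda:.1e} kg, ~{orders:.0f} orders below the electron")
```

With these inputs the mass scale comes out some tens of orders of magnitude below the electron's, in the same ballpark as the figure quoted above.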

Nonetheless, the evidence for the small value of the constant began mounting up. A few hints came from the age of the universe compared with its expansion rate. Precise measurements of the rate of expansion of the universe sealed the deal. One would expect the universe expansion to slow down, as gravity pulled all matter together. What was found was that it was accelerating slightly. There are a few ways of explaining this within the framework of general relativity, but a cosmological constant is the simplest. Other studies, such as of how galaxies cluster together, needed a similar value of the constant to make them work. With several independent methods all pointing to the same thing, the constant is now firmly established.

Chapter 6 tackles the problem of why the universe can be so flat. A flat universe is right on the knife edge between rapid expansion and collapse. Every region of space has to be just right. How did this come about? The standard proposal is through a very rapid period of expansion early in the universe. If you look at a small part of the sea up close, you will see the waves, and compared to the area you are studying they might be quite large. But blow things up to the size of the ocean, and it appears to be almost perfectly flat. A rapid expansion has the effect of smoothing out distortions. Equally, the microwave background is very uniform, and this is best explained through a rapid early expansion of the universe.

Why might this inflation stop? That's quite easy for physicists: you have a phase transition. An everyday example is the phase transition between ice and water. If the temperature is above about 273 K (and you give the system enough time to reach equilibrium), then it is liquid. A little bit below, and it becomes solid. A small change in the external conditions leads to a big change in the properties of the substance. Inflation is probably driven by a field with similar properties. The energy distribution of the various possible states depends on external factors such as the temperature and density of matter. It is a function that has two distinct types of minima. One is when the value of the field is zero, and the other is when it is non-zero (let's call it 1 in some appropriate units). At very high temperature and density, the non-zero minimum has the smallest energy, so all these particles will be in state 1. Gradually decrease the temperature (as happens when the universe expands), and the energy of state 1 increases, while the energy of state 0 remains the same. At a critical temperature, the energy of state 1 will equal the energy of state 0, and all the particles will quickly jump from state 1 to state 0. If inflation is driven by particles specifically in state 1, then inflation suddenly stops. Of course, quantum indeterminacy plays its part, and you don't get this jump at precisely the same time at different places in the universe; so the universe is not completely homogeneous and the microwave background not completely uniform. The size of these distortions can be computed, and the theory again agrees well with measurement. However, the precise details of what causes inflation remain a mystery.
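The jump between minima can be sketched with a toy model. The numbers below are invented for illustration, not a real inflaton potential: state 0 keeps a fixed energy, state 1's energy rises as the temperature falls, and the system occupies whichever state is lower.

```python
def preferred_state(T, a=5.0, b=1.0):
    """Return the lower-energy state at temperature T in a toy two-state model.

    Hypothetical parameters: state 0 has energy 0 at all temperatures, while
    state 1 has energy a - b*T, which rises as the universe cools.
    """
    E0 = 0.0
    E1 = a - b * T
    return 1 if E1 < E0 else 0

# The critical temperature for these toy numbers is T = a/b = 5: above it the
# field sits in the inflating state 1; below it, everything drops to state 0.
for T in [10.0, 6.0, 4.0, 1.0]:
    print(T, preferred_state(T))
```

A small change in the external parameter (the temperature) flips which state is preferred, which is the essence of the phase transition described above.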

But now, Krauss enters another rather dangerous area. He asks where all this energy comes from. How can the density of energy remain constant in an expanding universe with a cosmological constant? Space expands, so if the density of energy remains the same, the total energy will increase.

Krauss suggests a resolution: if the energy density of the universe is zero, then it would remain zero as the universe expands. How do we achieve this? By including gravitational energy.

In Newtonian physics, particles carry positive energy; for example, kinetic energy is always positive. The same is true for mass energy, and binding energy. However, the gravitational potential energy is negative. So if you want to calculate the total energy of the universe, you need to subtract the total gravitational energy from the total energy of all the matter. Krauss argues that this value can be zero. In other words, the total energy of the universe is claimed to be zero. That's how you can get matter from nothing while maintaining the conservation of energy. The positive energy contained in matter is precisely cancelled by the negative gravitational energy generated by the new particles. This idea will prove crucial to Krauss' overall thesis.
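The Newtonian version of this claim can be made concrete with a short calculation (my own sketch with round numbers, not Krauss' own working): a shell of matter at radius r, moving with the expansion at v = H r, has zero total energy precisely when the density equals the critical, flat-universe value.

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2, Newton's gravitational constant

def energy_per_kg(H, rho, r):
    """Newtonian energy per unit mass of a shell at radius r moving with
    the Hubble flow v = H*r, inside a uniform sphere of density rho."""
    v = H * r
    M = (4.0 / 3.0) * math.pi * rho * r**3   # mass interior to the shell
    return 0.5 * v**2 - G * M / r            # kinetic plus potential energy

H = 2.2e-18                              # s^-1, roughly today's Hubble rate
rho_crit = 3 * H**2 / (8 * math.pi * G)  # critical density of a flat universe
print(energy_per_kg(H, rho_crit, 1e22))  # essentially zero at critical density
```

At any lower density the energy per unit mass comes out positive (an open universe); at any higher density it comes out negative (a closed one). Only the flat case cancels exactly, which is the point Krauss is relying on.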

There are several problems with this analysis. The first is that it relies on Newtonian gravity, with its description of gravity as a force, with an associated potential energy. General relativity works differently. Rather than a force acting on objects to deviate them from straight line constant velocity motion, general relativity curves space time, effectively redefining (or rather, defining more carefully) what counts as straight line inertial motion. In the right coordinate system, there is no gravitational force. There is no unambiguous definition of gravitational energy in general relativity. So it is not even clear what the phrase "The total energy of the universe including gravity" means. If it doesn't mean anything, then how can we say that it is zero?

The second problem is, I think, even more damning. There are two different definitions of energy used by physicists. In the classical limit, they reduce to the same quantity, which is why the same term is used. Cosmology and quantum physics both evolved from Newtonian physics, but modified it in different ways. Quantum mechanics was originally derived from Hamilton's reformulation of classical physics: it replaced the classical Hamiltonian with an operator, and defined the energy as an eigenvalue of that operator. General relativity, on the other hand, arises from considering the space time symmetries of the universe, and from this point of view, the Stress Energy tensor is the natural definition of energy to use. In classical physics, these are two different ways of expressing the same thing. But quantum physics and general relativity modify them in incompatible ways.

The Stress Energy tensor is (in classical physics) derived as a consequence of ensuring the action is unchanged under space time coordinate transformations (coupled with a transformation of the gauge field for gauged particles). One component of this tensor gives an energy density. This is used in general relativity, and so when the cosmologists discuss energy, they mean the definition taken from the stress energy tensor. In classical physics, the stress energy tensor is conserved, which implies that the total energy and momentum as derived from this tensor don't change in time. This derivation depends on time-translation symmetry (things don't change if you move everything forward in time by a second), which is not satisfied in an expanding universe, nor when we come to general relativity.

Quantum physics also has the stress energy tensor. It appears in a type of Ward identity. The most common Ward identities are various relations associated with gauge transformations, which equate various amplitudes used in quantum calculations with the currents that would be conserved in classical physics. These conserved currents are used to construct creation operators for composite particles such as pions. One can also, however, construct Ward identities for transformations of the space time coordinates, and if you do this the stress energy tensor plays a role. However, in quantum physics, the stress energy tensor is not conserved. The total amount of energy in the universe defined this way can change in time.

The second definition of energy used in quantum physics is that energy is the eigenvalue of the Hamiltonian (time evolution) operator. It is a number used to label metastable states of the system. This quantity is conserved, and it is always positive. So when we discuss the conservation of energy, we ought to use this definition of energy.

So this is Krauss' problem. He combines a discussion of gravitational energy with a discussion of the conservation of energy. But when he discusses gravity he is using one definition of energy, while when discussing the conservation of energy he ought to be using the other. Most people, when quantising gravity, introduce a new particle, the graviton, to convey the gravitational force. There are theoretical problems with gravitons, so I'm not convinced that this is the right approach, but if we go with the majority then the graviton energy would be the energy of the gravitational field and it would, like the energy of every other quantum particle, be positive.

Krauss' proposal for how the total energy of the universe could be zero only makes sense in the context of Newtonian physics. But everything else he discusses depends on post-Newtonian physics, whether general relativity or his interpretation of quantum physics. Thus his proposal is not convincing. I would also commend Luke Barnes' discussion of the matter.

It becomes even less convincing when he then contradicts himself. After discussing the escape velocity from the earth, and how a similar calculation can be performed on a galactic scale, he states

So what then do we find? In a flat universe, and only in a flat universe, the total average Newtonian gravitational energy of each object moving with the expansion is precisely zero.

I thought that gravitational energy was negative.

Anyway, onto chapter seven, where Krauss discusses the future of the universe. His conclusion is that the universe will continue to expand. As it does so, the stars will gradually disappear from the sky as they recede beyond the distance from which light can reach us; the light is redshifted to the point where it is impossible to detect. The density of objects would decrease. The microwave background will decrease in energy density, and become invisible as it is obscured by clouds of electrons between the galaxies. One of the strongest pieces of evidence for the big bang is that the lighter elements match their predicted relative abundances; but in time fusion will leave just the heavier elements. So we had better get cosmology right now, because the evidence is not going to be around for ever. As the universe nears equilibrium, any observers wouldn't be able to tell whether it had continued in a steady state forever, or had a beginning.

We live at a very special time. The only time we can observationally verify that we live at a special time.

This is all good material, and Krauss explains it well. Of course, he has to spoil it by throwing in a jibe against the religious (claiming that a divine intelligence has no observational evidence, a statement which many people would dispute).

A grand accident

So at last I have reached the chapter I wanted to focus on in this post.

We suffer from selection bias. We remember the interesting and unusual, but forget the mundane. Of course, given the number of events that happen to us, it is inevitable that the unusual will happen from time to time. But because the long stretches of mundane existence that separate these events are forgotten, we give the unusual more significance than it deserves.

Until recently, there was a belief among physicists that every parameter is significant. Know enough, and we would know why gravity is weaker than the other forces, or why the proton and electron have the mass ratio that they do. But this is changing. There seems to be nothing inconsistent about universes with different values of these parameters.

Krauss attributes this change to two things: the 120 orders of magnitude difference between his calculation of the cosmological constant and the real one, and that we live at a time where cosmology is possible (which itself depends on the value of the cosmological constant). I'm not so sure that these are the real reasons, but fine tuning is a real question, so let us carry on.

Krauss' first example is from cosmology. Dark energy (perhaps arising from a cosmological constant) implies a repulsive force in the universe, causing the universe to expand rather than contract as we might expect from the normal attractive gravity. We live at a time where the density of matter in the universe is similar to the cosmological constant, so the rate of expansion in the universe is just starting to accelerate. However, suppose the constant is a bit larger than it is. Then the force of gravity would, in effect, be weaker. The more rapid expansion of the universe would happen at the time where stars and galaxies started to form. Since gravity is the main driving force behind stellar formation, this means that there wouldn't be any stars, or planets. This is clearly bad for us. But reduce the cosmological constant, and the universe would collapse in on itself: again, no time for stars to form.

This leads us to the anthropic principle: we observe a universe that is just right for us, because if it wasn't just right, we wouldn't be here to see it. Of course, this doesn't explain anything without a further model. But I will get to that in a moment.

Particle physicists have even more firmly shown that the anthropic principle is a significant aspect of nature. Krauss doesn't go into detail, but the details are significant. There are around 30 dimensionless parameters drawn from particle physics and cosmology, which could take any value and the universe would remain self-consistent. The basic structure of the laws of physics is constrained primarily by symmetry principles, and also by a number of options characterised by an integer (for example three space dimensions, or three families of fermions, or three non-gravitational forces mediated by gauge theories). On top of this, there are real number parameters which describe (for example) the strength of the interactions between particles: how likely it is, for example, that an electron will emit a photon. There is no obvious philosophical or scientific reason why these parameters have to take the values that they do. I can see, for example, that it might be possible to deduce the symmetry laws from an underlying philosophy of nature. But all we have with regards to these constants is the need for self-consistency, and that's not enough.

There are over a hundred known constraints on these parameters required for things such as star formation and so on to occur. Vary any combination of the parameters by a small amount in any way we choose, and we will find ourselves breaking several of these constraints. In short, the parameters of physics have to take the values that they do for the universe to contain any degree of chemical complexity, let alone biological complexity.
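A toy Monte Carlo makes the point about compounding constraints vivid. The "constants" and "constraints" below are entirely invented; the only thing being illustrated is how several independent narrow windows multiply into a tiny joint probability.

```python
import random

# Purely illustrative: five made-up "constants", each sampled uniformly from
# [0, 1), with each hypothetical constraint allowing only a 1% window of that
# range. The joint viable fraction is (0.01)^5 = 1e-10, so random draws
# essentially never land inside it.
random.seed(0)

def viable(params):
    return all(0.495 < p < 0.505 for p in params)

trials = 200_000
hits = sum(viable([random.random() for _ in range(5)]) for _ in range(trials))
print(f"viable universes: {hits} out of {trials}")
```

Even with only five parameters and generous 1% windows, the viable region is one part in ten billion of the sampled space; with thirty parameters and over a hundred constraints, the real situation described above is far more extreme.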

The interpretation

One might say in response to all this, "Well, of course the universe will be ideally suited for us. So what?" The "so what" arises when we apply this to the philosophy of physics. We are asking why physical law is as it is. One explanation is that it just happens to be this way. Before we understood about fine tuning, this might have been reasonable. But we now know that there is such a small window in such a vast range of possibility. The chance of this happening by accident is so remote we can forget about it if there are any competing explanations. And there are always two: the sample is biased, or we have a large number of samples, and pick one of them. The first implies some power outside the universe which can shape the laws of physics, i.e. God. The other is known as the multiverse. Naturally, theists prefer the first of these options, and atheists the latter.

I should mention that although the argument from fine tuning is often used to support the design argument for God it is broader than this. Design arguments often assume an underlying mechanistic world view. However, the fine tuning argument also fits well in a theistic world view, where the laws of physics describe God's actions in upholding the universe (or how God tends to move the different forms of material substances). Add a few common assumptions about God's motivations, and one is left with the conclusion that if God exists, then the universe would be fine-tuned for life. This is to be compared against the conclusion that if God does not exist, and there is only a single universe, then it is immensely unlikely that it would be fine tuned for life. We can then turn this around and say that given the universe is fine tuned, fine tuning offers immensely strong evidence for the existence of a God who desires to interact with living organisms over the idea of a Godless single universe, or a universe with an apathetic God. Atheists thus have no choice but to believe in the multiverse, despite there being no possibility of experimental evidence for it, on account of their philosophical prejudice.
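The comparison being made here has the shape of a likelihood ratio. The numbers below are placeholders of my own, chosen purely to show the structure of the argument; nothing in them is a measured quantity.

```python
# Toy likelihood comparison. Both probabilities are hypothetical placeholders:
# the first supposes a designer who wants living observers is reasonably
# likely to choose life-permitting constants; the second supposes a single
# undesigned universe hits the narrow life-permitting window by chance.
p_finetuned_given_design = 0.5
p_finetuned_given_chance = 1e-100

bayes_factor = p_finetuned_given_design / p_finetuned_given_chance
print(f"likelihood ratio in favour of design: {bayes_factor:.1e}")
```

Whatever placeholder values one picks, so long as the chance likelihood is astronomically small the ratio swamps any reasonable prior, which is why the argument as stated above forces a choice between design and a multiverse.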

Krauss dismisses the idea of God being behind fine tuning in a single paragraph.

A purely religious argument, on the other hand, could take significance to an extreme by suggesting that each fundamental constant is significant because God presumably chose each one to have the value it does as part of a divine plan for our universe. In this case, nothing is an accident, but, by the same token, nothing is predicted or actually explained. It is an argument by fiat that goes nowhere and yields nothing useful about the physical laws governing the universe, other than perhaps providing consolation for the believer.

I have two problems with this paragraph. The first, and ultimately more significant, is that it assumes an underlying mechanism: that the laws of physics exist independently of God, and God (if needed) merely pushes a few buttons to set the parameters, and then sits back and enjoys the show. Obviously, theists have a different view, with (albeit this being simplistic since it neglects secondary causation) the laws of physics being a description of God's sustaining of the universe. And with this in mind, we can see why Krauss' statement that this idea doesn't explain anything is bizarre. The laws of physics can be derived from three axioms: physical indeterminacy, symmetry, and the various constants (both those being discussed here, and others such as there being three dimensions and the geometry of the universe). If one can argue that the first two of these are a consequence of the divine attributes, then all that remains is to explain the physical constants. If a universe with rational beings has more goodness than one without them, then the fine tuning argument almost completes the derivation of the laws of physics (the almost being that there is still a little bit of freedom in the values of these constants). The purely religious argument, therefore, doesn't explain nothing; it explains everything (at least, everything dependent on fundamental physics).

Krauss obviously prefers the multiverse. Firstly, we need the definition. Our universe is all we can see, or ever could see. Beyond that, there could be other universes which can never impact on us. The totality of all these is known as the multiverse.

Krauss' first argument is that the multiverse is possible, and since everything that is possible is bound to happen, a universe like ours is guaranteed to occur somewhere within a big enough multiverse. This, of course, relies on a dubious analogy between events in the universe and the laws which shape the universe.

He does, however, have stronger arguments, based on the idea that physical law itself implies the idea of the multiverse. He spends most of his time discussing string theory, and then bubble universes. These are, of course, not established theories, but merely speculation of what might lie beyond our best theories. At the moment all we can say is that they might be right, or might be wrong.

Bubble universes are based on the idea that space is big. I will assume that there is a background and rather boring space-time, before any inflation, which I will call the expanse. If inflation is caused by an indeterminate fluctuation, then you might have one point in the expanse where inflation happens to be triggered. That point will then blow up into a whole universe, with everything else pushed out of the way, never to interact with the internals of that universe. Then another point of the uninflated expanse will undergo inflation, and you will get another universe arising from it. And so on. In this way, you can have a large number of independent universes, which never interact with each other. The obvious problem with this scenario is whether it actually solves the problem of fine tuning. Wouldn't the laws of physics be the same in each universe? Fine tuning requires that each of the universes has a different set of physical constants. The natural expectation is that the constants would be the same in each universe; after all, each universe would emerge from the same expanse, and the inflaton field and presumably all the other fields would have to exist on that expanse. Differing constants might be explained if they arose from the vacuum expectation value of some spontaneously broken field. Before inflation is triggered, there is a symmetry between the constants, and they are equally likely to take any value. But as soon as each universe starts to inflate, there is a phase transition and the constants drop into one particular value, and remain fixed there for all time. The problem with this is that the expectation values of fields are always dimensional while the fundamental constants are dimensionless. So bubble universes and inflation by themselves aren't enough to explain fine tuning.

So Krauss' main discussion focusses on string theory.

String theory was originally proposed as an attempt to understand the strong nuclear force. Standard quantum field theory assumes that the fundamental building blocks of nature are point-like particles, zero-dimensional. String theory (as originally formulated) proposed instead an action where the fundamental particles were one dimensional objects, called strings. The different particles correspond to resonance vibrations of these strings. This partially explained the pattern of masses of the composite particles such as pions and protons, and for a while it seemed like a reasonable option. However, it only got the results partially right, and as the experimental evidence started pouring in for the quark model, and the theoretical problems of QCD were resolved, it became clear that quantum chromodynamics was the true theory of the strong interaction.

With the success of the standard model of particle physics, people started turning their attention to converting the final force of nature, gravity, into a quantum field theory. The natural thing to do was to use the same procedure that had worked so well for electromagnetism, and the strong and weak nuclear forces: convert the classical potential (potential energy function) for those forces into creation and annihilation operators, apply various commutation rules, and see where that takes you. Unfortunately, the classical potential in general relativity is the space time metric, which is used to define the geometry of the universe and to define distances between two different points. If we do this, then gravity will be carried by a spin-2 particle named the graviton. In my view, quantising the metric is philosophically troubling, and I prefer alternative approaches. But there is another, more serious, problem with this approach: it is not renormalizable. Many calculations in quantum field theory, performed naively, give infinite results -- unless you use exactly the right construction of creation and annihilation operators. For the standard model, we can start with a construction where the calculations are easier, and then rotate to the basis which gives the correct answers. This process is known as renormalization. However, in this naive quantum gravity, that procedure is impossible. The infinities cannot be removed. So a different approach is required.

What caught the eyes of the string theorists was that their theory naturally produced a spin-2 particle that looked a lot like the graviton. That opened the possibility that string theory could be the basis of a quantum theory of gravity. There were two main disadvantages of string theory. Firstly, it was only mathematically consistent in 26 dimensional space time. Secondly, it also implied particles known as tachyons which violated special relativity. After a promising start, these issues put a dent in string theory's momentum.

But it was then realised that combining string theory with another idea, supersymmetry, resolved the tachyon problem, and reduced the number of dimensions needed from 26 down to a mere 11. Supersymmetry was originally proposed as a solution to something known as the hierarchy problem, and it has a few other cute effects. It posits that every particle we know about is partnered by another type of particle. Every Fermion has a Boson super-partner, and every Boson a Fermion superpartner. So we all know about the electron, a spin half fermion. If supersymmetry is correct, there is also a super electron, a spin 0 Boson with the same mass. Of course, if there were such a particle, we would have observed it. But if supersymmetry is only approximate, it is possible that the superpartners can have a mass small enough to fix the hierarchy problem, but large enough that they wouldn't have been observed in the early 1970s, when this idea was first proposed.

So super-string theory emerged, and got people excited again. The remaining seven dimensions obviously aren't observed, but if they are compact (circular) and rolled up small enough we wouldn't notice them. However, there wasn't just one form of string theory, but several. When I was a graduate student, all the string theorists were getting excited because it had just been realised that these could be related to each other, and to eleven dimensional supergravity, through various symmetry transformations.

The next great revolution in string theory was the realisation that strings weren't the only things described by it. There were also higher dimensional objects called branes. Every string is either circular or has two ends. If it has ends, then it must start and finish on a brane. For example, we could have a four dimensional brane, with the strings attached to it. That four dimensional brane would represent our universe. But in an eleven dimensional space, there is plenty of room to have numerous different branes without them interacting with each other. Each one could represent a different universe. And there is no reason in many constructions of string theory why each universe should have the same particles in it or interactions between those particles. It is estimated that there are 10^500 possible brane universes consistent with string theory, each one constructed from a different way of compactifying the extra dimensions. And this has obvious implications for the multiverse solution to the fine tuning of the constants.
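Where a number like 10^500 comes from can be sketched schematically. This is not a real flux-vacua count; the figures below are illustrative of the combinatorics only.

```python
# Schematic counting: if each of ~500 independent choices made in
# compactifying the extra dimensions can take ~10 discrete values, the number
# of distinct four dimensional theories multiplies up to 10**500. Both numbers
# here are hypothetical round figures, not a real string-theory calculation.
choices_per_modulus = 10   # hypothetical values per compactification choice
moduli = 500               # hypothetical number of independent choices

vacua = choices_per_modulus ** moduli
print(f"distinct vacua: 10^{len(str(vacua)) - 1}")
```

The point is that the total grows exponentially in the number of independent choices, which is why even modest freedom per choice produces an astronomically large landscape.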

That 10^500 is troubling: it means that string theory can never make predictions which can be validated by experiment, and that is a serious problem for a scientist. Unless it can lead to specific experimental predictions, we have no way of finding out whether it is true. String theory has drawn into itself a large proportion of the theoretical particle physics community. The people involved in it claim that, as far as quantum gravity is concerned, it is the only game in town (which is false: there are rivals such as loop quantum gravity and twistor theory, and I would like to see more efforts made in more radical approaches where one doesn't quantise the metric).

My focus has always been on standard model physics, so I don't really have a side in the debate between string theory and loop quantum gravity. I have never really been convinced that any of these approaches are the right approach, and I am concerned about the amount of resources and brainpower that has been devoted to string theory and supersymmetry at the expense of alternative approaches. Unofficially, I would quite like supersymmetry and string theory to be false, because it would put a satisfying dent in rather too many rather grandiose claims. But officially, I'm neutral until convincing experimental evidence comes in. That is starting to happen. While superstring theory cannot be proved by experiment, it can be disproved. The Large Hadron Collider at CERN is now reaching the energies where the supersymmetric particles might be expected to be found. No evidence for them has (to date, and to my knowledge) been found. The simplest and most natural supersymmetric models have already been ruled out. (There is a lower mass limit based on the experiments, which increases as we perform better experiments, and an upper mass limit related to the hierarchy problem. Once these two meet, supersymmetry is shown to be false.) More complex models of supersymmetry are still possible, and we will have to wait until the LHC's successor to definitively rule these out. Without supersymmetry, string theory is dead.

Krauss has reservations about string theory: its fundamental nature and make-up are still a mystery, and we have no idea whether it has anything to do with the real world. But what he is trying to do is merely play down the hype (perhaps because it is useful for his current purposes). He notes that the years since the 1980s have not been kind to string theory, even though it has produced a lot of very nice mathematics. The emergence of branes, rather than strings, as the most fundamental objects suggests that the theory was not named especially well (which seems to be a common problem for particle physicists over the past century: we have the habit of naming phenomena before they are properly understood).

The uniqueness of the theory disappeared. There are numerous different ways of getting rid of the extra dimensions. We might possibly find direct (with a powerful enough particle accelerator) or indirect (because they are needed to make theoretical calculations work) evidence for them (which Krauss uses to pretend that the physicists' belief in unobservable extra dimensions, for which we might maybe one day find indirect evidence, is better than a religious belief in a God for whom we already have both direct evidence -- through miracles -- and indirect evidence -- through philosophical argumentation). But even if we start with a unique theory in ten or eleven dimensions, the numerous different ways of compactifying the extra dimensions mean that it will not reduce to a unique four dimensional theory. Each four dimensional theory would have different laws, different symmetries and so on. The number of possibilities is too large to contemplate. Rather than a theory of everything, string theory becomes a theory of anything. This of course means that string theory can never be experimentally confirmed. We don't have access to the extra six or seven dimensions, only the boring old four we are all used to, and string theory cannot make any useful predictions about the four dimensional physical theory that emerges from it.

But this is what Krauss needs to generate a multiverse. The non-uniqueness of string theory is no longer a vice of the theory, but a virtue. Krauss hopes that, by calculating the probability distribution of the different theories emerging from the fundamental theory, one might resolve the problem of the measure in the fine tuning argument. Maybe we will find out that most theories have a small vacuum energy, four forces and three generations of elementary particles. If this is the case, then it won't be an accident that we live in a universe like that.

Now suppose we did come up with a unified theory which predicted a period of cosmic inflation, or that each possible universe was sampled, and we found good experimental evidence for it (evidence which goes beyond what can be deduced from the fact that this theory would have to reduce to the standard model of particle physics and general relativity after some symmetry breaking); then we might have good reason to suppose that these explanations for the fine tuning are correct. Many very bright people have devoted their lives to the hope that somewhere, somehow, it would be possible to find experimental evidence for such a theory, and that its possible rivals are false.

This is the only way in which one might find evidence for a multiverse. In that case, there would be no fundamental reason why the constants of nature are as they are. Physics would become an environmental science.

Of course, there are weaknesses with this whole approach. Firstly, string theory has not been proved. The only experimental evidence we could find for it that couldn't also be explained by other approaches to quantum gravity would be direct evidence for the extra dimensions, which will require experiments at energies beyond anything that is ever likely to be practical. What experimental evidence we do have is casting doubt on one of its key components (string theory needs supersymmetry, but supersymmetry doesn't imply string theory). So every conclusion about a multiverse drawn from it remains speculation. Secondly, the consequences of string theory are still not fully understood. Even if string theory is true, it has not been proved that it would lead to a multiverse of the sort needed to overcome the fine tuning problem. Thirdly, it might not explain the fine tuning of the universe, as the underlying inflation model might itself depend on various parameters which need to be fine tuned. It just moves the fine tuning problem from one part of the theory to another.

Krauss ends the chapter with an appeal that philosophy needs to be based on the best available empirical evidence, and the best theories of physics. On this, I fully agree with him. But after that brief moment of agreement, we diverge again. To my mind, the best theory of physics available to us means the standard model of particle physics, which is why I took time to show that my own Aristotelian-based theism is consistent with it. I rely on the assumption that including gravity won't fundamentally affect this philosophy. A quantum theory of gravity will, after all, be a quantum field theory, just constrained by different symmetries or with extra particles (or maybe even in more than four dimensions). Einstein's gravity only had relatively minor effects on the mechanical philosophy (beyond how we think about space and time, although most of that is already present in special relativity, which is included in the standard model). I expect that the philosophical impact of adding gravity to the standard model will be similar. My criticism of Krauss is primarily that he does not base his philosophy on the best theories of physics. He does not grasp the full implications of the quantum revolution, but still holds onto an underlying mechanism. The rest is based on theories that are little more than fanciful speculation. Only in cosmology does he get his interpretation of physics right, but that is not enough by itself to come up with a solid philosophy of nature.

In any case, with the close of chapter eight, Krauss has finished his survey of modern physics. Now we turn to his philosophy, and first of all his attempts to show that nothing is something.

Reader Comments:

1. Scott Lynch
Posted at 05:10:18 Sunday July 7 2019

Thoughts on PSR and Fine-Tuning

Dr. Cundy,

I have been thinking a lot lately about an argument for the Principle of Sufficient Reason (PSR) presented by Fr. Reginald Garrigou-Lagrange in his book “God: His Existence and His Nature” (Tabulation of the Modes of Being, Section 24). The standard argument presented by people like Dr. Edward Feser is that positing brute facts is epistemically self-defeating and also not supported by observation. However, I do not know many who go so far as to say that it violates the Law of Non-Contradiction. However, Fr. Garrigou-Lagrange does go this far:

“To deny the principle of sufficient reason is to affirm that a contingent being which exists, though not by itself, can be uncaused or unconditioned. Now, what is uncaused or unconditioned exists by itself. Therefore, an uncaused contingent being would at the same time exist by itself and not by itself—which is absurd. This is precisely what St. Thomas means when he says: "Whatever it is proper for a thing to have, but not from its nature, accrues to it from an extrinsic cause; for what has no cause, is first and immediate." (C. Gentes, Bk. II, ch. 15, § 2). What is uncaused must by itself and immediately be existence itself. If the unconditioned were not existence itself, there could be no possible connection between it and existence, and it could not be distinguished from nothing. An absolute beginning, a being originating from nothing without any cause, is, therefore, an absurdity, since it would be both contingent, that is to say, not caused by itself, and at the same time uncaused, unconditioned, non-relative, that is to say, absolute or caused by itself. Its existence would be its own a se and not a se. Therefore, between unconditioned or uncaused contingency there is a contradiction.”

I think that what Fr. Garrigou-Lagrange is getting at here is that any object whose existence is finite and uncaused (a brute fact) must necessarily be in a state where change is possible (albeit brute). If change is not possible, then the object is not contingent but necessary. That is, it cannot be otherwise. However, if the object (being a brute fact) is not determined by anything, even its own nature, then the set of potentia that it can change into is infinite. So a simple electron could go from having negative one charge and existing in three dimensions to having positive two charge and existing in twelve dimensions to being a lizard that is made of a continuous (non-composite) substance. Essentially, the decay channels of the electron would be literally anything you can imagine that is not logically contradictory. And the probability of each decay channel (since that is also not determined by anything) would be equal. However, the probability of any discrete value being selected from an unbounded infinite set of values (zero to positive infinity) is zero. The same can be said for any non-infinite set of values (that is, the probability of selecting a number within the subset of real numbers from zero to one thousand out of the set of zero to infinity is zero). In fact, the only “number” that you can select from such an infinite set is infinity. Therefore, the probability of any finite contingent object or any finite set of potentia subsisting for any finite positive amount of time is zero. The only way to prevent this is to impart necessity to the object or set of potentia. I have noticed this when arguing with people about the PSR. People seem to want to sneak necessity into their brute facts so that they can say something like the universe is a brute fact but it has to be the way it is.
But this is equivalent to saying the universe is a necessary brute fact, or more explicitly, the universe is a necessary contingent fact and thus (most explicitly) a necessary non-necessary fact.

I think Garrigou-Lagrange’s argument works. I have not read Dr. Pruss’ book on PSR, and I would be interested to see what he thinks about the issue.

I would love to hear your thoughts in the meantime. Obviously this talk of probability and PSR is very relevant to the question of fine-tuning.

2. Nigel Cundy
Posted at 23:01:47 Sunday July 7 2019


Dear Scott,

Thanks for your question. It does look like an interesting argument. I have a few things occupying me in the first part of this week, but I'll try to respond towards the end of the week.


3. Julian
Posted at 13:45:49 Tuesday July 9 2019

Metaphysics and formal logic

Hello Dr. Cundy,

with great interest, I read your book "What is physics". Have you ever thought about a transcription of the chapters "A Classical Theory of God" and "Ethics" from English into formal logic? Normal language is always incomplete (e.g. if you say "or", this can be understood exclusively or inclusively). In contrast, mathematical and physical symbols have a taste of perfection. It would be so decisive if a scientist could show that "God exists" is as true as "2+2=4".


4. Anon
Posted at 03:41:03 Thursday July 11 2019


Hello Dr. Cundy,

What do you think of Roger Penrose's theory of mind, Orchestrated Objective Reduction (Orch-OR)? If it is true, James Ross's argument in "Immaterial Aspects of Thought" and "Thought and World" might be undermined. His argument rests on the assumption that any material system can be destroyed, introducing indeterminacy. However, according to Roger Penrose, the information that composes our minds exists "indefinitely" outside of the body after death. Also, I have heard that this information may exist in a timeless state, while James Ross requires material systems to be temporal.

I am less interested in the neurological details of this theory (e.g. whether the brain is too warm and wet for quantum coherence). I am more interested in the philosophical possibility of such a theory being true. Assuming the brain can house quantum computing, can this theory hold or does James Ross's argument still work?


5. Nigel Cundy
Posted at 23:19:07 Thursday July 11 2019

Formal Logic

There are a number of reasons why I didn't use formal logic in those chapters.

1) Formal logic is harder to understand than the more traditional (wordy) logic, particularly for those not trained in it. You might say that it is a bit much for me to say that, when my discussions of physics are mathematical, and thus hard to understand for those without the needed mathematical skills. However, modern physics is mathematical in construction; it is difficult to express the ideas precisely and convincingly without going into the mathematics. A logical argument, on the other hand, can be expressed in words rather than symbols. The direction I want to go next with this work is to make it easier to understand; to reduce the formalism rather than increase it. Also, more people have studied mathematics (to at least some extent) than have studied formal logic.

2) Most of the arguments I used in those chapters were based on other sources (the only consciously original arguments I make are parts of chapters 11 and 15, and the way I have combined the various threads together), and those sources made use of traditional logic. Obviously, I expressed the arguments in my own words and adapted them to make them consistent and fit with my own presentation, but it was still easier to write using the same methodology and formalism as my sources than translate it to a more formal logic.

3) I am trained in physics and mathematics, not in formal logic. Most mathematical papers express the logical part of their arguments in words, using formalism only where needed for the mathematics. Again, this is largely to make the papers more readable. But it is the style I am used to reading and writing in; so naturally I adopted it. Not being trained in formal logic means that it is not as easy for me to do that work as it is to write a mathematical treatise. Indeed, my mathematical training concentrates on those parts needed for physics: calculus, operator theory, geometry, group theory and so on; but there are other areas of mathematics where I am less knowledgeable: most relevantly set theory, which (as I understand it) is the basis of formal logic.

4) I am concerned that at least some ways of expressing formal logic imply metaphysical assumptions which can be questioned. Take, for example, the following argument:

1) All men are mortal; Socrates is a man; therefore Socrates is mortal.

2) Let S represent Socrates, H the set of all men, and M the set of all mortal beings. Then,

S ∈ H; H ⊆ M; therefore S ∈ M.

At first glance, these seem to say the same thing. However, I am concerned about the notation. The set of all men is a set of substances. Mortality is a property. Thus, unless we define substances by their properties (which I don't think is wise), although H and M are both treated as sets in the formalism, they are in practice different kinds of thing. In this simple example, that doesn't make much difference, but my worry is that in more complicated arguments an assumption I don't want might sneak in via the formalism. I therefore need to be particularly careful when translating; it is a harder task than it would be for someone who does (for example) treat substances as bundles of properties. Of course, one can introduce another symbol to indicate that something exhibits a property (and then I would have to distinguish between essential, accidental and relational or conditional properties), but it will take extra effort to overcome this, and thus more chances for me to go wrong. So as well as translating the ideas into formal logic, I would have to create my own formalism adapted to my own philosophical principles. Better to stick with the method I know (expressing things in words), where it is harder to introduce an error of this type.
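For what it is worth, the predicate-style alternative mentioned above can be sketched in a proof assistant. Here is a minimal sketch in Lean (the names are my own illustrative choices), in which manhood and mortality are treated as predicates on beings rather than as sets of substances; the point is precisely that the choice of formalisation is itself a metaphysical commitment:

```lean
-- A sketch of the syllogism with Man and Mortal as predicates on a type
-- of beings, rather than as sets. Treating properties as predicates
-- builds the substances-versus-properties choice into the formalism.
variable (Being : Type) (Man Mortal : Being → Prop) (socrates : Being)

example (h1 : ∀ x, Man x → Mortal x) (h2 : Man socrates) : Mortal socrates :=
  h1 socrates h2
```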

5) Even if I did express the arguments in formal logic, it wouldn't be enough to give a mathematical and formal logical proof of God. Every logical argument is only as good as its premises, and people can always attack those premises. My argument can be summarised as follows: contemporary physics is consistent with a set of metaphysical premises, and inconsistent with various alternative metaphysics. The metaphysics it is inconsistent with is that which underlies contemporary atheism; the metaphysics it is consistent with directly implies the existence of God (and a certain understanding of ethics). I can get the physics right, and I can get the logic part right. Both of those are essentially mathematical/logical arguments, whose validity can be confirmed by checking them. But there is still that join in the middle. I don't think that join could be argued in terms of formal logic alone: it concerns which initial definitions we should lay down. People will still question whether there is an alternative metaphysical approach, consistent with physics, which disagrees with some of the premises needed for my arguments for God.

6) Having said that, yes formal logic does have its place. It is more precise than the wordy style I use, and easier to check the validity of the arguments. It would be worth doing; but it is not something I plan to do in the near future, and it is something I would have to take care over. As I said, my next task is to try to remove the formalism I do have to make the work more accessible. After that, there are a couple of other projects I have in mind. But after that? Maybe. I do think it worth doing; just not my highest priority.

6. Nigel Cundy
Posted at 23:43:20 Thursday July 11 2019


Unfortunately, I don't know anything about Roger Penrose's theory of the mind. I have read some of his works (the Emperor's New Mind, I think, and maybe others); but it was a long time ago and I've retained little of it. Sorry I can't be of more help.

7. Nigel Cundy
Posted at 18:09:26 Saturday July 13 2019

Father Garrigou-Lagrange on the PSR

“To deny the principle of sufficient reason is to affirm that a contingent being which exists, though not by itself, can be uncaused or unconditioned. Now, what is uncaused or unconditioned exists by itself. Therefore, an uncaused contingent being would at the same time exist by itself and not by itself—which is absurd. This is precisely what St. Thomas means when he says: "Whatever it is proper for a thing to have, but not from its nature, accrues to it from an extrinsic cause; for what has no cause, is first and immediate." (C. Gentes, Bk. II, ch. 15, § 2). What is uncaused must by itself and immediately be existence itself. If the unconditioned were not existence itself, there could be no possible connection between it and existence, and it could not be distinguished from nothing. An absolute beginning, a being originating from nothing without any cause, is, therefore, an absurdity, since it would be both contingent, that is to say, not caused by itself, and at the same time uncaused, unconditioned, non-relative, that is to say, absolute or caused by itself. Its existence would be its own a se and not a se. Therefore, between unconditioned or uncaused contingency there is a contradiction.”

Let's break the argument down:

1) If the PSR is false, then there exists a contingent being which is uncaused or unconditioned.

2) A contingent being is something which exists, but not by itself.

3) What is uncaused or unconditioned exists by itself.

4) Therefore there is no contingent being which is uncaused or unconditioned.

5) Therefore the PSR is true.

This chain of reasoning looks OK to me. Premise 1) serves as a definition of the PSR. Premise 2) serves as the definition of contingent. Premise 3) serves as the definition of being uncaused or unconditioned. Step 4 follows from 2) and 3), and step 5) from 1) and 4).

There are four main points to ponder. A) whether 1) serves as a good definition of the PSR; B) whether 2) serves as a good definition of contingent; C) whether 3) serves as a good definition of being uncaused or unconditioned; and D) whether a false dichotomy is raised between "exists by itself" and "exists by something else".

B) a contingent being is usually defined in contrast to a necessary being (something which exists essentially), i.e. something which can come in or out of existence, or whose existence is dependent on something else continually sustaining it. If saying "exists by itself" is a synonym for being a necessary being, then this looks reasonable as a definition of contingent.

C) "What is uncaused or unconditioned exists by itself" seems to define existence by itself as existing without having something else causing it or being conditional on something else, which is not the same as the definition in (B) without begging the question. It would be better to state "What is uncaused or unconditioned does not exist by something else."

A) the PSR states that everything has a sufficient reason for itself, i.e. either it is caused or contingent upon something else, or it exists essentially. To deny the PSR therefore implies that there is a being that does not exist essentially (i.e. is contingent) and is uncaused or unconditioned. This definition looks OK, with my amendment to C).

D) My concern is whether there is a false dichotomy between the definitions in B) and my modified C). We have three possibilities: a) something exists necessarily; b) something exists on account of something else; c) something exists, but neither of these; perhaps it pops into existence for no reason (if it had not come into existence and had always existed without being changed by something external to it, then it would qualify as a necessary being, so it must come into existence). To deny the possibility of (c) without any argumentation is just to beg the question. Clearly the person who denies the PSR will accept that (c) can happen. My concern is that the argument addresses (a) in (2), and (b) in (3), but does not seem to address (c).

Does the remaining part of the argument look at this? I particularly like this sentence "If the unconditioned were not existence itself, there could be no possible connection between it and existence." I might expand on this idea. If there is something which fits in with (c), then it can have no interaction with any other existent being as it comes into being. It might or might not interact with some other existent being. If it does not interact with anything after coming into existence, then clearly it does nothing in the universe, and may as well not have existed. [I take a short cut in my work by defining existence in terms of the possibility to interact with something else.] We could certainly never observe it directly or indirectly. That leaves something which pops into existence from nothing, spends a period of time without interacting with anything, and then starts interacting at a later time. In the period between its coming into existence and its first interaction, it has no effect on the universe; nothing can "know" about its existence (since to know that would require interacting with it, which runs against our assumption); and there is no gain in supposing that it is there. It would have made no difference if it didn't exist in that period, but only came into existence on its first interaction. But, if its first effect on the universe is in that first interaction, we cannot distinguish between the case that it came into existence then, and if it existed earlier. We lose nothing by assuming that it came into existence at that first interaction. If it did come into existence at that interaction, then we would say that it is caused by whatever it was that it interacted with. At that first interaction, it ceases to be a brute fact.

Thus I don't think that this argument represents an absolute proof for the PSR, since it can't rule out the possibility (c) without begging the question. But it might serve as part of a strong epistemic argument, in that if there is a brute fact, then we cannot know that there is one, no chain of reasoning can depend on its existence which would not work just as well if it wasn't there, and no chain of reasoning that denies the existence of the brute fact can be proved false.

I'm not sure that your argument is a restatement of Father Garrigou-Lagrange's, but it is also interesting. As you say, if something is a brute fact, then there is no reason why it would become that thing and not something else, and consequently no reason why its final causes would be limited. However, the probability would only be zero if a) we assumed that the probability that it could turn into anything else is equal, and b) there is an infinite number of consistent things it could turn into. I'm not completely convinced about (a) (for example, one can argue that the distribution ought to be a Poisson distribution counting the number of quarks or electrons in the decay product -- rather than every thing being of equal probability, every quark is of equal probability). (b) strikes me as problematic. You might think "We can always add one more atom," but eventually the force of gravity will become so strong that the thing collapses in on itself into a black hole.
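The difference between these two assumptions about the probability measure can be made concrete with a toy calculation (a sketch only: the uniform and Poisson distributions here are illustrative stand-ins of my own choosing, not derived from any physics):

```python
from math import exp

def uniform_prob(upper, cutoff=1000):
    """P(X <= cutoff) for X uniform on the integers 0, 1, ..., upper."""
    return min(cutoff + 1, upper + 1) / (upper + 1)

def poisson_prob(mean, cutoff=1000):
    """P(X <= cutoff) for X Poisson-distributed with the given mean."""
    term = exp(-mean)      # P(X = 0)
    total = term
    for k in range(1, cutoff + 1):
        term *= mean / k   # P(X = k) from P(X = k - 1)
        total += term
    return total

# Under a uniform distribution, the probability of landing in any fixed
# finite range shrinks towards zero as the space of possibilities grows:
for upper in (10**4, 10**6, 10**8):
    print(uniform_prob(upper))

# Under a Poisson distribution (e.g. counting decay products), the same
# probability stays finite however many outcomes are logically possible:
print(poisson_prob(5.0))
```

Under the uniform assumption any fixed finite range of outcomes becomes arbitrarily improbable as the outcome space grows without bound, which is the heart of the zero-probability argument; under a Poisson-style assumption the probability remains essentially unchanged no matter how many outcomes are possible, which is why assumption (a) matters so much.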

Post Comment:

Some HTML formatting is supported, such as <b> ... </b> for bold text, <em> ... </em> for italics, and <blockquote> ... </blockquote> for a quotation
All fields are optional
Comments are generally unmoderated, and only represent the views of the person who posted them.
I reserve the right to delete or edit spam messages, obscene language, or personal attacks.
However, that I do not delete such a message does not mean that I approve of the content.
It just means that I am a lazy little bugger who can't be bothered to police his own blog.

What is 4×7+15?