The puzzle of understanding what living things are and how they came about has given rise to a strange history of misconceptions, near-misses and ironies. The last of the ironies is that the neo-Darwinian theory, like the Popperian theory of knowledge, really does describe creation, while their rivals, beginning with creationism, never could.
Terminology

Evolution (Darwinian)
Creation of knowledge through alternating variation and selection.
Replicator
An entity that contributes causally to its own copying.
Neo-Darwinism
Darwinism as a theory of replicators, without various misconceptions such as ‘survival of the fittest’.
Meme
An idea that is a replicator.
Memeplex
A group of memes that help to cause each other’s replication.
Spontaneous generation
Formation of organisms from non-living precursors.
Lamarckism
A mistaken evolutionary theory based on the idea that biological adaptations are improvements acquired by an organism during its lifetime and then inherited by its descendants.
Fine-tuning
If the constants or laws of physics were slightly different, there would be no life.
Anthropic explanation
‘It is only in universes that contain intelligent observers that anyone wonders why the phenomenon in question happens.’

Meanings of ‘the beginning of infinity’ encountered in this chapter

– Evolution.
– More generally, the creation of knowledge.
Summary

The evolution of biological adaptations and the creation of human knowledge share deep similarities, but also some important differences. The main similarities: genes and ideas are both replicators; knowledge and adaptations are both hard to vary. The main difference: human knowledge can be explanatory and can have great reach; adaptations are never explanatory and rarely have much reach beyond the situations in which they evolved. False explanations of biological evolution have counterparts in false explanations of the growth of human knowledge. For instance, Lamarckism is the counterpart of inductivism. William Paley’s version of the argument from design clarified what does or does not have the ‘appearance of design’ and hence what cannot be explained as the outcome of chance alone – namely hard-to-vary adaptation to a purpose. The origin of this must be the creation of knowledge. Biological evolution does not optimize benefits to the species, the group, the individual or even the gene, but only the ability of the gene to spread through the population. Such benefits can nevertheless happen because of the universality of laws of nature and the reach of some of the knowledge that is created. The ‘fine-tuning’ of the laws or constants of physics has been used as a modern form of the argument from design. For the usual reasons, it is not a good argument for a supernatural cause. But ‘anthropic’ theories that try to account for it as a pure selection effect from an infinite number of different universes are, by themselves, bad explanations too – in part because most logically possible laws are themselves bad explanations.
5 The Reality of Abstractions

The fundamental theories of modern physics explain the world in jarringly counter-intuitive ways. For example, most non-physicists consider it self-evident that when you hold your arm out horizontally you can feel the force of gravity pulling it downwards. But you cannot. The existence of a force of gravity is, astonishingly, denied by Einstein’s general theory of relativity, one of the two deepest theories of physics. This says that the only force on your arm in that situation is that which you yourself are exerting, upwards, to keep it constantly accelerating away from the straightest possible path in a curved region of spacetime. The reality described by our other deepest theory, quantum theory, which I shall describe in Chapter 11, is even more counter-intuitive. To understand explanations like those, physicists have to learn to think about everyday events in new ways.
The guiding principle is, as always, to reject bad explanations in favour of good ones. In regard to what is or is not real, this leads to the requirement that, if an entity is referred to by our best explanation in the relevant field, we must regard it as really existing. And if, as with the force of gravity, our best explanation denies that it exists, then we must stop assuming that it does.
Furthermore, everyday events are stupendously complex when expressed in terms of fundamental physics. If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do – even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task.
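To get a sense of the scale of that claim, here is a rough, assumption-laden order-of-magnitude check – my own illustrative arithmetic, not a calculation from the book. The fleet of machines, their speed and the one-operation-per-molecule-per-step cost are all loudly optimistic assumptions:

```python
# Could all the supercomputers on Earth, running for the age of the
# universe, track every molecule in a kettle? A rough estimate.

AVOGADRO = 6.022e23            # molecules per mole
MOLAR_MASS_WATER_G = 18.0      # grams per mole
KETTLE_WATER_G = 1500.0        # assumed: ~1.5 litres of water

molecules = KETTLE_WATER_G / MOLAR_MASS_WATER_G * AVOGADRO  # ~5e25

# Assumed computing budget: 10,000 exascale machines (1e18 ops/s each)
# running for the age of the universe (~4.35e17 seconds).
ops_available = 1e4 * 1e18 * 4.35e17                        # ~4e39

# Molecular dynamics needs femtosecond time steps; boiling takes
# minutes. Charge one operation per molecule per step - wildly
# optimistic, since each molecule interacts with many neighbours.
time_steps = 200.0 / 1e-15                                  # ~2e17
ops_needed = molecules * time_steps                         # ~1e43

print(f"molecules in the kettle: {molecules:.0e}")
print(f"operations needed      : {ops_needed:.0e}")
print(f"operations available   : {ops_available:.0e}")
print(f"shortfall factor       : {ops_needed / ops_available:.0e}")
```

Even with every assumption tilted in the computer’s favour the budget falls short by orders of magnitude – and this charges nothing for the intractable task, mentioned above, of measuring the initial state.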
Fortunately, some of that complexity resolves itself into a higher-level simplicity. For example, we can predict with some accuracy how long the water will take to boil. To do so, we need know only a few physical quantities that are quite easy to measure, such as its mass, the power of the heating element, and so on. For greater accuracy we may also need information about subtler properties, such as the number and type of nucleation sites for bubbles. But those are still relatively ‘high-level’ phenomena, composed of intractably large numbers of interacting atomic-level phenomena. Thus there is a class of high-level phenomena – including the liquidity of water and the relationship between containers, heating elements, boiling and bubbles – that can be well explained in terms of each other alone, with no direct reference to anything at the atomic level or below. In other words, the behaviour of that whole class of high-level phenomena is quasi-autonomous – almost self-contained. This resolution into explicability at a higher, quasi-autonomous level is known as emergence.
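The boiling-time prediction shows how little the high-level explanation needs to know. A minimal sketch, assuming an idealized kettle – the specific parameters and the heat-loss factor below are illustrative assumptions, not figures from the text:

```python
SPECIFIC_HEAT_WATER = 4186.0   # joules per kilogram per kelvin

def time_to_boil(mass_kg, power_watts, start_temp_c=20.0, efficiency=0.85):
    """Estimate seconds until boiling using only bulk quantities -
    mass, power, starting temperature - with no reference to any
    individual molecule. The efficiency factor is an assumed catch-all
    for heat lost to the kettle body and the surrounding air."""
    energy_needed_j = mass_kg * SPECIFIC_HEAT_WATER * (100.0 - start_temp_c)
    return energy_needed_j / (power_watts * efficiency)

# e.g. 1.5 kg of water in a 2 kW kettle: roughly five minutes.
print(f"{time_to_boil(1.5, 2000.0):.0f} seconds")
```

Everything in that calculation lives at the quasi-autonomous level: masses, powers and temperatures, never molecules.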
Emergent phenomena are a tiny minority. We can predict when the water will boil, and that bubbles will form when it does, but if you wanted to predict where each bubble will go (or, to be precise, what the probabilities of its various possible motions are – see Chapter 11), you would be out of luck. Still less is it feasible to predict the countless microscopically defined properties of the water, such as whether an odd or an even number of its electrons will be affected by the heating during a given period.
Fortunately, we are uninterested in predicting or explaining most of those properties, despite the fact that they are the overwhelming majority. That is because none of them has any bearing on what we want to do with the water – such as understand what it is made of, or make tea. To make tea, we want the water to be boiling, but we do not care what the pattern of bubbles was. We want its volume to be between a certain minimum and maximum, but we do not care how many molecules that is. We can make progress in achieving those purposes because we can express them in terms of those quasi-autonomous emergent properties about which we have good high-level explanations. Nor do we need most of the microscopic details in order to understand the role of water in the cosmic scheme of things, because nearly all of those details are parochial.
The behaviour of high-level physical quantities consists of nothing but the behaviour of their low-level constituents with most of the details ignored. This has given rise to a widespread misconception about emergence and explanation, known as reductionism: the doctrine that science always explains and predicts things reductively, i.e. by analysing them into components. Often it does, as when we use the fact that inter-atomic forces obey the law of conservation of energy to make and explain a high-level prediction that the kettle cannot boil water without a power supply. But reductionism requires the relationship between different levels of explanation always to be like that, and often it is not. For example, as I wrote in The Fabric of Reality:
Consider one particular copper atom at the tip of the nose of the statue of Sir Winston Churchill that stands in Parliament Square in London. Let me try to explain why that copper atom is there. It is because Churchill served as prime minister in the House of Commons nearby; and because his ideas and leadership contributed to the Allied victory in the Second World War; and because it is customary to honour such people by putting up statues of them; and because bronze, a traditional material for such statues, contains copper, and so on. Thus we explain a low-level physical observation – the presence of a copper atom at a particular location – through extremely high-level theories about emergent phenomena such as ideas, leadership, war and tradition.
There is no reason why there should exist, even in principle, any lower-level explanation of the presence of that copper atom than the one I have just given. Presumably a reductive ‘theory of everything’ would in principle make a low-level prediction of the probability that such a statue will exist, given the condition of (say) the solar system at some earlier date. It would also in principle describe how the statue probably got there. But such descriptions and predictions (wildly infeasible, of course) would explain nothing. They would merely describe the trajectory that each copper atom followed from the copper mine, through the smelter and the sculptor’s studio and so on . . . In fact such a prediction would have to refer to atoms all over the planet, engaged in the complex motion we call the Second World War, among other things. But even if you had the superhuman capacity to follow such lengthy predictions of the copper atom’s being there, you would still not be able to say ‘Ah yes, now I understand why they are there’. [You] would have to inquire into what it was about that configuration of atoms, and those trajectories, that gave them the propensity to deposit a copper atom at this location. Pursuing that inquiry would be a creative task, as discovering new explanations always is. You would have to discover that certain atomic configurations support emergent phenomena such as leadership and war, which are related to one another by high-level explanatory theories. Only when you knew those theories could you understand why that copper atom is where it is.
Even in physics, some of the most fundamental explanations, and the predictions that they make, are not reductive. For instance, the second law of thermodynamics says that high-level physical processes tend towards ever greater disorder. A scrambled egg never becomes unscrambled by the whisk, and never extracts energy from the pan to propel itself upwards into the shell, which never seamlessly reseals itself. Yet, if you could somehow make a video of the scrambling process with enough resolution to see the individual molecules, and play it backwards, and examine any part of it at that scale, you would see nothing but molecules moving and colliding in strict obedience to the low-level laws of physics. It is not yet known how, or whether, the second law of thermodynamics can be derived from a simple statement about individual atoms.
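To make that contrast concrete, here is a toy illustration – my sketch, not an argument from the book: non-interacting particles in a box whose microscopic rule of motion is time-reversible, yet whose coarse-grained description, the fraction of particles in the left half, relaxes irreversibly towards one half and stays there.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.uniform(0.0, 0.5, N)    # every particle starts in the left half
v = rng.normal(0.0, 1.0, N)     # random velocities

def step(x, v, dt=0.01):
    """Free flight plus elastic reflection off walls at 0 and 1.
    The rule is time-reversible (up to rounding): negate every
    velocity and the system retraces its history."""
    x = x + v * dt
    hi, lo = x > 1.0, x < 0.0
    x[hi], v[hi] = 2.0 - x[hi], -v[hi]
    x[lo], v[lo] = -x[lo], -v[lo]
    return x, v

for t in range(601):
    if t % 200 == 0:
        left = (x < 0.5).mean()   # the coarse-grained macrostate
        print(f"step {t:3d}: fraction in left half = {left:.3f}")
    x, v = step(x, v)
```

Watching any single particle, nothing distinguishes forwards from backwards; the one-way behaviour exists only at the level of the coarse-grained count.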
There is no reason why it should be. There is often a moral overtone to reductionism (science should be essentially reductive). This is related both to instrumentalism and to the Principle of Mediocrity, which I criticized in Chapters 1 and 3. Instrumentalism is rather like reductionism except that, instead of rejecting only high-level explanations, it tries to reject all explanations. The Principle of Mediocrity is a milder form of reductionism: it rejects only high-level explanations that involve people. While I am on the subject of bad philosophical doctrines with moral overtones, let me add holism, a sort of mirror image of reductionism. It is the idea that the only valid explanations (or at least the only significant ones) are of parts in terms of wholes. Holists also often share with reductionists the mistaken belief that science can only (or should only) be reductive, and therefore they oppose much of science. All those doctrines are irrational for the same reason: they advocate accepting or rejecting theories on grounds other than whether they are good explanations.
Whenever a high-level explanation does follow logically from low-level ones, that also means that the high-level one implies something about the low-level ones. Thus, additional high-level theories, provided that they were all consistent, would place more and more constraints on what the low-level theories could be. So it could be that all the high-level explanations that exist, taken together, imply all the low-level ones, as well as vice versa. Or it could be that some low-level, some intermediate-level and some high-level explanations, taken together, imply all explanations. I guess that that is so.
Thus, one possible way that the fine-tuning problem might eventually be solved would be if some high-level explanations turned out to be exact laws of nature. The microscopic consequences of that might well seem to be fine-tuned. One candidate is the principle of the universality of computation, which I shall discuss in the next chapter. Another is the principle of testability, for, in a world in which the laws of physics do not permit the existence of testers, they also forbid themselves to be tested. However, in their current form such principles, regarded as laws of physics, are anthropocentric and arbitrary – and would therefore be bad explanations. But perhaps there are deeper versions, to which they are approximations, which are good explanations, well integrated with those of microscopic physics, as the second law of thermodynamics is.
In any case, emergent phenomena are essential to the explicability of the world. Long before humans had much explanatory knowledge, they were able to control nature by using rules of thumb. Rules of thumb have explanations, and those explanations were about high-level regularities among emergent phenomena such as fire and rocks. Long before that, it was only genes that were encoding rules of thumb, and the knowledge in them, too, was about emergent phenomena. Thus emergence is another beginning of infinity: all knowledge-creation depends on, and physically consists of, emergent phenomena.