The Beginning of Infinity: Explanations That Transform the World
David Deutsch
The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote . . . Our future discoveries must be looked for in the sixth place of decimals.
Albert Michelson, address at the opening of the Ryerson Physical Laboratory, University of Chicago, 1894
What exactly was Michelson doing when he judged that there was only an ‘exceedingly remote’ chance that the foundations of physics as he knew them would ever be superseded? He was prophesying the future. How? On the basis of the best knowledge available at the time. But that consisted of the physics of 1894! Powerful and accurate though it was in countless applications, it was not capable of predicting the content of its successors. It was poorly suited even to imagining the changes that relativity and quantum theory would bring – which is why the physicists who did imagine them won Nobel prizes. Michelson would not have put the expansion of the universe, or the existence of parallel universes, or the non-existence of the force of gravity, on any list of possible discoveries whose probability was ‘exceedingly remote’. He just didn’t conceive of them at all.
A century earlier, the mathematician Joseph-Louis Lagrange had remarked that Isaac Newton had not only been the greatest genius who ever lived, but also the luckiest, for ‘the system of the world can be discovered only once.’ Lagrange would never know that some of his own work, which he had regarded as a mere translation of Newton’s into a more elegant mathematical language, was a step towards the replacement of Newton’s ‘system of the world’. Michelson did live to see a series of discoveries that spectacularly refuted the physics of 1894, and with it his own prophecy.
Like Lagrange, Michelson himself had already contributed unwittingly to the new system – in this case with an experimental result. In 1887 he and his colleague Edward Morley had observed that the speed of light relative to an observer remains constant when the observer moves. This astoundingly counter-intuitive fact later became the centrepiece of Einstein’s special theory of relativity. But Michelson and Morley did not realize that that was what they had observed. Observations are theory-laden. Given an experimental oddity, we have no way of predicting whether it will eventually be explained merely by correcting a minor parochial assumption or by revolutionizing entire sciences. We can know that only after we have seen it in the light of a new explanation. In the meantime we have no option but to see the world through our best existing explanations – which include our existing misconceptions. And that biases our intuition. Among other things, it inhibits us from conceiving of significant changes.
When the determinants of future events are unknowable, how should one prepare for them? How can one? Given that some of those determinants are beyond the reach of scientific prediction, what is the right philosophy of the unknown future? What is the rational approach to the unknowable – to the inconceivable? That is the subject of this chapter.
The terms ‘optimism’ and ‘pessimism’ have always been about the unknowable, but they did not originally refer especially to the future, as they do today. Originally, ‘optimism’ was the doctrine that the world – past, present and future – is as good as it could possibly be. The term was first used to describe an argument of Leibniz (1646–1716) that God, being ‘perfect’, would have created nothing less than ‘the best of all possible worlds’. Leibniz believed that this idea solved the ‘problem of evil’, which I mentioned in Chapter 4: he proposed that all apparent evils in the world are outweighed by good consequences that are too remote to be known. Similarly, all apparently good events that fail to happen – including all improvements that humans are unsuccessful in achieving – fail because they would have had bad consequences that would have outweighed the good.
Since consequences are determined by the laws of physics, the larger part of Leibniz’s claim must be that the laws of physics are the best possible too. Alternative laws that made scientific progress easier, or made disease an impossible phenomenon, or made even one disease slightly less unpleasant – in short, any alternative that would seem to be an improvement upon our actual history with all its plagues, tortures, tyrannies and natural disasters – would in fact have been even worse on balance, according to Leibniz.
That theory is a spectacularly bad explanation. Not only can any observed sequence of events be explained as ‘best’ by that method; an alternative Leibniz could equally well have claimed that we live in the worst of all possible worlds, and that every good event is necessary in order to prevent something even better from happening. Indeed, some philosophers, such as Arthur Schopenhauer, have claimed just that. Their stance is called philosophical ‘pessimism’. Or one could claim that the world is exactly halfway between the best possible and the worst possible – and so on. Notice that, despite their superficial differences, all those theories have something important in common: if any of them were true, rational thought would have almost no power to discover true explanations. For, since we can always imagine states of affairs that seem better than what we observe, we would always be mistaken in judging that they were better, no matter how good our explanations were. So, in such a world, the true explanations of events are never even imaginable. For instance, in Leibniz’s ‘optimistic’ world, whenever we try to solve a problem and fail, it is because we have been thwarted by an unimaginably vast intelligence that determined that it was best for us to fail. And, still worse, whenever someone rejects reason and decides instead to rely on bad explanations or logical fallacies – or, for that matter, on pure malevolence – they still achieve, in every case, a better outcome on balance than the most rational and benevolent thought possibly could have. This does not describe an explicable world. And that would be very bad news for us, its inhabitants. Both the original ‘optimism’ and the original ‘pessimism’ are close to pure pessimism as I shall define it.
In everyday usage, it is commonly said that ‘an optimist calls a glass half full while a pessimist calls it half empty’. But those attitudes are not what I am referring to either: they are matters not of philosophy but of psychology – more ‘spin’ than substance. The terms can also refer to moods, such as cheerfulness or depression, but, again, moods do not necessitate any particular stance about the future: the statesman Winston Churchill suffered from intense depression, yet his outlook on the future of civilization, and his specific expectations as wartime leader, were unusually positive. Conversely, the economist Thomas Malthus, a notorious prophet of doom (of whom more below), is said to have been a serene and happy fellow, who often had his companions at the dinner table in gales of laughter.
Blind optimism is a stance towards the future. It consists of proceeding as if one knows that the bad outcomes will not happen. The opposite approach, blind pessimism, often called the precautionary principle, seeks to ward off disaster by avoiding everything not known to be safe. No one seriously advocates either of these two as a universal policy, but their assumptions and their arguments are common, and often creep into people’s planning.
Blind optimism is also known as ‘overconfidence’ or ‘recklessness’. An often-cited example, perhaps unfairly, is the judgement of the builders of the ocean liner Titanic that it was ‘practically unsinkable’. The largest ship of its day, it sank on its maiden voyage in 1912. Designed to survive every foreseeable disaster, it collided with an iceberg in a manner that had not been foreseen. A blind pessimist argues that there is an inherent asymmetry between good and bad consequences: a successful maiden voyage cannot possibly do as much good as a disastrous one can do harm. As Rees points out, a single catastrophic consequence of an otherwise beneficial innovation could put an end to human progress for ever. So the blindly pessimistic approach to building ocean liners is to stick with existing designs and refrain from attempting any records.
But blind pessimism is a blindly optimistic doctrine. It assumes that unforeseen disastrous consequences cannot follow from existing knowledge too (or, rather, from existing ignorance). Not all shipwrecks happen to record-breaking ships. Not all unforeseen physical disasters need be caused by physics experiments or new technology. But one thing we do know is that protecting ourselves from any disaster, foreseeable or not, or recovering from it once it has happened, requires knowledge; and knowledge has to be created. The harm that can flow from any innovation that does not destroy the growth of knowledge is always finite; the good can be unlimited. There would be no existing ship designs to stick with, nor records to stay within, if no one had ever violated the precautionary principle.
Because pessimism needs to counter that argument in order to be at all persuasive, a recurring theme in pessimistic theories throughout history has been that an exceptionally dangerous moment is imminent. Rees’s Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle.
More generally, what they lacked was a certain combination of abstract knowledge and knowledge embodied in technological artefacts, namely sufficient wealth. Let me define that in a non-parochial way as the repertoire of physical transformations that they would be capable of causing.
An example of a blindly pessimistic policy is that of trying to make our planet as unobtrusive as possible in the galaxy, for fear of contact with extraterrestrial civilizations. Stephen Hawking recently advised this, in his television series Into the Universe. He argued, ‘If [extraterrestrials] ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn’t turn out very well for the Native Americans.’ He warned that there might be nomadic, space-dwelling civilizations who would strip the Earth of its resources, or imperialist civilizations who would colonize it. The science-fiction author Greg Bear has written some exciting novels based on the premise that the galaxy is full of civilizations that are either predators or prey, and in both cases are hiding. This would solve the mystery of Fermi’s problem. But it is implausible as a serious explanation. For one thing, it depends on civilizations becoming convinced of the existence of predator civilizations in space, and totally reorganizing themselves in order to hide from them, before being noticed – which means before they have even invented, say, radio.
Hawking’s proposal also overlooks various dangers of not making our existence known to the galaxy, such as being inadvertently wiped out if benign civilizations send robots to our solar system, perhaps to mine what they consider an uninhabited system. And it rests on other misconceptions in addition to that classic flaw of blind pessimism. One is the Spaceship Earth idea on a larger scale: the assumption that progress in a hypothetical rapacious civilization is limited by raw materials rather than by knowledge. What exactly would it come to steal? Gold? Oil? Perhaps our planet’s water? Surely not, since any civilization capable of transporting itself here, or raw materials back, across galactic distances must already have cheap transmutation and hence does not care about the chemical composition of its raw materials. So essentially the only resource of use to it in our solar system would be the sheer mass of matter in the sun. But matter is available in every star. Perhaps it is collecting entire stars wholesale in order to make a giant black hole as part of some titanic engineering project. But in that case it would cost it virtually nothing to omit inhabited solar systems (which are presumably a small minority – otherwise it is pointless for us to hide in any case); so would it casually wipe out billions of people? Would we seem like insects to it? This can seem plausible only if one forgets that there can be only one type of person: universal explainers and constructors. The idea that there could be beings that are to us as we are to animals is a belief in the supernatural.
Moreover, there is only one way of making progress: conjecture and criticism. And the only moral values that permit sustained progress are the objective values that the Enlightenment has begun to discover. No doubt the extraterrestrials’ morality is different from ours; but that will not be because it resembles that of the conquistadors. Nor would we be in serious danger of culture shock from contact with an advanced civilization: it will know how to educate its own children (or AIs), so it will know how to educate us – and, in particular, to teach us how to use its computers.
A further misconception is Hawking’s analogy between our civilization and pre-Enlightenment civilizations: as I shall explain in Chapter 15, there is a qualitative difference between those two types of civilization. Culture shock need not be dangerous to a post-Enlightenment one.
As we look back on the failed civilizations of the past, we can see that they were so poor, their technology was so feeble, and their explanations of the world so fragmentary and full of misconceptions that their caution about innovation and progress was as perverse as expecting a blindfold to be useful when navigating dangerous waters. Pessimists believe that the present state of our own civilization is an exception to that pattern. But what does the precautionary principle say about that claim? Can we be sure that our present knowledge, too, is not riddled with dangerous gaps and misconceptions? That our present wealth is not pathetically inadequate to deal with unforeseen problems? Since we cannot be sure, would not the precautionary principle require us to confine ourselves to the policy that would always have been salutary in the past – namely innovation and, in emergencies, even blind optimism about the benefits of new knowledge?