
Now for the second reason that undecidability has limited practical effects: it turns out that we can often do a good job of solving undecidable problems most of the time.
The main example of the current chapter is an excellent illustration of this. We followed an elaborate proof showing that no computer program can ever be capable of finding all the bugs in all computer programs. But we can still try to write a crash-finding program, hoping to make it find most of the bugs in most types of computer programs. This is, indeed, a very active area of research in computer science. The improvements we've seen in software reliability over the last few decades are partly due to the advances made in crash-finding programs. Thus, it is often possible to produce very useful partial solutions to undecidable problems.
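
To make the idea of a useful partial solution concrete, here is a minimal Python sketch, entirely invented for illustration and nothing like a real crash-finding tool: it reports a crash only when it spots an obvious one, and otherwise admits that it does not know. Undecidability forces that third answer to exist, but the tool can still be helpful on the cases it does recognize.

    # A toy "partial" crash checker: it answers CRASHES only for problems it can
    # spot directly, and otherwise says UNKNOWN. A genuine crash-finder uses far
    # more sophisticated analysis, but it faces the same three-way choice.

    def toy_crash_check(source: str) -> str:
        """Inspect a piece of Python source and return 'CRASHES' or 'UNKNOWN'."""
        if "1 / 0" in source:
            return "CRASHES"              # an unmistakable division by zero
        try:
            compile(source, "<candidate>", "exec")
        except SyntaxError:
            return "CRASHES"              # code that will not even parse cannot run
        return "UNKNOWN"                  # no verdict for anything this toy cannot recognize

    print(toy_crash_check("x = 1 / 0"))      # CRASHES
    print(toy_crash_check("print('hi')"))    # UNKNOWN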

Undecidability and the Brain

Does the existence of undecidable problems have implications for human thought processes? This question leads directly to the murky depths of some classic problems in philosophy, such as the definition of consciousness and the distinction between mind and brain. Nevertheless, we can be clear about one thing: if you believe that the human brain could, in principle, be simulated by a computer, then the brain is subject to the same limitations as computers. In other words, there would be problems that no human brain could solve—however intelligent or well-trained that brain might be. This conclusion follows immediately from the main result in this chapter. If the brain can be imitated by a computer program, and the brain can solve undecidable problems, then we could use a computer simulation of the brain to solve the undecidable problems also—contradicting the fact that computer programs cannot solve undecidable problems.

Of course, the question of whether we will ever be able to perform accurate computer simulations of the brain is far from settled. From a scientific point of view, there do not seem to be any fundamental barriers, since the low-level details of how chemical and electrical signals are transmitted in the brain are reasonably well understood. On the other hand, various philosophical arguments suggest that somehow the physical processes of the brain create a “mind” that is qualitatively different from any physical system that could be simulated by computer. These philosophical arguments take many forms and can be based, for example, on our own capacity for self-reflection and intuition, or an appeal to spirituality.

There is a fascinating connection here to Alan Turing's 1937 paper on undecidability—a paper that is regarded by many as the foundation of computer science as a discipline. The paper's title is, unfortunately, rather obscure: it begins with the innocuous-sounding phrase “On computable numbers…” but ends with the jarring “…with an application to the Entscheidungsproblem.” (We won't be concerning ourselves with the second part of the title here!) It is crucial to realize that in the 1930s, the word “computer” had a completely different meaning, compared to the way we use it today. For Turing, a “computer” was a human, doing some kind of calculation using a pencil and paper. Thus, the “computable numbers” in the title of Turing's paper are the numbers that could, in principle, be calculated by a human. But to assist his argument, Turing describes a particular type of machine (for Turing, a “machine” is what we would call a “computer” today) that can also do calculations. Part of the paper is devoted to demonstrating that certain calculations cannot be performed by these machines—this is the proof of undecidability, which we have discussed in detail already. But another part of the same paper makes a detailed and compelling argument that Turing's “machine” (read: computer) can perform any calculation done by a “computer” (read: human).

You may be beginning to appreciate why it is difficult to overstate the seminal nature of Turing's “On computable numbers” paper. It not only defines and solves some of the most fundamental problems in computer science, but also strikes out into the heart of a philosophical minefield, making a persuasive case that human thought processes could be emulated by computers (which, remember, had not been invented yet!). In modern philosophical parlance, this notion—that all computers, and probably humans too, have equivalent computational power—is known as the Church-Turing thesis. The name acknowledges both Alan Turing and Alonzo Church, who (as mentioned earlier) independently discovered the existence of undecidable problems. In fact, Church published his work a few months before Turing, but Church's formulation is more abstract and does not explicitly mention computation by machines.

The debate over the validity of the Church-Turing thesis rages on. But if its strongest version holds, then our computers aren't the only ones humbled by the limits of undecidability. The same limits would apply not only to the genius at our fingertips, but also to the genius behind them: our own minds.

11

Conclusion: More Genius at Your Fingertips?

We can only see a short distance ahead, but we can see plenty there that needs to be done.

—Alan Turing, Computing Machinery and Intelligence, 1950

I was fortunate, in 1991, to attend a public lecture by the great theoretical physicist Stephen Hawking. During the lecture, which was boldly titled “The Future of the Universe,” Hawking confidently predicted that the universe would keep expanding for at least the next 10 billion years. He wryly added, “I don't expect to be around to be proved wrong.” Unfortunately for me, predictions about computer science do not come with the same 10-billion-year insurance policy that is available to cosmologists. Any predictions I make may well be disproved during my own lifetime.

But that shouldn't stop us thinking about the future of the great ideas of computer science. Will the great algorithms we've explored remain “great” forever? Will some become obsolete? Will new great algorithms emerge? To address these questions, we need to think less like a cosmologist and more like a historian. This brings to mind another experience I had many years ago, watching some televised lectures by the acclaimed, if controversial, Oxford historian A. J. P. Taylor. At the end of the lecture series, Taylor directly addressed the question of whether there would ever be a third world war. He thought the answer was yes, because humans would probably “behave in the future as they have done in the past.”

So let's follow A. J. P. Taylor's lead and bow to the broad sweep of history. The great algorithms described in this book arose from incidents and inventions sprinkled throughout the 20th century. It seems reasonable to assume a similar pace for the 21st century, with a major new set of algorithms coming to the fore every two or three decades. In some cases, these algorithms could be stunningly original, completely new techniques dreamed up by scientists. Public key cryptography and the related digital signature algorithms are examples of this. In other cases, the algorithms may have existed in the research community for some time, waiting in the wings for the right wave of new technology to give them wide applicability. The search algorithms for indexing and ranking fall into this category: similar algorithms had existed for years in the field known as information retrieval, but it took the phenomenon of web search to make these algorithms “great,” in the sense of daily use by ordinary computer users. Of course, the algorithms also evolved for their new application; PageRank is a good example of this.

Note that the emergence of new technology does not necessarily lead to new algorithms. Consider the phenomenal growth of laptop computers over the 1980s and 1990s. Laptops revolutionized the way people use computers, by vastly increasing accessibility and portability. And laptops also motivated hugely important advances in such diverse areas as screen technology and power management techniques. But I would argue that no great algorithms emerged from the laptop revolution. In contrast, the emergence of the internet is a technology that did lead to great algorithms: by providing an infrastructure in which search engines could exist, the internet allowed indexing and ranking algorithms to evolve toward greatness.

Therefore, the undoubted acceleration of technology growth that continues to unfold around us does not, in and of itself, guarantee the emergence of new great algorithms. In fact, there is a powerful historical force operating in the other direction, suggesting that the pace of algorithmic innovation will, if anything, decrease in the future. I'm referring to the fact that computer science is beginning to mature as a scientific discipline. Compared to fields such as physics, mathematics, and chemistry, computer science is very young: it has its beginnings in the 1930s. Arguably, therefore, the great algorithms discovered in the 20th century may have consisted of low-hanging fruit, and it will become more and more difficult to find ingenious, widely applicable algorithms in the future.

So we have two competing effects: new niches provided by new technology occasionally provide scope for new algorithms, while the increasing maturity of the field narrows the opportunities. On balance, I tend to think that these two effects will cancel each other out, leading to a slow but steady emergence of new great algorithms in the years ahead.

SOME POTENTIALLY GREAT ALGORITHMS

Of course, some of these new algorithms will be completely unexpected, and it's impossible to say anything more about them here. But there are existing niches and techniques that have clear potential. One of the obvious trends is the increasing use of artificial intelligence (and, in particular, pattern recognition) in everyday contexts, and it will be fascinating to see if any strikingly novel algorithmic gems emerge in this area.

Another fertile area is a class of algorithms known as “zero-knowledge protocols.” These protocols use a special type of cryptography to achieve something even more surprising than a digital signature: they let two or more entities combine information without revealing any of the individual pieces of information. One potential application is for online auctions. Using a zero-knowledge protocol, the bidders can cryptographically submit their bids to each other in such a way that the winning bidder is determined, but no information about the other bids is revealed to anyone! Zero-knowledge protocols are such a clever idea that they would easily make it into my canon of great algorithms, if only they were used in practice. But so far, they haven't achieved widespread use.
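
As a taste of the cryptographic machinery involved, here is a minimal Python sketch of a commit-then-reveal sealed-bid auction. To be clear, this is not a zero-knowledge protocol (a real one would determine the winner without the losing bids ever being revealed); it only shows how a bid can be locked in cryptographically before anyone learns its value. The bidder names, amounts, and helper functions are invented for the example.

    # Commit-then-reveal sealed bids (NOT a zero-knowledge protocol: bids are
    # eventually revealed). A commitment hides the bid, yet pins it down so the
    # bidder cannot change it later.
    import hashlib
    import os

    def commit(bid: int) -> tuple[bytes, bytes]:
        """Return (commitment, secret); the commitment can be published at once."""
        secret = os.urandom(16)
        commitment = hashlib.sha256(secret + bid.to_bytes(8, "big")).digest()
        return commitment, secret

    def verify(commitment: bytes, secret: bytes, bid: int) -> bool:
        """Check that a revealed (secret, bid) pair matches the earlier commitment."""
        return hashlib.sha256(secret + bid.to_bytes(8, "big")).digest() == commitment

    # Both bidders publish commitments first, then reveal and compare.
    alice_commit, alice_secret = commit(120)
    bob_commit, bob_secret = commit(95)
    assert verify(alice_commit, alice_secret, 120)
    assert verify(bob_commit, bob_secret, 95)
    print("Alice wins" if 120 > 95 else "Bob wins")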

Another idea that has received an immense amount of academic research but limited practical use is a technique known as “distributed hash tables.” These tables are an ingenious way of storing the information in a peer-to-peer system—a system that has no central server directing the flow of information. At the time of writing, however, many of the systems that claim to be peer-to-peer in fact use central servers for some of their functionality and thus do not need to rely on distributed hash tables.
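
The following minimal Python sketch shows one ingredient of a distributed hash table: the consistent-hashing trick that maps each key to a responsible peer without any central server keeping a directory. The peer names and keys are made up, and real systems such as Chord or Kademlia add routing tables, replication, and handling of peers joining and leaving.

    # Consistent hashing: keys and peers are hashed onto the same ring, and each
    # key is stored on the first peer at or after its position (wrapping around).
    import bisect
    import hashlib

    def ring_position(name: str) -> int:
        """Map a peer name or key to a point on the ring."""
        return int.from_bytes(hashlib.sha256(name.encode()).digest()[:8], "big")

    peers = ["peer-A", "peer-B", "peer-C", "peer-D"]
    ring = sorted((ring_position(p), p) for p in peers)
    positions = [pos for pos, _ in ring]

    def responsible_peer(key: str) -> str:
        """Return the peer that should store the given key."""
        i = bisect.bisect_left(positions, ring_position(key)) % len(ring)
        return ring[i][1]

    for key in ["song.mp3", "photo.jpg", "notes.txt"]:
        print(key, "->", responsible_peer(key))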

The technique of “Byzantine fault tolerance” falls in the same category: a surprising and beautiful algorithm that can't yet be classed as great, due to lack of adoption. Byzantine fault tolerance allows certain computer systems to tolerate any type of error whatsoever (as long as there are not too many simultaneous errors). This contrasts with the more usual notion of fault tolerance, in which a system can survive more benign errors, such as the permanent failure of a disk drive or an operating system crash.
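
To give a flavor of what “not too many simultaneous errors” means, here is a tiny Python sketch of just the client side of a Byzantine-fault-tolerant read: the client accepts an answer only once f + 1 replicas agree on it, and classic results say the system needs at least 3f + 1 replicas in total to tolerate f arbitrary faults. The replies shown are invented stand-ins, not output from a real replicated system.

    # Client-side vote counting in a Byzantine-fault-tolerant read: with n
    # replicas and at most f of them arbitrarily faulty (n >= 3f + 1), a reply
    # backed by f + 1 replicas must include at least one honest replica.
    from collections import Counter

    def accept_reply(replies: list[str], f: int) -> str | None:
        """Return a reply reported by at least f + 1 replicas, or None."""
        value, count = Counter(replies).most_common(1)[0]
        return value if count >= f + 1 else None

    # n = 4 replicas tolerate f = 1 Byzantine fault; one replica lies.
    replies = ["balance=100", "balance=100", "balance=999", "balance=100"]
    print(accept_reply(replies, f=1))   # balance=100: the faulty replica is outvoted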

CAN GREAT ALGORITHMS FADE AWAY?

In addition to speculating about what algorithms might rise to greatness in the future, we might wonder whether any of our current “great” algorithms—indispensable tools that we use constantly without even thinking about them—might fade in importance. History can guide us here, too. If we restrict attention to particular algorithms, it is certainly true that algorithms can lose relevance. The most obvious example is in cryptography, in which there is a constant arms race between researchers inventing new crypto algorithms, and other researchers inventing ways to crack the security of those algorithms. As a specific instance, consider the so-called cryptographic hash functions. The hash function known as MD5 is an official internet standard and has been widely used since the early 1990s, yet significant security flaws have been found in MD5 since then, and its use is no longer recommended. Similarly, we discussed in chapter 9 the fact that the RSA digital signature scheme will be easy to crack if and when it becomes possible to build quantum computers of a reasonable size.

However, I think examples like this answer our question too narrowly. Sure, MD5 is broken (and, by the way, so is its main successor, SHA-1), but that doesn't mean the central idea of cryptographic hash functions is irrelevant. Indeed, such hash functions are used extremely widely, and there are plenty of uncracked ones out there. So, provided we take a broad enough view of the situation and are prepared to adapt the specifics of an algorithm while retaining its main ideas, it seems unlikely that many of our presently great algorithms will lose their importance in the future.
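
To underline the point that the specifics can change while the idea survives, here is a minimal Python sketch (the message is an invented example): the calling code looks identical whichever hash function is plugged in, which is exactly what makes it possible to retire MD5 or SHA-1 in favor of a currently unbroken function such as SHA-256 without abandoning cryptographic hashing itself.

    # The same calling code works for a broken hash function and a sound one;
    # only the algorithm name changes when an old function is retired.
    import hashlib

    message = b"Pay Alice $100"   # an invented example message

    for name in ["md5", "sha1", "sha256"]:
        digest = hashlib.new(name, message).hexdigest()
        print(f"{name:>6}: {digest}")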

WHAT HAVE WE LEARNED?

Are there any common themes that can be drawn out from the great algorithms presented here? One theme, which was a great surprise to me as the author of the book, is that all of the big ideas can be explained without requiring previous knowledge of computer programming or any other computer science. When I started work on the book, I assumed that the great algorithms would fall into two categories. The first category would be algorithms with some simple yet clever trick at their core—a trick that could be explained without requiring any technical knowledge. The second category would be algorithms that depended so intimately on advanced computer science ideas that they could not be explained to readers with no background in this area. I planned to include algorithms from this second category by giving some (hopefully) interesting historical anecdotes about the algorithms, explaining their important applications, and vociferously asserting that the algorithm was ingenious even though I couldn't explain how it worked. Imagine my surprise and delight when I discovered that all the chosen algorithms fell into the first category! To be sure, many important technical details were omitted, but in every case, the key mechanism that makes the whole thing work could be explained using nonspecialist notions.

Another important theme common to all our algorithms is that the field of computer science is much more than just programming. Whenever I teach an introductory computer science course, I ask the students to tell me what they think computer science actually is. By far the most common response is “programming,” or something equivalent such as “software engineering.” When pressed to provide additional aspects of computer science, many are stumped. But a common follow-up is something related to hardware, such as “hardware design.” This is strong evidence of a popular misconception about what computer scientists really do. Having read this book, I hope you have a much more concrete idea of the problems that computer scientists spend their time thinking about, and the types of solutions they come up with.

A simple analogy will help here. Suppose you meet a professor whose main research interest is Japanese literature. It is extremely likely that the professor can speak, read, and write Japanese. But if you were asked to guess what the professor spends the most time thinking about while conducting research, you would not guess “the Japanese language.” Rather, the Japanese language is a necessary piece of knowledge for studying the themes, culture, and history that comprise Japanese literature. On the other hand, someone who speaks perfect Japanese might be perfectly ignorant of Japanese literature (there are probably millions of such people in Japan).

The relationship between computer programming languages and the main ideas of computer science is quite similar. To implement and experiment with algorithms, computer science researchers need to convert the algorithms into computer programs, and each program is written in a programming language, such as Java, C++, or Python. Thus, knowledge of a programming language is essential for computer scientists, but it is merely a prerequisite: the main challenge is to invent, adapt, and understand algorithms. After seeing the great algorithms in this book, it is my hope that readers will have a much firmer grasp of this distinction.

THE END OF OUR TOUR

We've reached the end of our tour through the world of profound, yet everyday, computation. Did we achieve our goals? And will your interactions with computing devices be any different as a result?

Well, it's just possible that next time you visit a secure website, you'll be intrigued to know who vouched for its trustworthiness, and check out the chain of digital certificates that were inspected by your web browser (chapter 9). Or perhaps the next time an online transaction fails for an inexplicable reason, you'll be grateful instead of frustrated, knowing that database consistency should ensure you won't be charged for something you failed to order (chapter 8). Or maybe you'll be musing to yourself one day, “Now wouldn't it be nice if my computer could do this for me”—only to realize that it's an impossibility, because your wished-for task can be proved undecidable using the same method as for our crash-finding program (chapter 10).

I'm sure you can think of plenty more examples in which knowledge of the great algorithms might change the way you interact with a computer. However, as I was careful to state in the introduction, this wasn't the primary objective of the book. My chief goal was to give readers enough knowledge about the great algorithms that they gain a sense of wonder at some of their ordinary computing tasks—much as an amateur astronomer has a heightened appreciation of the night sky.

Only you, the reader, can know whether I succeeded in this goal. But one thing is certain: your own personal genius is right at your fingertips. Feel free to use it.
