Author: James Barrat
* * *
If the software problem turns out to be intractably complex, there are still at least two more arrows in the AGI seeker’s quiver. They are, first, to overpower the problem with faster computers, and second, to reverse engineer the brain.
Converting an AI system to AGI through brute force means increasing the performance of the AI’s hardware, particularly its speed. Intelligence and creativity are increased if they operate many times faster. To see how, imagine a human who could do a thousand minutes of thinking in one minute. In important ways, he’s many times more intelligent than someone with the same baseline IQ who thinks at normal speed. But does intelligence have to start at human level for an increase in speed to impact intelligence? For instance, if you speed up a dog’s brain a thousand times, do you get chimpanzee-equivalent behavior, or do you just get a very clever dog? We know that with a fourfold increase in brain size, from chimpanzee to human, humans acquired at least one new superpower—speech. But larger brains evolved incrementally, far more slowly than the rate at which processor speed routinely increases.
Overall, it’s not clear that, in the absence of intelligent software, processor speed alone could fill the gap and power the way to AGI and beyond to an intelligence explosion. But neither does it seem out of the question.
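To make the limits of brute force concrete, here is a toy sketch in Python; the routine, its scoring rule, and the speedup factor are purely hypothetical. Run on hardware a thousand times faster, the same fixed program hands back the identical answer sooner, never a better one.

```python
# Toy illustration: a fixed narrow-AI routine run on "faster hardware"
# finishes sooner but produces exactly the same output. The function,
# scoring rule, and speedup factor are hypothetical.
import time

def toy_score(move):
    # Arbitrary scoring rule: prefer moves closest to 3.
    return -abs(move - 3)

def best_move(moves):
    # Stand-in for any fixed narrow-AI routine: exhaustively score the options.
    return max(moves, key=toy_score)

def run(hardware_speedup):
    start = time.perf_counter()
    answer = best_move(range(10))
    elapsed = (time.perf_counter() - start) / hardware_speedup  # simulate a faster chip
    return answer, elapsed

print(run(hardware_speedup=1))     # (3, some tiny elapsed time)
print(run(hardware_speedup=1000))  # same answer, roughly 1,000x sooner
```

A thousandfold speedup compresses the clock; by itself it does not change what the program can conclude. That is the gap better software would have to fill.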
* * *
Now let’s turn to what’s called “reverse engineering” the brain and find out why it may be a fail-safe for the software complexity problem. So far we’ve briefly looked at the opposite approach—creating cognitive architectures that generally seek to model the brain in areas like perception and navigation. These cognitive systems are inspired by how the brain works, or—and this is important—how researchers perceive the brain works. They’re often called de novo, or “from the beginning,” systems because they’re not based on actual brains; they start from the ground up.
The problem is, systems that are inspired by cognitive models may ultimately fall short of accomplishing what a human brain does. While there’s been a lot of promising headway in natural language, vision, Q&A systems, and robotics, there’s disagreement over almost every aspect of the methodology and principles that will ultimately yield progress toward AGI. Subfields and bold universal theories emerge on the strength of early success or the promotional power of an individual or a university, and just as quickly vanish again. As Goertzel said, there is no generally accepted theory of intelligence or of how to achieve it computationally. Plus, there are functions of the human mind that current software techniques seem ill-equipped to address, including general learning, explanation, introspection, and controlling attention.
So what’s really been accomplished in AI? Consider the old joke about the drunk who loses his car keys and looks for them under a streetlight. A policeman joins the search and asks, “Exactly where did you lose your keys?” The man points down the street to a dark corner. “Over there,” he says. “But the light’s better here.”
Search, voice recognition, computer vision, and affinity analysis (the kind of machine learning Amazon and Netflix use to suggest what you might like) are some of the fields of AI that have seen the most success. Though they were the products of decades of research, they are also among the easiest problems, discovered where the light’s better. Researchers call them “low-hanging fruit.” But if your goal is AGI, then all the narrow AI applications and tools may seem like low-hanging fruit, getting you only marginally closer to your human-equivalent goal. Some researchers hold that narrow AI applications are in no way advancing AGI; they’re unintegrated specialist applications. And no artificial intelligence system right now smacks of general human equivalence. Are you also frustrated by big AI promises and paltry returns? Two widely made observations may have bearing on your feelings.
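Before turning to those observations, it helps to see how modest such low-hanging fruit can look under the hood. Here is a minimal sketch of affinity analysis: item-to-item similarity computed from a tiny ratings table. The users, items, and ratings below are invented for illustration; the production recommenders at Amazon and Netflix are vastly larger and more elaborate.

```python
# A minimal sketch of affinity analysis: rank items by how similarly
# users have rated them. All data here is made up for illustration.
import math

ratings = {  # user -> {item: rating}
    "ann":  {"A": 5, "B": 4, "C": 1},
    "bob":  {"A": 4, "B": 5},
    "cara": {"B": 1, "C": 5, "D": 4},
}

def item_vector(item):
    # Gather every user's rating of this item into a sparse vector.
    return {user: r[item] for user, r in ratings.items() if item in r}

def cosine(a, b):
    # Cosine similarity over the users both items share.
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def similar_items(item):
    # Items most often "liked together" with the given item, best first.
    target = item_vector(item)
    others = {i for r in ratings.values() for i in r} - {item}
    return sorted(others, key=lambda i: cosine(target, item_vector(i)), reverse=True)

print(similar_items("A"))  # e.g. ['B', 'C', 'D']
```

The whole technique fits in a few dozen lines: useful and profitable, but nowhere near general intelligence. Now, to those two observations.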
First, as Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, put it, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.” Not so long ago, AI was not embedded in banking, medicine, transportation, critical infrastructure, and automobiles. But today, if you suddenly removed all AI from these industries, you couldn’t get a loan, your electricity wouldn’t work, your car wouldn’t go, and most trains and subways would stop. Drug manufacturing would creak to a halt, faucets would run dry, and commercial jets would drop from the sky. Grocery stores wouldn’t be stocked, and stocks couldn’t be bought. And when were all these AI systems implemented? During the last thirty years, the so-called AI winter, a term used to describe a long decline in investor confidence after early, overly optimistic AI predictions proved false. But there was no real winter. To avoid the stigma of the label “artificial intelligence,” scientists used more technical names like machine learning, intelligent agents, probabilistic inference, advanced neural networks, and more.
And still the accreditation problem continues. Domains once thought exclusively human—chess and Jeopardy!, for example—now belong to computers (though we’re still allowed to play). But do you consider the chess game that came with your PC to be “artificial intelligence”? Is IBM’s Watson humanlike, or merely a specialized, high-powered Q&A system? What will we call scientists when computers, like Hod Lipson’s aptly named Golem at Cornell University, start doing science? My point is this: since the day John McCarthy gave the science of machine intelligence a name, researchers have been developing AI with alacrity and force, and it’s getting smarter, faster, and more powerful all the time.
AI’s success in domains like chess, physics, and natural language processing raises a second important observation. Hard things are easy, and easy things are hard. This axiom is known as Moravec’s Paradox, because AI and robotics pioneer Hans Moravec expressed it best in his robotics classic, Mind Children: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Puzzles so difficult that we can’t help but make mistakes, like playing Jeopardy! and deriving Newton’s second law of motion, fall in seconds to well-programmed AI. At the same time, no computer vision system can tell the difference between a dog and a cat—something most two-year-old humans can do. To some degree these are apples-and-oranges problems, high-level cognition versus low-level sensorimotor skill. But it should be a source of humility for AGI builders, since they aspire to master the whole spectrum of human intelligence. Apple cofounder Steve Wozniak has proposed an “easy” alternative to the Turing test that shows the complexity of simple tasks. We should deem any robot intelligent, Wozniak says, when it can walk into any home, find the coffeemaker and supplies, and make us a cup of coffee. You could call it the Mr. Coffee Test. But it may be harder than the Turing test, because it involves advanced AI in reasoning, physics, machine vision, accessing a vast knowledge database, precisely manipulating robot actuators, building a general-use robot body, and more.
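The game-playing half of the paradox can be made vivid in a few lines of code. The sketch below is a toy, not anything Moravec published: exhaustive search plays perfect tic-tac-toe, while no comparably short program can tell a dog from a cat.

```python
# A toy illustration of "hard things are easy": perfect tic-tac-toe by
# exhaustive minimax search over the whole game tree.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return "X" or "O" if someone has three in a row, else None.
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player` to move: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        opp_score, _ = minimax(child, "O" if player == "X" else "X")
        if -opp_score > best[0]:
            best = (-opp_score, m)
    return best

# Searches the full game tree; perfect play from an empty board is a draw.
print(minimax(" " * 9, "X"))
```

The perception half has no such short counterpart, and that asymmetry is the paradox.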
In a paper entitled “The Age of Robots,” Moravec provided a clue to his eponymous paradox. Why are the hard things easy and the easy things hard? Because our brains have been practicing and refining the “easy” things, involving vision, motion, and movement, since our nonhuman ancestors first had brains. “Hard” things like reason are relatively recently acquired abilities. And, guess what, they’re easier, not harder. It took computing to show us. Moravec wrote:
In hindsight it seems that, in an absolute sense, reasoning is much easier than perceiving and acting—a position not hard to rationalize in evolutionary terms. The survival of human beings (and their ancestors) has depended for hundreds of millions of years on seeing and moving in the physical world, and in that competition large parts of their brains have become efficiently organized for the task. But we didn’t appreciate this monumental skill because it is shared by every human being and most animals—it is commonplace. On the other hand, rational thinking, as in chess, is a newly acquired skill, perhaps less than one hundred thousand years old. The parts of our brain devoted to it are not well organized, and, in an absolute sense, we’re not very good at it. But until recently we had no competition to show us up.
That competition, of course, is computers. Making a computer that does something smart forces researchers to scrutinize themselves and other Homo sapiens, and to plumb the depths and shallows of our intelligence. In computation it is prudent to formalize ideas mathematically. In the field of AI, formalization reveals hidden rules and organization behind the things we do with our brains. So why not cut through the clutter and look at how a brain works from inside the brain, through close scrutiny of the neurons, axons, and dendrites? Why not figure out what each neuronal cluster in the brain does, and model it with algorithms? Since most AI researchers agree that we can solve the mysteries of how a brain works, why not just build a brain?
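Here is one small taste of what “model it with algorithms” means in practice: a sketch of a single leaky integrate-and-fire neuron, one of the simplest standard abstractions in computational neuroscience. The parameter values are typical textbook choices, not figures taken from any particular lab.

```python
# A minimal leaky integrate-and-fire neuron: membrane voltage leaks toward
# rest, integrates input current, and spikes when it crosses a threshold.
# Parameter values are illustrative defaults, not measured data.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Return the times (in ms of simulated time) at which the model neuron fires."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * i_in) * (dt / tau)  # leaky integration
        v += dv
        if v >= v_threshold:           # threshold crossing: spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant drive for 200 ms of simulated time produces a regular spike train:
print(simulate_lif([2.0] * 200))
```

Model enough such pieces faithfully, the reasoning goes, and you have built a brain.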
That’s the argument for “reverse engineering the brain,” the pursuit of creating a model of a brain with computers and then teaching it what it needs to know. As we discussed, it may be the solution for attaining AGI if software complexity turns out to be too hard. But then again, what if whole-brain emulation also turns out to be too hard? What if the brain is actually performing tasks we cannot engineer? In a recent article criticizing Kurzweil’s understanding of neuroscience, Microsoft cofounder Paul Allen and his colleague Mark Greaves wrote, “The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be.… In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors.” In other words, 200 million years of evolution have honed the brain into a finely optimized thinking instrument impossible to duplicate—
“No, no, no, no, no, no, no! Absolutely not. The brain is not optimized, nor is any other part of the mammalian body.”
Richard Granger’s eyes darted around in a panic, as if I’d let loose a bat in his office at Dartmouth College in Hanover, New Hampshire. Though a solid New England Yankee, Granger looks like a rock star in the British invasion mold—economically built, with boyish good looks under a mop of brown hair now turning to silver. He’s intense and watchful—the one band member who understands that playing electronic instruments in the rain is risky. Earlier in life, Granger actually had rock star ambitions, but instead became a world-class computational neuroscientist, now with several books and more than a hundred peer-reviewed papers to his credit. From a window-lined office high above the campus, he heads Dartmouth’s Brain Engineering Lab. It was here, at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, that AI first got its name. Today at Dartmouth, AI’s future lies in computational neuroscience—the study of the computational principles the brain uses to get things done.
“Our goal in computational neuroscience is to understand the brain sufficiently well to be able to simulate its functions. As simple robots today substitute for human physical abilities, in factories and hospitals, so brain engineering will construct stand-ins for our mental abilities. We’ll then be able to make simulacra of brains, and to fix ours when they break.”
If you’re a computational neuroscientist like Granger, you believe that simulating the brain is simply an engineering problem. And to believe that, you have to take the lofty human brain, king of all mammalian organs, and bring it down a few notches. Granger sees the brain in the context of other human body parts, none of which evolved to perfection.
“Think about it this way.” Granger flexed one hand and scrutinized it. “We are not, not, not, not optimized to have five fingers, to have hair over our eyes and not on our foreheads, to have noses between our eyes instead of to the left or the right. It’s laughable that any of those are optimizations. Mammals all have four limbs, they all have faces, they all have eyes above noses above mouths.” And, as it turns out, we all have almost the same brains. “All mammals, including humans, have exactly the same set of brain areas and they’re wired up unbelievably similarly,” Granger said. “The way evolution works is by randomly trying things and testing them, so you might think that all of those different things get tested out there in the laboratory of evolution and either stick around or don’t. But they don’t get tested.”
Nevertheless, evolution hit upon something remarkable when it arrived at the mammalian brain, said Granger. That’s why it has undergone only a few tweaks on the path from early mammals to us. Its parts are redundant, and its connections are imprecise and slow, but it uses engineering principles that we can learn from—nonstandard principles humans haven’t come up with yet. That’s why Granger believes creating intelligence has to start with a close study of the brain. He doesn’t think de novo cognitive architectures—those that aren’t derived from the principles of the brain—will ever get close.
“Brains, alone among organs, produce thought, learning, recognition,” he said. “No amount of engineering yet has equaled, let alone surpassed, brains’ abilities at any of these tasks. Despite huge efforts and large budgets, we have no artificial systems that rival humans at recognizing faces, nor understanding natural languages, nor learning from experience.”
So give our brains their due. It was brains, not brawn, that made us the dominant species on the planet. We didn’t get to the pinnacle by being prettier than the animals competing for our resources, or those that wanted to eat us. We outthought them, perhaps even when that competition was with other hominid species. Intelligence, not muscle, won the day.