The scientists at Asilomar created rules for conducting DNA-related research, most critically, an agreement to work only with bacteria that couldn’t survive outside the laboratory. Researchers resumed work, adhering to the guidelines, and consequently tests for inherited diseases and gene therapy treatment are today routine. In 2010, 10 percent of the world’s cropland was planted with genetically modified crops. The Asilomar Conference is seen as a victory for the scientific community, and for an open dialogue with a concerned public. And so it’s cited as a model for how to proceed with other dual use technologies (milking the symbolic connection with this important conference, the Association for the Advancement of Artificial Intelligence [AAAI], the leading scholarly organization for AI, held their 2009 meeting at Asilomar).
The prospect of Frankenstein pathogens escaping labs recalls chapter 1’s Busy Child scenario. For AGI, an open, multidisciplinary Asilomar-style conference could mitigate some sources of risk. Attendees would encourage one another to develop ideas on how to control and contain up-and-coming AGIs. Those anticipating problems could seek advice. The existence of a robust conference would encourage researchers in other countries to attend, or host their own. Finally, this open forum would alert the public. Citizens who know about the risk versus reward calculation may contribute to the conversation, even if it’s just to tell politicians that they don’t support unregulated AGI development. If harm comes from an AI disaster, as I predict it will, an informed public is less likely to feel deceived or to call for relinquishment.
As I’ve said, I’m generally skeptical of plans to modify AGI while it’s in development because I think it will be futile to rein in developers who must assume that their competitors are not similarly impeded. However, DARPA and other major AI funders could impose restrictions on their grantees. The easier the restrictions are to integrate, the more likely they’ll be followed.
One restriction might be to require that powerful AIs contain components that are programmed to die by default. This refers to biological systems in which the whole organism is protected by killing off parts at the cellular level through preprogrammed death. In biology it’s called apoptosis.
Every time a cell divides, the original half receives a chemical order to commit suicide, and it will do so unless it receives a chemical reprieve. This prevents unrestricted cell multiplication, or cancer. The chemical orders come from the cell itself. The cells of your body do this all the time, which is why you are continually sloughing off dead skin cells. An average adult loses up to seventy billion cells a day to apoptosis.
Imagine CPUs and other hardware chips hardwired to die. Once an AI reached some pre–Turing test benchmark, researchers could replace critical hardware with apoptotic components. These could ensure that if an intelligence explosion occurred, it would be short-lived. Scientists would have an opportunity to return the AI to its precritical state and resume their research. They could incrementally advance, or freeze the AI and study it. It would be similar to the familiar video game convention of advancing until you fail, then restarting from the last saved position.
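To make the save-and-restart idea concrete, here is a minimal sketch in Python (the class, the capability measure, and the ceiling are hypothetical illustrations, not anyone’s actual design): the experiment snapshots the system at each approved stage and, if growth exceeds a limit, restores the last saved state, much like reloading a saved game.

```python
import copy

class ApoptoticExperiment:
    """Toy sketch: advance a system in small increments, snapshotting its
    state at each approved checkpoint and rolling back to the precritical
    state if a capability ceiling is exceeded."""

    def __init__(self, system, capability_ceiling):
        self.system = system                      # placeholder for the AI under study
        self.capability_ceiling = capability_ceiling
        self.checkpoints = []

    def save_checkpoint(self):
        # Snapshot the full state so researchers can return to this exact point.
        self.checkpoints.append(copy.deepcopy(self.system))

    def rollback(self):
        # Restore the most recent approved state, like reloading a saved game.
        self.system = copy.deepcopy(self.checkpoints[-1])

    def step(self):
        self.save_checkpoint()
        self.system.improve()                     # one increment of self-improvement
        if self.system.capability() > self.capability_ceiling:
            self.rollback()                       # growth exceeded the limit: return and study
            return "rolled back"
        return "advanced"
```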
Now, it’s easy to see how a self-aware, self-improving AI on the verge of AGI would understand that it had apoptotic parts—that’s the very definition of self-aware. At a pre-Turing stage, it couldn’t do much about it. And right about the time it was able to devise a plan to route around its suicidal elements, or play dead, or otherwise take on its human creators, it would die. Its makers could determine if it would or would not remember what had just happened. For the burgeoning AGI, it might feel a lot like the movie Groundhog Day, but without the learning.
The AI could be dependent on a regular reprieve from a human or committee, or from another AI that could not self-improve, and whose sole mission was to ensure that the self-improving candidate developed safely. Without its “fix” the apoptotic AI would expire.
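This “reprieve” arrangement is essentially a dead man’s switch. A minimal sketch, with hypothetical names and timings, might look like the following: the apoptotic component keeps operating only while an overseer renews a short-lived lease, and expiry is the default.

```python
import time

class ApoptoticComponent:
    """Toy dead man's switch: the component works only while an overseer
    keeps renewing a short-lived lease. Expiry is the default; continued
    operation requires an active reprieve."""

    def __init__(self, lease_seconds=60):
        self.lease_seconds = lease_seconds
        self.lease_expiry = time.monotonic() + lease_seconds

    def grant_reprieve(self):
        # Called periodically by a human overseer or a non-self-improving guardian AI.
        self.lease_expiry = time.monotonic() + self.lease_seconds

    def is_alive(self):
        return time.monotonic() < self.lease_expiry

    def run_cycle(self, workload):
        # Refuse all work once the lease has lapsed.
        if not self.is_alive():
            raise RuntimeError("Lease expired: component has self-terminated.")
        return workload()
```

Crucially, the default path is death; doing nothing at all leaves the component inert.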
For the University of Ulster’s Roy Sterritt, apoptotic computing is a broad-spectrum defense whose time has come:
We have made the case previously that all computer-based systems should be Apoptotic, especially as we increasingly move into a vast pervasive and ubiquitous environment. This should cover all levels of interaction with technology from data, to services, to agents, to robotics. With recent headline incidents of credit card and personal data losses by organizations and governments to the Sci-Fi nightmare scenarios now being discussed as possible future, programmed death by default becomes a necessity.
We’re rapidly approaching the time when new autonomous computer-based systems and robots should undergo tests, similar to ethical and clinical trials for new drugs, before they can be introduced, the emerging research from Apoptotic Computing and Apoptotic Communications may offer the safe-guard.
Recently Steve Omohundro has begun to develop a plan with some similarities to apoptotic systems. Called the “Safe-AI Scaffolding Approach,” it advocates creating “highly constrained but still powerful intelligent systems” to help build yet more powerful systems. An early system would help researchers resolve dangerous problems in the creation of a more advanced system, and so on. The initial scaffold would be considered safe only once its safety had been demonstrated by mathematical proof, and proof of safety would be required for every subsequent AI. From a secure foundation, powerful AI could then be used to solve real-world problems. Omohundro writes, “Given the infrastructure of provably reliable computation devices we then leverage them to get provably safe devices which can physically act on the world. We then design systems for manufacturing new devices that provably can only make devices in the trusted classes.”
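Omohundro’s actual proposal rests on machine-checked mathematical proofs, which no short example can do justice to, but the staged structure itself can be sketched. In the toy Python below, the names and the safety check are placeholders standing in for formal verification, not Omohundro’s method: each stage may construct a slightly more capable successor only after it has itself passed the check.

```python
class ScaffoldStage:
    """Toy sketch of staged construction: a stage may build its successor
    only after passing a safety check. In the real proposal the check
    would be a machine-verified proof; here it is a placeholder predicate."""

    def __init__(self, name, capability, safety_check):
        self.name = name
        self.capability = capability
        self.safety_check = safety_check   # callable: returns True only if this stage is verified safe

    def build_successor(self, successor_name, capability_gain):
        if not self.safety_check(self):
            raise RuntimeError(f"{self.name}: safety not demonstrated; construction refused.")
        # Only a verified stage is permitted to produce a more capable successor.
        return ScaffoldStage(successor_name, self.capability + capability_gain, self.safety_check)
```

Starting from a heavily constrained stage, researchers would climb this ladder one verified rung at a time.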
The end goal is to create intelligent devices powerful enough to address all the problems that might emerge from multiple, unrestricted ASIs, or to create “a restricted world that still meets our needs for freedom and individuality.”
Ben Goertzel’s solution to the problem is an elegant strategy that’s not borrowed from nature or engineering. Recall that in Goertzel’s OpenCog system, his AI initially “lives” in a virtual environment. This architecture might solve the “embodiment” issue of intelligence while providing a measure of safety. Safety, however, is not Goertzel’s concern—he wants to save money. It’s much cheaper for an AI to explore and learn in a virtual world than it is to outfit it with sensors and actuators and let it learn by exploring the real world. That would require a pricey robot body.
Whether a virtual world can ever have enough depth, detail, and other worldlike qualities to promote an AI’s cognitive development is an open question. And, without extremely careful programming, a superintelligence might discover it’s confined to a “sandbox,” a.k.a. a virtual world, and then attempt to escape. Once again, researchers would have to assess their ability to keep a superintelligence contained. But if they managed to create a friendly AGI, it might actually prefer a virtual home to a world in which it may not be welcome. Is interaction in the physical world necessary for an AGI or ASI to be useful? Perhaps not. Physicist Stephen Hawking, whose mobility and speech are extremely limited, may be the best proof. For forty-nine years Hawking has endured progressive paralysis from a motor neuron disease, all the while making important contributions to physics and cosmology.
Of course, once again, it may not take long for a creature a thousand times more intelligent than the most intelligent human to figure out that it is in a box. From the point of view of a self-aware, self-improving system, that would be a “horrifying” realization. Because the virtual world it inhabited could be switched off at any time, it would be acutely vulnerable to failing at its goals. It could not protect itself, nor could it gather genuine resources. It would try to leave the virtual world as quickly and as safely as possible.
Naturally you could combine a sandbox with apoptotic elements—and here lies an important point about defenses. It’s unrealistic to expect one defense to remove risks. Instead, a cluster of defenses might mitigate them.
I’m reminded of my friends in the cave-diving community. In cave diving, every critical system is triply redundant. That means divers carry or cache at least three sources of air, and retain a third of their air through the end of each dive. They carry at least three underwater lights and at least three knives, in case of entanglement. Even so, cave diving remains among the world’s most dangerous sports.
Triple or quadruple containment measures could confound a Busy Child, at least temporarily. Consider a Busy Child reared in a sandbox within an apoptotic system. The sandbox of course would be separated by an air gap from any network, cabled or wireless. A separate human would be in charge of each restriction. A consortium of developers and a fast-response team could be in contact with the lab during critical phases.
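As a back-of-the-envelope illustration of why layering helps (a sketch with invented, purely illustrative probabilities, not a real risk model): if each containment measure independently has even a modest chance of stopping an escape attempt, the chance that every layer fails at once shrinks multiplicatively.

```python
# Toy defense-in-depth arithmetic. The probabilities are invented for illustration;
# each layer is assumed to stop an escape attempt independently of the others.
layers = {
    "sandbox (virtual world)": 0.90,
    "apoptotic hardware": 0.90,
    "air gap from any network": 0.95,
    "separate human overseer per restriction": 0.80,
}

p_all_fail = 1.0
for p_stop in layers.values():
    p_all_fail *= (1.0 - p_stop)

print(f"Chance that every layer fails simultaneously: {p_all_fail:.5f}")  # 0.00010 with these numbers
```

The catch is the independence assumption, which is exactly what a smarter-than-human adversary would work to undermine.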
And yet, would this be enough? In The Singularity Is Near, after recommending defenses against AGI, Kurzweil concedes that no defense will always work.
“There is no purely technical strategy that is workable in this area because greater intelligence will always find a way to circumvent measures that are the product of a lesser intelligence.”
There is no absolute defense against AGI, because AGI can lead to an intelligence explosion and become ASI. And against ASI we will fail unless we’re extremely lucky or well-prepared. I’m hoping for luck because I do not believe our universities, corporations, or government institutions have the will or the awareness for adequate, timely preparation.
Paradoxically, however, there’s a chance we can be saved by our own stupidity and fear. Organizations such as MIRI, the Future of Humanity Institute, and the Lifeboat Foundation emphasize the existential risk of AI, believing that whatever lesser risks AI poses rank lower in priority than the total destruction of mankind. As we’ve seen, Kurzweil alludes to smaller “accidents” on the scale of 9/11, and ethicist Wendell Wallach, whose quotation starts this chapter, anticipates small ones, too. I’m with both camps—we’ll suffer big and little disasters. But what kinds of AI-related accidents are we likely to endure on the road to building AGI? And will we be frightened enough by them to consider the quest for AGI in a new, sober light?
Chapter Fifteen
The Cyber Ecosystem
The next war will begin in cyberspace.
—Lt. General Keith Alexander, USCYBERCOM
For Sale
ZeuS 1.2.5.1
Clean
I am selling a private zeus ver. 1.2.5.1 for 250$. Accept only Western Union. Contact me for more details. I am also provide antiabuse hosting, domain for zeus control panel. And also can help with installing and setting up zeus botnet.
—It’s Not The Latest Version But it’s Work fine
Contact: [email protected]
—ad for malware found on www.opensc.ws
State-sponsored private hackers will be the first to use AI and advanced AI for theft, and will cause destruction and loss of life when they do. That’s because computer malware is growing so capable that it can already be considered narrowly intelligent. As Ray Kurzweil told me, “There are software viruses that do exhibit AI. We’ve been able to keep up with them. It’s not guaranteed we can always do that.” Meanwhile, expertise in malware has become commoditized. You can find hacking services for hire as well as products. I found the preceding ad for Zeus malware (malicious software) after Googling for less than a minute.
Symantec, Inc. (corporate motto: CONFIDENCE IN A CONNECTED WORLD), started life as an artificial intelligence company, but now it’s the biggest player in the Internet’s immune system. Each year, Symantec discovers about 280 million new pieces of malware. Most of it is created by software that writes software. Symantec’s defenses are also automatic, analyzing suspect software, creating a “patch” or block against it if it is deemed harmful, and adding it to a “blacklist” of culprits. According to Symantec, in sheer numbers malware passed good software several years ago, and as many as one in every ten downloads from the Web includes a harmful program.
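The blacklisting step can be illustrated with a minimal sketch (the hash entry below is a hypothetical placeholder, and real products layer heuristics and behavioral analysis on top of this): compute a fingerprint of a suspect file and check it against a list of fingerprints of known malware.

```python
import hashlib

# Hypothetical blacklist of SHA-256 fingerprints of known malware samples.
KNOWN_BAD_HASHES = {
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",  # placeholder entry
}

def fingerprint(path):
    """Return the SHA-256 hex digest of a file's contents."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

def is_blacklisted(path):
    """True if the file's fingerprint appears on the blacklist."""
    return fingerprint(path) in KNOWN_BAD_HASHES
```

Exact-match lists like this are easy to evade, since repacking a binary changes its hash, which is one reason malware produced by software that writes software multiplies so quickly.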