TRUE NAMES

by Vernor Vinge


“I’ll bet you wonder how anyone so daydreamy could be the Erythrina you knew in the Other Plane.” “Why, no,” he lied. “You seem perfectly lucid to me.”

“Lucid, yes. I am still that, thank God. But I know — and no one has to tell me — that I can’t support a train of thought like I could before. These last two or three years, I’ve found that my mind can wander, can drop into reminiscence, at the most inconvenient times. I’ve had one stroke, and about all ‘the miracles of modern medicine’ can do for me is predict that it will not be the last one.

“But in the Other Plane, I can compensate. It’s easy for the EEG to detect failure of attention. I’ve written a package that keeps a thirty-second backup; when distraction is detected, it forces attention and reloads my short-term memory. Most of the time, this gives me better concentration than I’ve ever had in my life. And when there is a really serious wandering of attention, the package can interpolate for a number of seconds. You may have noticed that, though perhaps you mistook it for poor communications coordination.”

She reached a thin, blue-veined hand toward him. He took it in his own. It felt so light and dry, but it returned his squeeze. “It really is me — Ery — inside, Slip.”

He nodded, feeling a lump in his throat.

“When I was a kid, there was this song, something about us all being aging children. And it’s so very, very true. Inside I still feel like a youngster. But on this plane, no one else can see…”

“But I know, Ery. We knew each other on the Other Plane, and I know what you truly are. Both of us are so much more there than we could ever be here.” This was all true: even with the restrictions they put on him now, he had a hard time understanding all he did on the Other Plane. What he had become since the spring was a fuzzy dream to him when he was down in the physical world. Sometimes he felt like a fish trying to imagine what a man in an airplane might be feeling. He never spoke of it like this to Virginia and her friends: they would be sure he had finally gone crazy. It was far beyond what he had known as a warlock. And what they had been those brief minutes last spring had been equally far beyond that.

“Yes, I think you do know me, Slip. And we’ll be … friends as long as this body lasts. And when I’m gone —” “I’ll remember; I’ll always remember you, Ery.” She smiled and squeezed his hand again. “Thanks. But that’s not what I was getting at….” Her gaze drifted off again. “I figured out who the Mailman was and I wanted to tell you.”

Pollack could imagine Virginia and the other DoW eavesdroppers hunkering down to their spy equipment. “I hoped you knew something.” He went on to tell her about the Slimey Limey’s detection of Mailman-like operations still on the System. He spoke carefully, knowing that he had two audiences.

Ery — even now he couldn’t think of her as Debby — nodded. “I’ve been watching the Coven. They’ve grown, these last months. I think they take themselves more seriously now. In the old days, they never would have noticed what the Limey warned you about. But it’s not the Mailman he saw, Slip.”

“How can you be sure, Ery? We never killed more than his service programs and his simulators, like DON.MAC. We never found his True Name. We don’t even know if he’s human or some science-fictional alien.”

“You’re wrong, Slip. I know what the Limey saw, and I know who the Mailman is — or was,” she spoke quietly, but with certainty. “It turns out the Mailman was the greatest cliché of the Computer Age, maybe of the entire Age of Science.”

“Huh?”

“You’ve seen plenty of personality simulators in the Other Plane. DON.MAC — at least as he was rewritten by the Mailman — was good enough to fool normal warlocks. Even Alan, the Coven’s elemental, shows plenty of human emotion and cunning.” Pollack thought of the new Alan, so ferocious and intimidating. The Turing T-shirt was beneath his dignity now. “Even so, Slip, I don’t think you’ve ever believed you could be permanently fooled by a simulation, have you?”

“Wait. Are you trying to tell me that the Mailman was just another simulator? That the time lag was just to obscure the fact that he was a simulator? That’s ridiculous. You know his powers were more than human, almost as great as ours became.” “But do you think you could ever be fooled?” “Frankly, no. If you talk to one of those things long enough, they display a repetitiveness, an inflexibility that’s a giveaway. I don’t know; maybe someday there’ll be programs that can pass the Turing test. But whatever it is that makes a person a person is terribly complicated. Simulation is the wrong way to get at it, because being a person is more than symptoms. A program that was a person would use enormous data bases, and if the processors running it were the sort we have now, you certainly couldn’t expect real-time interaction with the outside world.” And Pollack suddenly had a glimmer of what she was thinking.

“That’s the critical point, Slip: if you want real-time interaction. But the Mailman — the sentient, conversational part — never did operate in real time. We thought the lag was a communications delay that showed the operator was off-planet, but really he was here all the time. It just took him hours of processing time to sustain seconds of self-awareness.”

Pollack opened his mouth, but nothing came out. It went against all his intuition, almost against what religion he had, but it might just barely be possible. The Mailman had controlled immense resources. All his quick time reactions could have been the work of ordinary programs and simulators like DON.MAC. The only evidence they had for his humanity were those teleprinter conversations where his responses were spread over hours.

“Okay, for the sake of argument, let’s say it’s possible. Someone, somewhere had to write the original Mailman. Who was that?”

“Who would you guess? The government, of course. About ten years ago. It was an NSA team trying to automate system protection. Some brilliant people, but they could never really get it off the ground. They wrote a developmental kernel that by itself was not especially effective or aware. It was designed to live within large systems and gradually grow in power and awareness, independent of what policies or mistakes the operators of the system might make.

“The program managers saw the Frankenstein analogy — or at least they saw a threat to their personal power — and quashed the project. In any case, it was very expensive. The program executed slowly and gobbled incredible data space.”

“And you’re saying that someone conveniently left a copy running all unknown?”

She seemed to miss the sarcasm. “It’s not that unlikely. Research types are fairly careless outside of their immediate focus. When I was in FoG, we lost thousands of megabytes ‘between the cracks’ of our data bases. And back then, that was a lot of memory. The development kernel is not very large. My guess is a copy was left in the system. Remember, the kernel was designed to live untended if it ever started executing. Over the years it slowly grew — both because of its natural tendencies and because of the increased power of the nets it lived in.”

Pollack sat back on the sofa. Her voice was tiny and frail, so unlike the warm, rich tones he remembered from the Other Plane. But she spoke with the same authority.

Debby’s — Erythrina’s — pale eyes stared off beyond the walls of the apt, dreaming. “You know, they are right to be afraid,” she said finally. “Their world is ending. Even without us, there would still be the Limey, the Coven — and someday most of the human race.”

Damn. Pollack was momentarily tongue-tied, trying desperately to think of something to mollify the threat implicit in Ery’s words. Doesn’t she understand that DoW would never let us talk unbugged? Doesn’t she know how trigger-happy scared the top Feds must be by now? But before he could say anything, Ery glanced at him, saw the consternation in his face, and smiled. The tiny hand patted his. “Don’t worry, Slip. The Feds are listening, but what they’re hearing is tearful chitchat — you overcome to find me what I am, and me trying to console the both of us. They will never know what I really tell you here. They will never know about the gun the local boys took off you.”

“What?”

“You see, I lied a little. I know why you really came. I know you thought that I might be the new monster. But I don’t want to lie to you anymore. You risked your life to find out the truth, when you could have just told the Feds what you guessed.” She went on, taking advantage of his stupefied silence. “Did you ever wonder what I did in those last minutes this spring, after we surrendered — when I lagged behind you in the Other Plane?

“It’s true, we really did destroy the Mailman; that’s what all that unintelligible data space we plowed up was. I’m sure there are copies of the kernel hidden here and there, like little cancers in the System, but we can control them one by one as they appear.

“I guessed what had happened when I saw all that space, and I had plenty of time to study what was left, even to trace back to the original research project. Poor little Mailman, like the monsters of fiction he was only doing what he had been designed to do. He was taking over the System, protecting it from everyone — even its owners. I suspect he would have announced himself in the end and used some sort of nuclear blackmail to bring the rest of the world into line. But even though his programs had been running for several years, he had only had fifteen or twenty hours of human-type self-awareness when we did him in. His personality programs were that slow. He never attained the level of consciousness you and I had on the System.

“But he really was self-aware, and that was the triumph of it all. And in those few minutes, I figured out how I could adapt the basic kernel to accept any input personality. … That is what I really wanted to tell you.”

“Then what the Limey saw was —”

She nodded. “Me …”

She was grinning now, an open though conspiratorial grin that was very familiar. “When Bertrand Russell was very old, and probably as dotty as I am now, he talked of spreading his interests and attention out to the greater world and away from his own body, so that when that body died he would scarcely notice it, his whole consciousness would be so diluted through the outside world.

“For him, it was wishful thinking, of course. But not for me. My kernel is out there in the System. Every time I’m there, I transfer a little more of myself. The kernel is growing into a true Erythrina, who is also truly me. When this body dies,” she squeezed his hand with hers, “when this body dies, I will still be, and you can still talk to me.”

“Like the Mailman?”

“Slow like the Mailman. At least till I design faster processors….

“… So in a way, I am everything you and the Limey were afraid of. You could probably still stop me, Slip.” And he sensed that she was awaiting his judgment, the last judgment any mere human would ever be allowed to levy upon her.

Slip shook his head and smiled at her, thinking of the slow-moving guardian angel that she would become. Every race must arrive at this point in its history, he suddenly realized. A few years or decades in which its future slavery or greatness rests on the goodwill of one or two persons. It could have been the Mailman. Thank God it was Ery instead. And beyond those years or decades… for an instant, Pollack came near to understanding things that had once been obvious. Processors kept getting faster, memories larger. What now took a planet’s resources would someday be possessed by everyone. Including himself.

Beyond those years or decades… were millennia. And Ery.

Vernor Vinge,
San Diego
June 1979 — January 1980

AFTERWORD by Marvin Minsky
October 1, 1984

In real life, you often have to deal with things you don’t completely understand. You drive a car, not knowing how its engine works. You ride as passenger in someone else’s car, not knowing how that driver works. And strangest of all, you sometimes drive yourself to work, not knowing how you work, yourself.

Then, how do we manage to cope with things we don’t understand? And, how do we ever understand anything in the first place? Almost always, I think, by using analogies: by pretending that each alien thing we see resembles something we already know. Whenever an object’s internal workings are too strange, complicated, or unknown to deal with directly, we try to extract what parts of its behavior seem familiar—and then represent them by familiar symbols—that is, the names of things we already know which we think behave in similar ways. That way, we make each novelty at least appear to be like something we already know from our own pasts. It is a great idea, that use of symbols. It lets our minds transform the strange into the commonplace. It is the same with names.

For example, suppose that some architect invented a new way to go from one place to another: a device which serves in some respects the normal functions of a door, but one whose form and mechanism is so entirely outside our past experience that, to see it, we’d never think of it as a door, nor guess what purpose to use it for. No matter: just superimpose, on its exterior, some decoration that reminds one of a door. We could clothe it in a rectangular shape, or add to it a waist-high knob, or a push-plate, or a sign lettered “EXIT” in red and white, or do whatever else may seem appropriate—and every visitor will know, without a conscious thought, that pseudo-portal’s purpose, and how to make it do its job.

At first this may seem mere trickery; after all, this new invention, which we decorate to look like a door, is not really a door. It has none of what we normally expect a door to be, e.g., some sort of hinged, swinging slab of wood, cut into a wall. The inner details are all wrong. Names and symbols, like analogies, are only partial truths; they work by taking many-leveled descriptions of different things and chopping off all of what seem, in the present context, to be their least essential details—that is, the ones which matter least to our intended purposes. But still, what matters—when it comes to using such a thing—is that whatever symbol or icon, token or sign we choose should re-mind us of the use we seek—which, for that not-quite-door, should represent some way to go from one place to another. Who cares how it works, so long as it works! It does not even matter if that “door” does not open to anywhere: in TRUE NAMES the protagonists’ bodies never move at all, but remain plugged-in to the network while programs change their representations of the simulated realities!

And strangely, this is also so inside the ordinary brain: it, too, lacks any real sense of where it is! To be sure, most modern, educated people know that thought proceeds inside the brain—but that is something no brain knows until it’s told. Without the help of education, a human brain has no idea that any such thing as a brain exists. To be sure, we tend to imagine our thoughts as in some vague place behind the face, because that’s where so many sense organs are; yet even that impression is wrong: brain-centers for vision are far away, in the back of the head, where no naive brain would expect them to be.

An icon’s job is not to represent the truth about how an object (or program) works. An icon’s purpose is, instead, to represent how that thing can be used! And since the idea of a use is in the user’s mind—and not inside the thing itself—the form and figure of the icon must be suited to the symbols that have accumulated in the user’s own development. It has to be connected to whatever mental processes are already used for expressing the user’s intentions.

This principle, of choosing symbols and icons which express the functions of things (or rather, their users’ intended attitudes toward them) was already second nature to the designers of the earliest fast-interaction computer systems, namely, the early computer games. In the 1970s the meaningful-icon idea was developed for personal computers by Alan Kay’s research group at Xerox, but it was only in the early 1980s (through the work of Steve Jobs’ development group at Apple Computer) that this concept entered the mainstream of the computer revolution.

Over the same period, there were also some less-publicized attempts to develop iconic ways to represent, not what the programs do, but how they work. This would be more useful for the different enterprise of making it easier for programmers to make new programs from old ones. Such attempts have been less successful, on the whole, perhaps because it is hard to decide how much to specify about the lower-level details of how the programs work. But such difficulties do not much obscure Vinge’s vision, for he seems to regard present-day forms of programming — with their stiff, formal, inexpressive languages—as but an early stage of how better programs will be made in the future.

I too am convinced that the days of programming as we know it are numbered, and that eventually we will construct large computer systems not by anything resembling today’s meticulous but conceptually impoverished procedural specifications. Instead, we’ll express our intentions about what should be done in terms of gestures and examples that will be better designed for expressing our wishes and convictions. Then these expressions will be submitted to immense, intelligent, intention-understanding programs that then will themselves construct the actual, new programs. We shall no longer need to understand the inner details of how those programs work; that job will be left to those new, great utility programs, which will perform the arduous tasks of applying the knowledge that we have embodied in them, once and for all, about the arts of lower-level programming. Once we learn better ways to tell computers what we want them to accomplish, we will be more able to return to our actual goals–of expressing our own wants and needs. In the end, no user really cares about how a program works, but only about what it does—in the sense of the desired effects it has on things which the user cares about.

In order for that to happen, though, we will have to invent and learn to use new technologies for “expressing intentions”. To do this, we will have to break away from our old, though still evolving, programming languages, which are useful only for describing processes. But this brings with it some serious risks!

The first risk is that it is always dangerous to try to relieve ourselves of the responsibility of understanding exactly how our wishes will be realized. Whenever we leave the choice of means to any servants we may choose, then the greater the range of possible methods we leave to those servants, the more we expose ourselves to accidents and incidents. When we delegate those responsibilities, then we may not realize, before it is too late to turn back, that our goals have been misinterpreted, perhaps even maliciously. We see this in such classic tales of fate as Faust, the Sorcerer’s Apprentice, or the Monkey’s Paw by W.W. Jacobs.

A second risk is exposure to the consequences of self-deception. It is always tempting to say to oneself, when writing a program, or writing an essay, or, for that matter, doing almost anything, that “I know what I would like to happen, but I can’t quite express it clearly enough”. However, that concept itself reflects a too-simplistic self-image, which portrays one’s own self as existing, somewhere in the heart of one’s mind (so to speak), in the form of a pure, uncomplicated entity which has well-defined wishes, intentions, and goals. This pre-Freudian image serves to excuse our frequent appearances of ambivalence; we convince ourselves that clarifying our intentions is merely a matter of straightening-out the input-output channels between our inner and outer selves. The trouble is, we simply aren’t made that way. Our goals themselves are ambiguous.

The ultimate risk comes when our greedy, lazy master-minds attempt to take that final step—of designing goal-achieving programs that are programmed to make themselves grow increasingly powerful, by self-evolving methods that augment and enhance their own capabilities. It will be tempting to do this, both to gain power and to decrease our own effort toward clarifying our own desires. If some genie offered you three wishes, would not your first one be, “Tell me, please, what is it that I want the most!” The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours, as it were. The machine’s goals may be allegedly benevolent, as with the robots of With Folded Hands, by Jack Williamson, whose explicit purpose was to protect us from harming ourselves, or as with the robot in Colossus, by D.F. Jones, which itself decides, at whatever cost, to save us from an unsuspected enemy. In the case of Arthur C. Clarke’s HAL, the machine decides that the mission we have assigned to it is one we cannot properly appreciate. And in Vernor Vinge’s computer-game fantasy, True Names, the dreaded Mailman (who teletypes its messages because it cannot spare the time to don disguises of dissimulated flesh) evolves new ambitions of its own.

Would it be possible to duplicate the character of a human person as another Self inside a machine? Is anything like that conceivable? And if it were, then would those simulated computer-people be in any sense the same or genuine extensions of those real people? Or would they merely be new, artificial, person-things that resemble their originals only through some sort of structural coincidence? To answer that, we have to think more carefully about what people are—about the nature of our selves. We have to think more carefully about what an individual is.

A simplistic way to think about this is to assume that inside every normal person’s mind there is a certain portion, which we call the Self, that uses symbols and representations very much like the magical signs and symbols used by sorcerers to work their spells. For we already use such magical incantations, in much the same ways, to control those hosts of subsystems within ourselves. That surely is more or less how we do so many things we don’t understand.

To begin with, we humans know less about the insides of our minds than we know about the outside world. Let me spell that out: compared to what we understand about how real objects work, we understand virtually nothing about what happens in the great computers inside our brains. Doesn’t it seem strange that we can think, not knowing what it means to think? Isn’t it bizarre that we can get ideas, yet not be able to explain what ideas are, or how they’re found, or grown, or made? Isn’t it strange how often we can better understand what our friends do than what we do ourselves?

Consider again, how, when you drive, you guide the immense momentum of a car, not knowing how its engine works, or how its steering wheel directs the vehicle toward left or right. Yet, when we come to think of it, it is the same with our own bodies; so far as conscious thought is concerned, the way you operate your mind is very similar: you set yourself in a certain goal-direction—as though you were turning a mental steering wheel to set a course for your thoughts to take. All you are aware of is some general intention—“It’s time to go: where is the door?”—and all the rest takes care of itself. But did you ever consider the complicated processes involved in such an ordinary act as, when you walk, to change the direction you’re going in? It is not just a matter of, say, taking a larger or smaller step on one side, the way one changes course when rowing a boat. If that were all you did, when walking, you would tip over and fall toward the outside of the turn.

Try this experiment: watch yourself carefully while turning—and you’ll notice that before you start the turn, you tip yourself in advance; this makes you start to fall toward the inside of the turn; then, when you catch yourself on the next step, you end up moving in a different direction. When we examine that more closely, it all turns out to be dreadfully complicated: hundreds of interconnected muscles, bones, and joints are all controlled simultaneously by interacting programs that our locomotion-scientists still barely comprehend. Yet all that your conscious mind need do, or say, or think, is “Go that way!” So far as one can see, we guide the vast machines inside ourselves, not by using technical and insightful schemes based on knowing how the underlying mechanisms work, but by tokens, signs, and symbols which are entirely as fanciful as those of Vinge’s sorcery. It’s enough to make one wonder if it’s fair for us to gain our ends by casting spells upon our helpless hordes of mental under-thralls.

Now take another mental step to see that, just as we walk without thinking, we also think without thinking! That is, in much the same way, we also exploit the agencies that carry out our mental work. Suppose you have a hard problem. You think about it for a while; then after a time you find a solution. Perhaps the answer comes to you suddenly; you get an idea and say, “Aha, I’ve got it. I’ll do such and such.” But then, were someone to ask how you did it, how you found the solution, you simply would not know how to reply. People usually are able to say only things like this:

“I suddenly realized…”

“I just got this idea…”

“It occurred to me that…”

“It came to me…”

If people really knew how their minds work, we wouldn’t so often act on motives which we don’t suspect, nor would we have such varied theories in Psychology. Why, when we’re asked how people come upon their good ideas, are we reduced to superficial reproductive metaphors, to talk about “conceiving” or “gestating”, or even “giving birth” to thoughts? We even speak of “ruminating” or “digesting”—as though our minds were anywhere but in our heads. And, worst of all, we see ourselves as set adrift upon some chartless mental sea, with minds like floating nets which wait to catch whatever sudden thought-fish may get trapped inside! If we could see inside our minds we’d surely say more useful things than “Wait. I’m thinking.”
