Cryptonomicon

Authors: Neal Stephenson



The raw data goes to an onboard SPARCstation that performs data assessment in real time as a sort of quality assurance check, then streams the numbers onto DAT cassettes. The survey team is keeping an eye on the results, watching for any formations through which cable cannot
be run. These are found more frequently in the Indian than in the Atlantic Ocean, mostly because the Atlantic has been charted more thoroughly.

Steep slopes are out. A cable that traverses a steep slope will always want to slide down it sideways, secretly rendering every nautical chart in the world obsolete while imposing unknown stresses on the cable. This and other constraints may throw an impassable barrier across the proposed route of the cable. When this happens, the survey ship has to backtrack, move sideways, and survey other corridors parallel and adjacent to the first one, gradually building a map of a broader area, until a way around the obstruction is found. The proposed route is redrafted, and the survey ship proceeds.

The result is a shitload of DAT tapes and a good deal of other data as well. For example, in water less than 1,200 meters deep, they also use sidescan sonar to generate analog pictures of the bottom — these look something like black-and-white photographs taken with a point light source, with the exception that shadows are white instead of black. It is possible to scan the same area from several different directions and then digitally combine the images to make something that looks just like a photo. This may provide crucial information that would never show up on the survey — for example, a dense pattern of anchor scars indicates that this is not a good place to lay a cable. The survey ship can also drop a flowmeter that will provide information about currents in the ocean.

The result of all this, in the case of the FLAG survey, was about a billion data points for the bathymetric survey alone, plus a mass of sidescan sonar plots and other documentation. The tapes and the plots filled a room about 5 meters square all the way to the ceiling. The quantity of data involved was so vast that to manage it on paper, while it might have been theoretically possible given unlimited resources, was practically impossible given that FLAG is
run by mortals and actually has to make money. FLAG is truly an undertaking of the digital age in that it simply couldn’t have been accomplished without the use of computers to manage the data. Evans’s mission was to present FLAG with a final survey report. If he had done it the old-fashioned way, the report would have occupied some 52 linear feet of shelf space, plus several hefty cabinets full of charts, and the inefficiency of dealing with so much paper would have made it nearly impossible for FLAG’s decision makers to grasp everything.

Instead, Evans bought FLAG a PC and a plotter. During the summer of 1994, while the survey data was still being gathered, he had some developers write browsing software. Keeping in mind that FLAG’s investors were mostly high-finance types with little technical or nautical background, they gave the browser a familiar, easy-to-use graphical user interface. The billion data points and the sidescan sonar imagery were boiled down into a form that would fit onto 5 CD-ROMs, and in that form the final report was presented to FLAG at the end of 1994. When FLAG’s decision makers wanted to check out a particular part of the route, they could zoom in on it by clicking on a map, picking a small square of ocean, and blowing it up to reveal several different kinds of plots: a topographic map of the seafloor, information abstracted from the sidescan sonar images, a depth profile along the route, and another profile showing the consistency of the bottom — whether muck, gravel, sand, or hard rock. All of these could be plotted out on meterwide sheets of paper that provided a much higher-resolution view than is afforded by the computer screen.

This represents a noteworthy virtuous circle — a self-amplifying trend. The development of graphical user interfaces has led to rapid growth in personal computer use over the last decade, and the coupling of that technology with the Internet has caused explosive growth in the use of the World Wide Web, generating enormous demand for
bandwidth. That (in combination, of course, with other demands) creates a demand for submarine cables much longer and more ambitious than ever before, which gets investors excited — but the resulting project is so complex that the only way they can wrap their minds around it and make intelligent decisions is by using a computer with a graphical user interface.

 

Hacking wires

 

As you may have figured out by this point, submarine cables are an incredible pain in the ass to build, install, and operate. Hooking stuff up to the ends of them is easy by comparison. So it has always been the case that cables get laid first and then people begin trying to think of new ways to use them. Once a cable is in place, it tends to be treated not as a technological artifact but almost as if it were some naturally occurring mineral formation that might be exploited in any number of different ways.

This was true from the beginning. The telegraphy equipment of 1857 didn’t work when it was hooked up to the first transatlantic cable. Kelvin had to invent the mirror galvanometer, and later the siphon recorder, to make use of it. Needless to say, there were many other Victorian hackers trying to patent inventions that would enable more money to be extracted from cables. One of these was a Scottish-Canadian-American elocutionist named Alexander Graham Bell, who worked out of a laboratory in Boston.

Bell was one of a few researchers pursuing a hack based on the phenomenon of resonance. If you open the lid of a grand piano, step on the sustain pedal, and sing a note into it, such as a middle C, the strings for the piano’s C keys will vibrate sympathetically, while the D strings will remain still. If you sing a D, the D strings vibrate and the
C strings don’t. Each string resonates only at the frequency to which it has been tuned and is deaf to other frequencies.

If you were to hum out a Morse code pattern of dots and dashes, all at middle C, a deaf observer watching the strings would notice a corresponding pattern of vibrations. If, at the same time, a second person was standing next to you humming an entirely different sequence of dots and dashes, but all on the musical tone of D, then a second deaf observer, watching the D strings, would be able to read that message, and so on for all the other tones on the scale. There would be no interference between the messages; each would come through as clearly as if it were the only message being sent. But anyone who wasn’t deaf would hear a cacophony of noise as all the message senders sang in different rhythms, on different notes. If you took this to an extreme, built a special piano with strings tuned as close to each other as possible, and trained the message senders to hum Morse code as fast as possible, the sound would merge into an insane roar of white noise.

Electrical oscillations in a wire follow the same rules as acoustical ones in the air, so a wire can carry exactly the same kind of cacophony, with the same results. Instead of using piano strings, Bell and others were using a set of metal reeds like the ones in a harmonica, each tuned to vibrate at a different frequency. They electrified the reeds in such a way that they generated not only acoustical vibrations but corresponding electrical ones. They sought to combine the electrical vibrations of all these reeds into one complicated waveform and feed it into one end of a cable. At the far end of the cable, they would feed the signal into an identical set of reeds. Each reed would vibrate in sympathy only with its counterpart on the other end of the wire, and by recording the pattern of vibrations exhibited by that reed, one could extract a Morse code message independent of the other messages being transmitted on
the other reeds. For the price of one wire, you could send many simultaneous coded messages and have them all sort themselves out on the other end.
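The scheme the reed experimenters were chasing is what we would now call frequency-division multiplexing, and it can be sketched in a few lines of code. This is a toy illustration, not a reconstruction of Bell's apparatus: the frequencies, cell timing, and detection threshold are all arbitrary choices of mine, and each "hum" is just an on-off keyed sine wave standing in for a Morse pattern.

```python
import math

RATE = 8000          # samples per second
CELL = 0.05          # seconds per Morse "cell"
N = int(RATE * CELL)

def keyed_tone(pattern, freq):
    """On-off key a sine wave: '1' cells carry the tone, '0' cells are silent."""
    out = []
    for cell in pattern:
        amp = 1.0 if cell == "1" else 0.0
        for n in range(N):
            out.append(amp * math.sin(2 * math.pi * freq * n / RATE))
    return out

def detect(signal, freq):
    """Play the role of one tuned reed: per cell, measure the energy at a
    single frequency (a correlation against sin and cos at that frequency)."""
    cells = ""
    for i in range(0, len(signal), N):
        chunk = signal[i:i + N]
        s = sum(x * math.sin(2 * math.pi * freq * n / RATE) for n, x in enumerate(chunk))
        c = sum(x * math.cos(2 * math.pi * freq * n / RATE) for n, x in enumerate(chunk))
        energy = (s * s + c * c) / len(chunk)
        cells += "1" if energy > 0.1 else "0"
    return cells

# Two senders humming different dot-dash patterns on different notes.
msg_c = "101110"   # sender 1, on "middle C" (here simply 520 Hz)
msg_d = "110011"   # sender 2, on "D" (here 580 Hz)

# The wire carries the sum of both waveforms -- the "insane roar."
wire = [a + b for a, b in zip(keyed_tone(msg_c, 520), keyed_tone(msg_d, 580))]

print(detect(wire, 520))   # recovers sender 1's pattern
print(detect(wire, 580))   # recovers sender 2's pattern
```

Each detector, like each reed, responds only to its own frequency, so the two messages come out the far end untangled even though they were superimposed on one wire.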

To make a long story short, it didn’t work. But it did raise an interesting question. If you could take vibrations at one frequency and combine them with vibrations at another frequency, and another, and another, to make a complicated waveform, and if that waveform could be transmitted to the other end of a submarine cable intact, then there was no reason in principle why the complex waveform known as the human voice couldn’t be transmitted in the same way. The only difference would be that the waves in this case were merely literal representations of sound waves, rather than Morse code sequences transmitted at different frequencies. It was, in other words, an analog hack on a digital technology.

We have all been raised to think of the telephone as a vast improvement on the telegraph, as the steamship was to the sailing ship or the electric lightbulb to the candle, but from a hacker tourist’s point of view, it begins to seem like a lamentable wrong turn. Until Bell, all telegraphy was digital. The multiplexing system he worked on was purely digital in concept even if it did make use of some analog properties of matter (as indeed all digital equipment does). But when his multiplexing scheme went sour, he suddenly went analog on us.

Fortunately, the story has a happy ending, though it took a century to come about. Because analog telephony did not require expertise in Morse code, anyone could take advantage of it. It became enormously popular and generated staggering quantities of revenue that underwrote the creation of a fantastically immense communications web reaching into every nook and cranny of every developed country.

Then modems came along and turned the tables. Modems are a digital hack on an analog technology, of course; they take the digits from your computer and convert them into a complicated analog waveform that can be transmitted down existing wires. The roar of white noise that you hear when you listen in on a modem transmission is exactly what Bell was originally aiming for with his reeds. Modems, and everything that has ensued from them, like the World Wide Web, are just the latest example of a pattern that was established by Kelvin 140 years ago, namely, hacking existing wires by inventing new stuff to put on the ends of them.

It is natural, then, to ask what effect FLAG is going to have on the latest and greatest cable hack: the Internet. Or perhaps it’s better to ask whether the Internet affected FLAG. The explosion of the Web happened after FLAG was planned. Taketo Furuhata, president and CEO of IDC, which runs the Miura station, says: “I don’t know whether Nynex management foresaw the burst of demand related to the Internet a few years ago — I don’t think so. Nobody — not even AT&T people — foresaw this. But the demand for Internet transmission is so huge that FLAG will certainly become a very important pipe to transmit such requirements.”

John Mercogliano, vice president — Europe, Nynex Network Systems (Bermuda) Ltd., says that during the early 1990s when FLAG was getting organized, Nynex executives felt in their guts that something big was going to happen involving broadband multimedia transmission over cables. They had a media lab that was giving demos of medical imaging and other such applications. “We knew the Internet was coming — we just didn’t know it was going to be called the Internet,” he says.

FLAG may, in fact, be the last big cable system that was planned in the days when people didn’t know about the Internet. Those days were a lot calmer in the global
telecom industry. Everything was controlled by monopolies, and cable construction was based on sober, scientific forecasts, analogous, in some ways, to the actuarial tables on which insurance companies predicate their policies.

When you talk on the phone, your words are converted into bits that are sent down a wire. When you surf the Web, your computer sends out bits that ask for yet more bits to be sent back. When you go to the store and buy a Japanese VCR or an article of clothing with a Made in Thailand label, you’re touching off a cascade of information flows that eventually leads to transpacific faxes, phone calls, and money transfers.

If you get a fast busy signal when you dial your phone, or if your Web browser stalls, or if the electronics store is always low on inventory because the distribution system is balled up somewhere, then it means that someone, somewhere, is suffering pain. Eventually this pain gets taken out on a fairly small number of meek, mild-mannered statisticians — telecom traffic forecasters — who are supposed to see these problems coming.

Like many other telephony-related technologies, traffic forecasting was developed to a fine art a long time ago and rarely screwed up. Usually the telcos knew when the capacity of their systems was going to be stretched past acceptable limits. Then they went shopping for bandwidth. Cables got built.

That is all past history. “The telecoms aren’t forecasting now,” Mercogliano says. “They’re reacting.”

This is a big problem for a few different reasons. One is that cables take a few years to build, and, once built, last for a quarter of a century. It’s not a nimble industry in that way. A PTT thinking about investing in a club cable is making a 25-year commitment to a piece of equipment that will almost certainly be obsolete long before it
reaches the end of its working life. Not only are they risking lots of money, but they are putting it into an exceptionally long-term investment. Long-term investments are great if you have reliable long-term forecasts, but when your entire forecasting system gets blown out of the water by something like the Internet, the situation gets awfully complicated.

The Internet poses another problem for telcos by being asymmetrical. Imagine you are running an international telecom company in Japan. Everything you’ve ever done, since TPC-1 came into Ninomiya in ‘64, has been predicated on circuits. Circuits are the basic unit you buy and sell — they are to you what cars are to a Cadillac dealership. A circuit, by definition, is symmetrical. It consists of an equal amount of bandwidth in each direction — since most phone conversations, on average, entail both parties talking about the same amount. A circuit between Japan and the United States is something that enables data to be sent from Japan to the US, and from the US to Japan, at the same rate — the same bandwidth. In order to get your hands on a circuit, you cut a deal with a company in the States. This deal is called a correspondent agreement.
