Evil Geniuses: The Unmaking of America: A Recent History

Our antitrust authorities have gotten out of the habit of being aggressive. As it happens, the last epic antitrust case was one brought twenty-two years ago to stop the newest computer monopolist at the time from crushing smaller competitors in its quest to dominate the suddenly commercializing Internet.

In the months right after the government filed that antitrust case, United States v. Microsoft, Microsoft and the tiny Menlo Park start-up Google had both launched search engines, Amazon announced it would start selling things other than books—and Charles Koch’s libertarian Cato Institute held a conference in San Jose called “Washington, D.C., vs. Silicon Valley.” Eighty-six-year-old Milton Friedman was a featured speaker.

“When I started in this business,” Friedman told the attendees, “as a believer in competition, I was a great supporter of antitrust laws. I thought enforcing them was one of the few desirable things that the government could do to promote more competition.” But Friedman said his “views about the antitrust laws have changed greatly over time” because, he claimed, the government had never enforced them aggressively enough. So disingenuous, and so typical of libertarians who really don’t believe in competition as much as in disapproving of government and letting big business have its way. “I have gradually come to the conclusion that antitrust laws do far more harm than good and that we would be better off if we didn’t have them at all, if we could get rid of them.”

Then Friedman turned to the pending federal case whose outcome meant everything to the venture capitalists and entrepreneurs in that audience in 1998. Most and possibly all of them were rooting for Microsoft to lose.

Is it really in the self-interest of Silicon Valley to set the government on Microsoft? Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be. Here again is a case that seems to me to illustrate the suicidal impulse of the business community.

Three decades after his Friedman Doctrine condemned decent executives for indulging a “suicidal impulse” by pledging their companies would try to serve the public good, he was still at it: businesspeople who were on the side of fairness, whether voluntary or required by government, were by definition enemies of capitalism. In fact, that Microsoft case did not drag on too long—in 2001 the new Republican administration settled it. The company got to keep its huge existing software monopolies (operating systems, word processing, spreadsheets), but it was required to refrain from killing off rival Internet browsers and other online competitors.

Thanks to that timely government intervention at the turn of this century, Internet start-ups flourished freely. MSN Search and its successor Bing didn’t crush their superior rival Google, and Microsoft’s partner MySpace didn’t crush its superior rival Facebook. And yet thanks to the government’s permissive, hands-off attitude after that and ever since, Google and Facebook have become anticompetitive monopolies far more ubiquitous, momentous, unorthodox, and problematic than Microsoft ever got to be.

The thing is, while the premise of capitalism is that fresh competition always drives innovation which drives endless creative destruction, the last thing a successful entrenched capitalist wants is innovative competition that might make his business the next one creatively destroyed. In order to maintain their monopolies and absolute market power, Google and Facebook buy up their competitors and potential competitors, the earlier the better. Google acquired YouTube when it was only a year old, and since 2010 it has bought some other company every month, on average. Consider Google’s history with one competitor over the last decade: At first it paid Yelp to feature its reviews, then tried but failed to acquire the company. Then it swiped Yelp’s content for its own listings until it agreed with federal regulators to stop—but then, according to Yelp, it resumed its systematic thievery. The government gave a pass to Facebook to buy Instagram and WhatsApp in the 2010s, which was as if they’d let CBS buy NBC and ABC in the 1940s. The several Facebook social media brands have five times as many American users as the biggest platform it doesn’t own, Twitter. Google does more than 90 percent of all Internet searches, forty times as many as its closest competitor. As far as venture capitalists are concerned, digital start-ups looking to compete with the Internet colossi, no matter how amazing their new technology or service or vision, ultimately have two alternatives: to be acquired by Google or Facebook, or to be destroyed by Google or Facebook. If the latter seems likelier—being in the “kill zone,” in the Valley term of art—the VCs tend not to invest in the first place.

So now Google and Facebook occupy a dreamy capitalist nirvana, previously unimaginable, good for them but not ultimately for the rest of us or for capitalism. They are utilities, but unlike their forerunners—natural monopolies like water, gas, and electric companies—they’re completely private as well as almost entirely unregulated, and the markets they monopolize are national (and global). Still more remarkably, Google and Facebook are also ad-supported media companies that are vastly larger and more profitable than any media company ever. Like broadcasters in the last century and most media companies today, they provide things to viewers and listeners and readers for free—but unlike other such companies, they’re allowed to disclaim all responsibility for what they publish and actually get most of what they publish for free themselves.*2


That is, for “free,” because somebody is always paying.

“On the one hand,” the great countercultural and early-computer-world figure Stewart Brand said on a stage in 1984 at a Marin County computer conference he’d organized,

information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.

It was a brilliant insight, at a time when almost nobody except a few geeks even knew of the Internet. But the enduring takeaway from Brand’s talk was just the five-word meme, Information wants to be free. It was swiftly turned from a lovely philosophical idea into a utopian demand, and then, as the whole world went online, a universal literal expectation that nobody should have to pay for information, or to read or watch or listen to anything on a computer screen.

Information does indeed want to be free, thanks to computers and the Internet and software. But information started wanting to be free five hundred years ago, when an early machine, the printing press, made cheap books possible, then made newspapers possible. Three hundred years later, steam-powered printing presses made much cheaper books and cheaper newspapers possible, and two hundred years after that, with digital technology, the cost approached zero, de facto free.

So the larger truth here is that machines want information to be free.

But of course, it wasn’t just printing and information. The industrial revolution was about powered machines enabling people to produce more and more things more quickly and cheaply, over the last two centuries enabling a remarkable, continual increase in prosperity (and, with luck and political will, social progress).

So if information wants to be free, that’s just a particular digital-age instance of machines want everything to be free—including the cost of every kind of work. Machines want to do all the jobs.

In this century, as computers and AI become ever more powerful and ever cheaper, we’re seeing machines get ever closer to their goal, so to speak, of doing all the jobs. We’ve already moved from the exurbs to the outskirts of that science fiction destination. So the 64-quadrillion-dollar question now is, what happens after the machines’ mission is accomplished, and most of us are economically redundant? What is the machines’ ultimate goal? Do they want to enrich all of us, or to immiserate most of us? To be our willing slaves or to enslave us? Of course, it isn’t actually up to the machines. It’s up to us.

*1 Throughout his life, Leontief was ahead of his time, not just intellectually. He emigrated from the USSR to Germany right before Stalin took over, and from Germany to the United States right before Hitler took over. In 1975, at sixty-nine, he became so “disenchanted” with the Harvard economics department’s paucity of female and nonwhite and leftist faculty—although he was none of those—that he quit and moved to New York University.

*2 Although Amazon is Internet-based and very powerful, it is not a new corporate species like Google and Facebook. Antitrust enforcers of the old school—and maybe those of a new school opening soon—would have kept companies like Amazon (and Apple and Netflix) from making products that competed directly with the ones made by other companies that they only distributed. But Amazon really just does what Sears started doing in the 1890s (and Walmart in the 1960s), except with an electronic mail-order catalog. For now, online remains a small fraction of all retail, and even Amazon’s share of online retail, around 40 percent, is a fraction of Google’s and Facebook’s market shares in search and social media and digital advertising.

“For generations they’ve been built up to worship competition and the market, productivity and economic usefulness, and the envy of their fellow men,” a character in the superautomated future says about his fellow Americans in Kurt Vonnegut’s 1952 novel Player Piano. “And boom! It’s all yanked out from under them. They can’t participate, can’t be useful any more. Their whole culture’s been shot to hell.”

AI-powered technologies will surely continue to reduce the number of jobs it makes economic sense for Americans to do; the question is how fast that happens. Predicting the impacts of technology on work and prosperity and general contentment isn’t a strictly technological question. It’s an interlaced set of technological and economic and political and cultural questions—because yes, our economy, like every economy, is a political economy, continually shaped both by the available technologies and by the changing political and cultural climate.

During the last twenty years, for instance, new technology enabled but did not require a global digital-information duopoly consisting of Google and Facebook. As consumers and citizens, we and our governments let it happen. Over the next twenty years, we will, for instance, choose at what rate and to what extent driverless vehicles eliminate the jobs of the millions of Americans who drive trucks and cars and other vehicles for a living. How technology is used and whose well-being it improves and wealth it increases doesn’t just happen, like acts of God. Societies choose. Might the experiences of 2020 transform Americans’ understanding of what government is for, as the Great Depression and New Deal did? Won’t social distancing and fear of the next novel virus accelerate the automation and general digitalization of work?

In any event, what we know is that going forward, “the choice isn’t between automation and non-automation,” says Erik Brynjolfsson, one of the MIT economists focused on digital technology and work. “It’s between whether you use the technology in a way that creates shared prosperity, or more concentration of wealth.” We will presumably have an economy that keeps growing overall—that could start growing faster, maybe much faster—with people doing less and less of the necessary work. If and when “machines make human labor superfluous, we would have vast aggregate wealth,” the MIT economist David Autor has written, “but a serious challenge in determining who owns it and how to share it. Our chief economic problem will be one of distribution, not scarcity.”

So just where American society winds up on the spectrum between dystopia and utopia is on us, on how we adapt politically and culturally starting now. We’ll either allow the fundamental choices to be made privately, by and for big business and the rich, or else we’ll struggle our way through to progress, adapting our economic system to optimize the bounty—things and services and free time—for all Americans.

Here’s the basic menu, with five options ranging from worst to bad to better to best.

POSSIBLE AMERICAN FUTURES

                                         winners take all     share the wealth
  familiar technology & current growth   BAD                  BETTER (like the Nordics now)
                                                              or WORSE (like Venezuela now)
  AI-boosted prosperity                  WORSE                BEST

After recovering from the Pandemic Recession (or Depression), if the United States drifts along as it had been, sometimes growing 2 percent or more a year, no game-changing new industrial technologies, no fundamental changes in the political economy, we will get the future of the top-left quadrant—familiar, grim, bad, like Minority Report, or Pottersville with Ernie the cab driver replaced by a self-driving Uber.

The option on the uppermost right is the necessary—but immensely challenging—option for a much better future. We continue with the same slow-to-moderate economic growth that we (and the rest of the developed world) have had for decades, but we re-reengineer our economy to make it more like those elsewhere in the developed world—that is, restoring or improving upon the fairer sharing of national wealth we had in America before 1980, emulating Canada and the Nordics.

But then there’s the other, probably equally plausible, but terrible option in that same corner: if we fail to reform our government and economy along sensible democratic lines, we might well experience a large-scale populist spasm involving pitchforks and hangman nooses, as that AIG CEO put it, America becoming the first large modern society to go from fully developed to failing.

Another possible awful American future is the lower-left quadrant, where quantum computing and robots and miraculous nanotech molecular assembly are doing and making nearly everything, but inequality of income and wealth and power becomes even more extreme than it is now—a small ruling elite presiding over what the historian and futurist Yuval Noah Harari calls the “useless class,” what the Silicon Valley entrepreneur and investor Martin Ford calls “digital feudalism,” Elysium without that movie’s happy ending.

And finally, there’s the best of all plausible worlds—amazing machines, more than enough stuff that our new, optimal social democracy divides fairly, more or less Earth as on Star Trek or in the redemptive finale of WALL-E.

In 1930—just after the word robot was invented, just as Aldous Huxley was imagining the dystopia of Brave New World and just before H. G. Wells depicted the utopia of The Shape of Things to Come—their friend John Maynard Keynes saw the economic future.*1 “We are being afflicted with a new disease,” he wrote in a speculative essay called “Economic Possibilities for Our Grandchildren,” a disease of which “readers will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.” It would become a bigger and bigger problem, the founder of macroeconomics warned, and in about a century—that is, around 2030—it would finally require a major rethink of how we organize economies.

As I’ve said, plenty of intelligent people today have taken it on faith that as the postindustrial age keeps rolling, the problem of disappearing jobs will somehow sort itself out. They point out that good new jobs replaced the ones rendered economically moot by steam in the 1800s, and good new jobs replaced the ones rendered economically moot for much of the 1900s, and now history will repeat, because free markets, the invisible hand, well-paying new jobs we can’t even imagine, blah blah blah blah blah. It’s pretty to think so, but it seems unlikely. I’ve not seen a single plausible same-as-it-ever-was scenario with the particulars of how that might happen this time, absent fundamental changes in our political economy. Nor do Americans buy it: a 2017 Pew survey asked people if, in a near future where “robots and computers perform many of the jobs currently done by humans,” they thought that “the economy will create many new, better-paying human jobs,” and by 75 to 25 percent they said they did not.

The expert consensus is striking. Among the several MIT professors who lead the field studying the effects of automation on work is Daron Acemoglu, an economist and political historian. “In the standard economic canon,” he said recently, “the proposition that you can increase productivity and harm labor is bunkum.” But “this time is different,” he says, because “unlike previous transformations of the economy, the demand for labor is not rising fast enough.”

“Exponential progress is now pushing us toward the endgame” of the last two centuries, writes Martin Ford in The Rise of the Robots: Technology and the Threat of a Jobless Future. “Emerging industries will rarely, if ever, be highly labor-intensive. In other words, the economy is likely on a path toward a tipping point where job creation will begin to fall consistently short of what is required to fully employ the workforce.”

Even Larry Summers—supermainstream economist, enthusiastic Wall Street deregulator in the 1990s, former treasury secretary, ultimate neoliberal—has thought for a while that we’ve entered a whole new zone, not the end of economic history but definitely the beginning of an unprecedented economic future. When he delivered the annual Martin Feldstein Lecture at Harvard for economists in 2013, he called his talk “Economic Possibilities for Our Children,” playing off Keynes’s 1930 essay. “When I was an MIT undergraduate in the early 1970s,” he said, every

economics student was exposed to the debate about automation. There were two factions in those debates….The stupid people thought that automation was going to make all the jobs go away and there wasn’t going to be any work to do. And the smart people understood that when more was produced, there would be more income and therefore there would be more demand. It wasn’t possible that all the jobs would go away, so automation was a blessing….I’m not so completely certain now.

To Summers, “the prodigious change” in the political economy wrought by computers and the way we use them looks “qualitatively different from past technological change.” From here on out, “the economic challenge will not be producing enough. It will be providing enough good jobs.” And soon “it may well be that some categories of labor will not be able to earn a subsistence income.”

One of his key facts referred to the fraction of American men between twenty-five and fifty-four, prime working age, who weren’t doing paid work and weren’t looking for work—a percentage at the beginning of 2020 about three times as large as it was when those fifty-four-year-olds were born. Before the troubles of 2020, the fraction of men without college degrees who weren’t working or looking for work had doubled just since the 1990s, to 20 percent. Half said they had some disabling condition, one of the most common being “difficulty concentrating, remembering, or making decisions.”

In 2019 about 7 million prime-age American men had opted out of the labor force. Only a third of them were among the nearly 9 million young and nonelderly former workers getting monthly disability payments from the government. The number of workers on disability tripled in just two decades, from the early 1990s to the early 2010s, especially among the middle-aged, not as a result of looser eligibility rules but simply, it seems, because there are so few jobs for people who have neither youth nor education nor any other way to make ends meet until they hit sixty-six and can collect regular Social Security. So disabled or “disabled,” they receive $15,000 a year on average, the equivalent of a full-time minimum-wage job. Of all the Americans in their fifties who have only a high school degree or not even that, between 10 and 20 percent now receive these nonuniversal basic incomes. The government officially calls them disability “awards.”*2 Not coincidentally, this is the same American cohort who, over the same period of time, started killing themselves more than ever before, deliberately and otherwise, with liquor, drugs, guns. Among white people in their forties and fifties without college degrees, such “deaths of despair” increased by 150 percent from 2000 to 2017.
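As a back-of-the-envelope check on that minimum-wage equivalence (a sketch only; the $7.25 federal minimum wage and a standard 2,080-hour full-time year are assumptions supplied here, not figures from the text):

$$
\$7.25/\text{hour} \times 40\ \text{hours/week} \times 52\ \text{weeks} = \$15{,}080 \approx \$15{,}000\ \text{a year}
$$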

Lately, the people running big corporations have been admitting, almost accidentally, that the shape of things to come includes next to no workers. “I think the longer-term solution to addressing a lot of these labor costs,” the chief financial officer of Nike said in 2013, just before he retired and joined the board of the San Francisco tech company Dropbox, “has really been engineering the labor out of the product, and that really is with technology and innovation” (emphasis added).

A few years ago the founder and operator of the annual weeklong convocation of masters of the universe known as the World Economic Forum in Davos, Switzerland, started using “the Fourth Industrial Revolution” to describe what’s happening. It stuck, and in 2019 at Davos it was a main topic for the three thousand CEOs and bankers (and government officials and consultants and academics and journalists), a third of them American. The technology reporter Kevin Roose wrote a bracingly honest account in The New York Times called “The Hidden Automation Agenda of the Davos Elite.” In the public panel discussions and on-the-record interviews, he wrote,

executives wring their hands over the negative consequences that artificial intelligence and automation could have for workers….But in private settings…these executives tell a different story: They are racing to automate their own work forces to stay ahead of the competition, with little regard for the impact on workers….They crave the fat profit margins automation can deliver, and they see A.I. as a golden ticket to savings, perhaps by letting them whittle departments with thousands of workers down to just a few dozen.

The president of Infosys, a big global technology services and consulting company, told Roose at Davos that their corporate clients used to have “incremental, 5 to 10 percent goals in reducing their work force,” in other words shrinking them by half or more over the next decade. “Now they’re saying” about their near futures, “ ‘Why can’t we do it with 1 percent of the people we have?’ ”
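The compounding behind that gloss, for the record (a sketch; treating the “5 to 10 percent” goals as annual cuts repeated for a decade is my reading, not stated in the text):

$$
0.95^{10} \approx 0.60, \qquad 0.90^{10} \approx 0.35
$$

That is, ten years of such “incremental” goals would leave roughly 35 to 60 percent of the original head count—a shrinkage of half or more toward the aggressive end of the range.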
