
One of the last major challenges for children is “operational thinking,” the ability to reason abstractly. At early ages we learn to manipulate objects in our environment, and even organize and classify them. Eventually we learn to use symbols for these objects, and when we get really advanced we do the same for things we can't see or touch, like numbers. Children who call a pet dog a “cat” or say “hi” to a person in a photograph—both are attempts at humor—are essentially breaking newly learned rules about abstract names referring to concrete things. They're playing with the fact that the representations are different than the objects themselves.

Development doesn't end in childhood, of course. It extends throughout the life span, meaning that humor preferences change later in life too. As most of us have learned through personal experience, one key aspect of getting older is that we lose cognitive flexibility. It becomes harder to learn new things and to approach new situations with open, flexible minds. Another consequence is that we stop caring what other people think, which can have a sizable impact on our sense of humor.

To explore why this is the case, let's consider another study conducted by the German psychologist Willibald Ruch. This one was his biggest yet, examining more than four thousand subjects ranging from age fourteen to sixty-six. Ruch started by giving his subjects a sense-of-humor questionnaire, which divided humor into two types: “incongruity humor,” which involves the traditional surprise and resolution stages described earlier, and “nonsense humor,” which also involves incongruity but leaves the resolution stage unresolved for the sake of the ridiculous. We've seen this kind of joke already:
Why did the elephant sit on the marshmallow? Because she didn't want to fall in the hot chocolate.

After measuring the subjects' preferences for these two humor types and administering additional personality assessments, Ruch analyzed the data to determine whether these preferences changed with age. They did. Not surprisingly, he found that as people get older, they like nonsense humor less and incongruity humor more—probably because, by a certain age, we all come to expect things to make sense.

But the most interesting finding emerged when Ruch compared these results with conservatism, which he'd also assessed. Conservatism is a difficult thing to measure, as you might expect, so Ruch was forced to create his own test. Composed of items drawn from other personality assessments, covering traditional family ideology, the liberal upbringing of children, and orientation toward work, his test measured how averse the subjects were to change and how traditional they were in their social outlook. Ruch found that age differences in humor are strongly correlated with conservatism: the more people disliked nonsense humor, the more conservative their beliefs.

This effect was quite strong, accounting for 90 percent of the variance in incongruity-humor liking and 75 percent for nonsense-humor disliking. In fact, it was strong enough to suggest that taste in humor is largely driven by conservatism alone.

Before writing this book I would never have guessed that our brains have an optimal age for humor. It has been said that children are fools if they are not liberals, just as adults are fools if they are not conservatives—and this may very well be true, at least in terms of brain plasticity. Young brains are flexible and open, leading to an affinity for liberalism and elephant jokes. Conflict is less of a problem for children than for adults because it helps them grow and learn. But as we get older, our perspectives alter. Change becomes less welcome, as does absurdity, and learning becomes less important than making things fit. It's not a happy thought, at least for those of us in that second group, but it's an important one to recognize.

Indeed, by revealing so much about ourselves, humor may be the best way of learning who we really are. It's an intriguing idea, one that will get further attention in the next chapter. Except, next we won't be singling out women, children, or conservative adults. Instead, we'll be looking at individuals who don't have any brains at all.

5

OUR COMPUTER OVERLORDS

The question of whether computers can think is like the question of whether submarines can swim.

—Edsger W. Dijkstra

“This was to be an away game for humanity.” So spoke Ken Jennings, author, software engineer, and holder of the longest winning streak on the television show Jeopardy! He had been invited by producers of the show to compete against, of all things, a computer, which IBM had developed as part of its artificial intelligence research program. It seemed like an intriguing idea, at least until he entered the auditorium where he would compete and saw that the entire crowd was against him. Rather than filming in its usual Los Angeles location, the show had been transported to Westchester County, New York, the site of IBM's research labs. As soon as the lights went on, the audience cheered. But they weren't cheering for their own species. They were rooting for the competition.

“It was an all-IBM crowd: programmers, executives. Stockholders all!” said Jennings. “They wanted human blood. It was gladiatorial out there.”

The challenge was daunting; Watson was a marvel of engineering, and everyone knew it. Built from ninety clustered IBM Power 750 servers, each running thirty-two massively parallel Power7 processor cores, Watson held more than 16 terabytes of memory. That's a 16 followed by twelve zeros, counted in bytes. And it operated at more than 80 teraflops, which meant that it could perform 80 trillion operations—per second. In short, it was built to hold its own against whatever combination of water, salt, and proteins its competitors threw at it.
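For readers who want the unit arithmetic spelled out, here is a minimal sketch in Python, assuming decimal SI prefixes (one terabyte as 10^12 bytes, one teraflop as 10^12 floating-point operations per second):

```python
# Back-of-the-envelope check of the figures quoted above, assuming
# decimal SI prefixes: 1 terabyte = 10**12 bytes and 1 teraflop =
# 10**12 floating-point operations per second.

memory_bytes = 16 * 10**12     # 16 terabytes: a 16 followed by twelve zeros, in bytes
ops_per_second = 80 * 10**12   # 80 teraflops: 80 trillion operations every second

print(f"{memory_bytes:,} bytes of memory")          # 16,000,000,000,000
print(f"{ops_per_second:,} operations per second")  # 80,000,000,000,000
```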

Despite Watson's power, the historical advantage still belonged to humans. IBM had developed Watson to compete on Jeopardy! because this is exactly the arena where computers typically fail. Watson may have had incredible computing power, but the game of Jeopardy!—like life—is messy. Winning takes not just real-world knowledge but the ability to recognize irony, slang, puns, pop-culture references, and all sorts of other complexities. But it also requires knowing what you don't know. In other words, you can't just guess at every opportunity, because penalties for errors add up.
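The arithmetic behind that caution is easy to sketch: a wrong response on Jeopardy! costs the full value of the clue, so buzzing in only pays off, on average, when you are more likely right than wrong. The snippet below is purely illustrative, with made-up confidence numbers; it is not IBM's actual strategy logic.

```python
# Why indiscriminate guessing backfires on Jeopardy!: a wrong response
# subtracts the clue's value, so the expected payoff of buzzing in is
# positive only when you are more likely right than wrong.
# (Illustrative sketch only, not IBM's actual strategy.)

def expected_value(confidence: float, clue_value: int) -> float:
    """Expected dollar change from answering: win the value if right, lose it if wrong."""
    return confidence * clue_value - (1 - confidence) * clue_value

def should_buzz(confidence: float, clue_value: int) -> bool:
    """Buzz in only when the expected payoff is positive."""
    return expected_value(confidence, clue_value) > 0

for confidence in (0.30, 0.50, 0.80):
    value = 1000
    print(confidence, expected_value(confidence, value), should_buzz(confidence, value))
# 0.30 -> -400.0, stay quiet; 0.50 -> 0.0, stay quiet; 0.80 -> +600.0, buzz
```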

Consider, for instance, the sentence “I never said she stole my money,” which the IBM engineers offered as an example of the kind of ambiguity for which humans are specialized. There are literally seven different meanings those words can convey, an impressive number given that the sentence contains only seven words. If you don't believe me, read it out loud yourself, each time emphasizing a different word. All it takes is an inflection here or a change of stress there, and the entire intention is changed. Recognizing this kind of ambiguity is something humans do with ease, but computers—well, let's just say that computers don't like to be confused.
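To make the exercise concrete, here is a small Python sketch that prints the sentence seven times, stressing a different word each time. The glosses are my own paraphrases of the implied meanings, not anything from IBM.

```python
# The seven-word sentence cited by the IBM engineers. Stressing a
# different word each time implies a different meaning; the glosses
# below are my own paraphrases, added for illustration.

sentence = "I never said she stole my money".split()

glosses = [
    "someone else may have said it",
    "at no point did I say it",
    "I may have implied it, but I never said it",
    "I said someone stole it, just not her",
    "she did something with it, but not stealing",
    "she stole someone's money, just not mine",
    "she took something of mine, but not my money",
]

for i, gloss in enumerate(glosses):
    stressed = " ".join(w.upper() if j == i else w for j, w in enumerate(sentence))
    print(f"{stressed:40} -> {gloss}")
```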

After the first day of competition, Jennings was doing relatively well against both Watson and Brad Rutter, the other human contestant. At one point Rutter, who owned the distinction of having won the most money in the show's history, was tied with Watson at $5,000. Jennings had $2,000. Then things got out of hand.

Rather than being stumped by vague or confusing clues, Watson thrived on them. It knew that “The ancient Lion of Nimrud went missing from this city's national museum in 2003” meant “Baghdad” and that “An etude is a composition that explores a technical musical problem; the name is French for this” referred to “study.” Granted, it also made mistakes—for example, when it gave “Toronto” as an answer for the category of “US cities.” But the gaffes were minimal and Watson won handily with $35,734, compared to Rutter's $10,400 and Jennings's $4,800.

The second match, which was to be aired on the final night of the competition, February 16, 2011, removed any question as to who the new Jeopardy! champion would be. By the time the contestants reached the last round, which always ends with a wager on a single, final clue, Watson had a significant lead. The final question, “What novel was inspired by William Wilkinson's An Account of the Principalities of Wallachia and Moldavia,” was answered correctly by all three contestants (Bram Stoker's Dracula), but it didn't matter. Watson had already won the match, though Jennings had one last surprise inside him.

Below his final answer he wrote: “I, for one, welcome our new computer overlords.”

This was a play on a classic line from an episode of The Simpsons in which a clueless news anchor, believing that the earth has been taken over by a master race of giant space ants, decides to suck up to his new bosses. “I, for one, welcome our new insect overlords,” he says. “And I'd like to remind them that as a trusted TV personality, I can be helpful in rounding up others to toil in their underground sugar caves.”

Jennings may have lost the match, but he won several hearts—especially when he took an extra jab at the computer that had just defeated him: “Watson has lots in common with a top-ranked Jeopardy! player. It's very smart, very fast, speaks in an even monotone, and has never known the touch of a woman.”

In fact, Jennings did something that Watson could never do—he cracked a joke. Using its massive computing power Watson was able to overcome the problem of ambiguity, but it couldn't tell a joke because jokes require not just recognizing ambiguity but exploiting it too. That's a lot to ask, even from such a powerful machine as Watson.

In today's world, there's almost nothing computers can't do. They help to fly our planes, drive our cars, and even give medical diagnoses. One of the last things we thought computers could do was deal with ambiguity the way humans can—which is why Watson's accomplishment was so impressive. Contrast this with Deep Blue's defeat of chess grandmaster Garry Kasparov in May 1997. Deep Blue could examine some 200 million chess positions every second during the three minutes allocated for each move, but it didn't have to deal with messy things like language. Chess, though complex, is still a well-defined problem: there's never any doubt regarding the purpose of the game or what the next potential moves are.
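To see why a well-defined game is so much friendlier to a machine than natural language is, consider how little it takes to search one exhaustively. The sketch below is a generic minimax search applied to a toy game (Nim, where players remove one to three stones and whoever takes the last stone wins); it is only an illustration of game-tree search in general, not Deep Blue's far more sophisticated engine.

```python
# Chess-style search is tractable because the game is fully specified:
# the legal moves and the winning condition are never ambiguous. Below is
# a generic minimax search applied to a toy game (Nim: take 1-3 stones,
# whoever takes the last stone wins). An illustration of exhaustive
# game-tree search, not Deep Blue's actual algorithm.

def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the first player can force a win from here, -1 otherwise."""
    if stones == 0:
        # The previous player just took the last stone and won.
        return -1 if maximizing else 1
    moves = [take for take in (1, 2, 3) if take <= stones]
    scores = [minimax(stones - take, not maximizing) for take in moves]
    return max(scores) if maximizing else min(scores)

for n in range(1, 9):
    winner = "first player" if minimax(n, True) == 1 else "second player"
    print(f"{n} stones: {winner} wins with perfect play")
# Multiples of 4 are losses for the player to move; everything else is a win.
```

Because every legal move and every outcome can be enumerated, the program never has to wonder what a position means; natural language offers no such guarantee.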

Yet both Watson and Deep Blue highlight the importance of flexible thinking. That flexibility was seen in Watson's ability to weigh subtle shades of meaning and make reasonable guesses among competing linguistic interpretations, and also in Deep Blue's surprising chess moves. Flexibility is key, especially for activities that computers typically struggle with, such as being creative. Writing sonnets, composing symphonies, telling jokes—these are things computers will never be able to do. Or will they?

Consider Game Two of the 1997 match between Kasparov and Deep Blue, which ended with a win for the computer. About thirty moves into the game, Kasparov realized he was in trouble and decided to sacrifice a pawn. Taking that pawn would have given Deep Blue a distinct advantage. Every chess-playing program ever created would have taken it, and so would most chess masters. There were no obvious drawbacks to the move. Yet Deep Blue rejected the bait. Instead, it moved its queen to “b6,” a position with less immediate benefit. But the move also disrupted Kasparov's attempt at a comeback—a ploy that shocked Kasparov so deeply that he claimed humans must have intervened. There was no way a computer, or anyone less than a grandmaster, could have seen what he was planning and countered so effectively. Computers simply aren't that creative.
