The Thing with Feathers
Noah Strycker

The Mornington study showed that this is mostly the case for cooperative purple-crowned fairy-wrens. When researchers examined the family trees of helpers, they found that 60 percent of them live with both of their parents and 90 percent with at least one parent. Helpers bringing extra grub to a nest are usually feeding their own younger brothers and sisters. The birds are generally monogamous, so eggs in any given nest can be assumed to share genes with the two adults—and additional helpers—who attend it. DNA testing confirms these relationships.

It seems kind of obvious that you’d want to help your relatives more than strangers, which is technically called “kin selection.” Familiarity is a complicating factor; you’re more apt to aid a friend than a stranger even though neither shares your genes, and family is generally most familiar because you live together. Still, you’d probably include a distant family member in your last will and testament before a complete stranger. And you might prefer to donate a kidney to a family member over a good friend. As they say, blood is thicker than water. Most examples of apparent altruism in animals take place between genetic relatives.

Accordingly, scientists have found that fairy-wrens are more likely to help a nest with chicks that share their genes, and the fewer genes they share, the less help they give. The British geneticist J. B. S. Haldane had a firm grasp on this concept decades ago; when asked if he’d sacrifice his life for a drowning friend, he quipped, “No, but I would to save two brothers or eight cousins”—referring to the fact that each brother shares 50 percent of your genes and each cousin only 12.5 percent.
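For anyone inclined to check Haldane’s arithmetic, here is a quick sketch in Python using the standard relatedness figures quoted above:

```python
# Haldane's quip as arithmetic, using the relatedness figures quoted above:
# a full brother shares half your genes, a first cousin one-eighth.
relatedness = {"brother": 0.5, "cousin": 0.125}

print(2 * relatedness["brother"])  # 1.0 -> two brothers add up to one genetic "you"
print(8 * relatedness["cousin"])   # 1.0 -> and so do eight cousins
```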

But some purple-crowned fairy-wren helpers aren’t related at all to those they support. Young, dispersing birds sometimes drift into an area, somehow convince a pair of completely unrelated adults to take them in, and begin helping those adults feed their chicks. Such cases are particularly interesting because they can’t be explained by kin selection. If those helpers aren’t gaining any genetic benefit, then why are they being so generous?


TO UNDERSTAND FAIRY-WRENS and cooperation in general, it can be helpful to forget that they are birds, and instead treat fairy-wrens as generic, logical beings in a strategic competition. Imagine that survival is just a game in which the birds are individual players. They can decide either to cooperate with one another or not in various situations, and each of those decisions will affect their ultimate success. To win the game, with the highest chance of passing on their own genes, the birds must choose a perfect strategy, so that when they cooperate with one another, they score more points than they would if they had decided to strike out on their own—and they don’t expose themselves to unnecessary risks when cooperation doesn’t pay off.

Looked at this way, cooperative nesting in fairy-wrens can be distilled into a problem of game theory, the study of strategic decision making. This assumes that there is such a thing as a perfect survival strategy for fairy-wrens, that some strategies are better than others, and that real life can be represented as a logic puzzle at all. But game theory, which has been well studied by contemporary mathematicians, can tell us a lot about cooperation, from global wars to cancer cells, and might illuminate bird behavior as well.

Logically, the decision to cooperate or not can be trickier than it sounds. Sometimes, short-term rewards are weighted against cooperation even when working together would pay off better. This is illustrated by a classic strategy problem known as the prisoner’s dilemma.

Imagine that you have been arrested for robbing a bank along with a close friend. The police lock you and your accomplice in separate cells to await trial and then give each of you a choice: Stay quiet, or testify against your partner in hopes of a deal. You have no way of knowing what your buddy will do, but the police are perfectly straightforward. They advise that if you both stay quiet, you will each receive one-year sentences. If you each betray the other, you will both get three years. And if you rat out your partner and he protects you, he’ll get ten years while you go free.

The best overall outcome is for both of you to stay quiet—in which case you’ll both be sprung in a year. But you’re not sure if your friend has your best interests at heart. By protecting him, hoping that he’ll do the same for you, you risk the worst possible sentence for yourself. And you realize that the odds are stacked in favor of betrayal: If you stay quiet, you’ll get either one or ten years, whereas if you defect, you’ll get either zero or three years. Staying quiet would mean an average sentence of 5.5 years, but testifying against your partner would give you, on average, 1.5 years. Because you are selfish and logical, you betray your friend—and, for the same reasons, he betrays you. Instead of each receiving one-year sentences, you both serve three.
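For readers who like the arithmetic spelled out, here is a minimal Python sketch of the same reasoning. It assumes, as the averages above implicitly do, that your partner is equally likely to stay quiet or to talk:

```python
# Years you serve, indexed by (your choice, your partner's choice),
# using the sentences from the story above.
your_sentence = {
    ("stay quiet", "stay quiet"): 1,   # you both keep your mouths shut
    ("stay quiet", "betray"): 10,      # you protect him, he talks
    ("betray", "stay quiet"): 0,       # you talk, he protects you
    ("betray", "betray"): 3,           # you both talk
}

for my_choice in ("stay quiet", "betray"):
    outcomes = [your_sentence[(my_choice, his_choice)]
                for his_choice in ("stay quiet", "betray")]
    average = sum(outcomes) / len(outcomes)
    print(f"If I {my_choice}: possible sentences {outcomes}, average {average} years")

# Staying quiet averages 5.5 years; betraying averages 1.5 years.
```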

This is a famous problem. The situation was first described in 1950 by two mathematicians, both experts in game theory, working for an analysis group that advised the U.S. armed forces. They realized that certain situations are mathematically stacked against cooperation, even when mutual cooperation would produce a better outcome. As long as each player in a game is selfish and logical, two opponents will not necessarily work together toward the best possible result.

They initially framed the dilemma in terms of military strategy, which would prove remarkably prescient. In 1950, the United States and the Soviet Union were just beginning an uneasy truce with each other. The United States had dropped nuclear bombs on Japan five years earlier, winning Japan’s immediate surrender, and the Soviet Union had exploded its own first nuclear device during the previous year. Those two American mathematicians may have had an inkling of the four-decade Cold War arms race that would follow.

According to some political scientists, the Cold War was one big prisoner’s dilemma. Each side had two options: arm or disarm. If both sides disarmed, nobody would spend money or get hurt—clearly, the best outcome. If both sides armed, each country would sink billions into a nuclear program instead of domestic projects, with the added possibility of mutual destruction. If one side armed itself while the other disarmed, though, the result would be immediate superiority. From either perspective, it was better to continue the arms race even though cooperation could have prevented the whole ridiculous, scary stalemate.

The dilemma crops up in lots of other situations, too: price and advertising wars between companies, use of performance-enhancing drugs in sports, and even the application of makeup by women. All of these situations fulfill, at least conceptually, the mathematical conditions of a prisoner’s dilemma, in which the four possible outcomes rank as follows: (1) Betraying your opponent while he cooperates pays better than (2) both of you cooperating, which pays better than (3) both of you betraying each other, which pays better than (4) cooperating while your opponent betrays you.
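As a small sanity check, the sentences from the bank-robbery story can be plugged back into that ranking; a sketch in Python (since these payoffs are years in prison, smaller is better):

```python
# Your sentence in each of the four scenarios, numbered as in the text
# (years in prison, so a smaller number is a better payoff for you).
scenario = {
    1: 0,   # you betray while he cooperates: you walk free
    2: 1,   # you both cooperate (stay quiet)
    3: 3,   # you both betray each other
    4: 10,  # you cooperate while he betrays you
}

# Scenario 1 must pay better than 2, which pays better than 3, which pays better than 4.
assert scenario[1] < scenario[2] < scenario[3] < scenario[4]
print("These sentences satisfy the prisoner's dilemma ranking.")
```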

It shows how purely logical beings can choose not to cooperate even when mutual cooperation would pay off better for everyone involved. To some degree, the prisoner’s dilemma confirms the general idea that everyone is selfish to the point of self-destruction. The dilemma predicts, given a certain set of conditions, that individuals will decide to thwart each other in an attempt to get ahead. Much like the tragedy of the commons, a related social dilemma in which members of a group deplete a shared resource for selfish reasons, the prisoner’s dilemma focuses on the fact that what’s best for a group isn’t necessarily best for any given individual.

When two individuals work together, at least one of them usually makes a short-term sacrifice. In fairy-wrens, helpers forfeit their own reproductive efforts to feed the nestlings of older adults. Such sacrifices probably lead to long-term benefits that outweigh any up-front costs of cooperation, but, assuming the birds are logical and selfish, can they really see that far into the future?

The trouble with the prisoner’s dilemma is that its conditions are rarely met in day-to-day life. Any given interaction between two individuals is usually not a one-off thing; you’re likely to meet your opponent again someday. If you burn him now, you might be sorry later. What goes around often does come around. This fact alone is enough to foster strategic cooperative relationships, as demonstrated by American political scientist Robert Axelrod in the 1980s.

Axelrod became interested in a game version of the prisoner’s dilemma in which the situation is repeated many times in succession, with two opponents choosing in each round whether to cooperate or defect—a game that more closely mirrors real life. By the mid-1970s, more than 2,000 scholarly papers had been published about this one mathematical problem, many of them expounding on various possible strategies. Axelrod decided to host a tournament. Academics from all over the world entered their programmed algorithms, each one describing a slightly different strategy, and they all battled it out against one another until a single winner emerged.

Many of the programs were quite complicated, but the winner turned out to be the simplest of all. Called Tit for Tat, it followed an unassailable logic: Cooperate at first, then, in each successive round, do whatever your opponent did in the previous round. This was interesting because in a game known to reward selfish behavior, the winning strategy played nice and punished an opponent only if the opponent didn’t cooperate.
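The strategy is simple enough to write out in a few lines. The sketch below, in Python, plays a repeated prisoner’s dilemma using conventional textbook point values rather than anything from Axelrod’s actual tournament, with an always-defect strategy included purely for comparison:

```python
# A bare-bones sketch of Tit for Tat in a repeated prisoner's dilemma.
# The point values are standard textbook numbers (higher is better).
PAYOFF = {  # (my move, opponent's move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(my_history, opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): steady mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): loses the first round, then retaliates
```

Notice that Tit for Tat never finishes ahead of the particular opponent it is playing; it prospers by piling up mutual-cooperation points against other nice strategies.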

Axelrod held the tournament again the next year, and Tit for Tat won again—and again the year after that. It was eventually defeated only by multiple programs entered together that had been preprogrammed to recognize one another and sacrifice themselves to boost one overall winner, in a sense subverting the rules of the game.

When you think about it, Tit for Tat makes sense. It’s the prisoner’s dilemma version of “an eye for an eye,” or, more optimistically, the Golden Rule (“Do to others as you would have them do to you”). If everybody cooperated all the time, the situation would be idyllic but highly unstable: anybody could saunter in at any time and take advantage. But if everyone always betrayed each other, nobody would gain anything. Stability lies somewhere in the middle.

Reviewing the best strategies in his tournaments, Axelrod found four main predictors of success: (1) The strategy must be nice; it won’t cheat before its opponent does. (2) If its opponent cheats, the strategy must retaliate; otherwise, it will be walked over. (3) But it must be forgiving; instead of holding a grudge, the strategy should revert to being nice after retaliating. (4) The strategy, counterintuitively, must not be jealous; it doesn’t need to outscore its opponent in any given matchup to come out ahead overall. This last condition illustrates the essential difference between the single-round prisoner’s dilemma and its repeated version: If you’re interacting just once, the best strategy is to betray your opponent, but it’s best to play nice over the long haul.

As Axelrod found out, it’s easy to start generalizing these results to real human and animal behaviors, and he wrote a book about it. The Evolution of Cooperation explains how nice strategies, like cooperative nesting, often win in the long term, and offers evidence for how those behaviors might have evolved through natural selection. Whether or not do-gooders are acting out of the generosity of their own hearts, Axelrod argued, they’re ultimately being nice to get something out of it.

This can be true even if you don’t expect a good deed ever to be returned. Think of it this way: The cost of being nice in any given interaction is small, but the cost of burning someone might be huge. So, logically, even if 90 percent of your kind actions are never directly returned, those that are will more than compensate.
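A toy expected-value version of that argument, with entirely made-up numbers, shows the shape of the reasoning: kindness pays on average as long as a returned favor is worth enough more than the small cost of offering one.

```python
# A toy expected-value version of the argument, with made-up numbers.
cost_of_kind_act = 1.0          # being nice costs a little
value_of_returned_favor = 20.0  # a favor that does come back is worth a lot
return_rate = 0.10              # only 10 percent of kind acts are ever repaid

expected_gain = return_rate * value_of_returned_favor - cost_of_kind_act
print(expected_gain)  # 1.0 -> still positive, so kindness pays on average
```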

You can extend this line of reasoning to answer all kinds of questions about why humans and other social creatures are generally kind to one another, why we are horrified by violence, and why we cooperate. There will always be those who try to take advantage—any population without defectors would be unstable because it would invite exploitation—but, generally, it pays to be accommodating. If we weren’t so nice, anarchy would ensue. But there’s a less rosy side. If we cooperate only for our own selfish reasons, then does true kindness even exist? Is there such a thing as real charity? Scientifically, altruism is nearly impossible to prove, and the concept is hotly debated. Ethicists cringe at the thought that all good human behaviors may be self-motivated. But those who study animals tend to accept this view more readily, explaining away any indication of altruism as an evolutionary benefit.

Which brings us back to fairy-wrens. Their cooperative nesting habits are often cited as an example of altruism in the animal kingdom, along with other seeming acts of kindness. But the bird world doesn’t really work that way. A bird that helps feed another bird’s nestlings must be doing it for an ultimately selfish reason. In the end, its own survival must benefit.


THE MORNINGTON STUDY found that unrelated helpers are probably motivated by the prospect of inheriting a good territory. Because the supply of waterside habitat is limited, almost all of it is occupied by dominant adult fairy-wrens. Sometimes a young bird can carve out a place for itself only by joining an occupied space, paying rent by helping the current owners raise babies until they either die or move on. It’s a win-win agreement because the adults get extra help feeding their nestlings, and the helpers get a chance to inherit a nice place to live.

It comes as no surprise that purple-crowned fairy-wren nests with helpers tend to be more productive. Yet, in a closely related species, the superb fairy-wren, which uses a similar system of nest helpers, researchers have been unable to show that the extra help translates into healthier chicks. Study after study has found no difference in fledging success between nests with and without helpers. Scientists could only scratch their heads and wonder whether their data were correct; if helpers didn’t increase nesting success, why did dominant adult fairy-wrens allow young birds to squat on their territories?
