Issue 64 / July - August 2008
Darwinism and Altruism: Is There a Problem?
In Darwin's theory of the survival of the fittest, those individuals best adapted to their environments are most likely to live another day and are therefore more likely to reproduce. Consequently, the fittest individuals have more offspring, and these share some of their parents' adaptive characters. Simply put, the theory of the survival of the fittest favors the strong, the tough and the healthy!
But if you look at nature more carefully, you will find that it does not always seem that way. You may see animals that are strikingly unselfish, usually toward their own species: giving a warning of approaching predators, sharing food, licking others to remove parasites, or adopting orphans. Occasionally they are unselfish toward other species as well, and they fight without killing or even injuring their enemies. Such behavior, known as altruism, poses a problem for the Darwinian view of nature. But is there really a problem?
Yes, there is!
One of Darwin's greatest challenges was explaining how altruism could have evolved. For example, consider how a single bird within a flock might act as a sentinel, giving a warning call when a predator approaches. By receiving advance warning, the other birds in the flock have a better chance of escaping from the predator. But this comes at a price to the bird that guards the flock. A warning call attracts a lot of attention, making the guard an easier target for the predator.
How could a character for such altruism be passed on from generation to generation if it is the altruists who are most easily eliminated? And shouldn't there be some kind of reward or motivation for this self-sacrifice? At first glance, it seems that if altruistic birds are more likely to be preyed upon, then fewer altruists would survive to reproduce. Eventually, altruism should die out. In fact, it should have died out before we ever discovered it.
How can we explain this behavior? It is not possible to do so from an organism-centered view. So let's imagine a gene that causes the organism carrying it to behave in such a way that copies of the gene are made in other organisms. In other words, a bird helps other birds only if they carry the same type of gene. A gene that makes its owner behave in this particular way is, all other things being equal, likely to survive. But this raises a natural question: the altruism must be directed only at those carrying the altruistic gene. If the birds without the altruistic gene benefit from the help as much as those with it, how does the survival of the fittest favor this gene? And it is not easy for a gene to recognize copies of itself in other individuals. There must be another solution.
Sharing the same gene
One way of increasing the likelihood that altruism reaches its true target is to keep it within the "family." If there is a gene for behaving altruistically, then the members of the family are more likely to carry the same gene than a random member of the population. The closer the family member, the greater the chance of sharing the gene.
If, for example, I had the choice of saving my own life or the lives of two sisters or eight cousins, then the survival of the fittest should be indifferent about the choice, as each option saves the same number of gene copies: a full sibling shares half of my genes and a first cousin one eighth. But if I could instead save three sisters, or nine cousins, then the theory would favor the self-sacrificial act: "saving my kin rather than saving my skin!" Hmm, what do you know, maybe there is indeed a way to be both a Darwinist and an altruist.
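The arithmetic behind this trade-off can be sketched in a few lines of Python. The relatedness coefficients (1/2 for a full sibling, 1/8 for a first cousin) are standard; the function below simply counts the expected number of copies of one's own genome preserved by each choice. This is an illustrative sketch, not a model from the article:

```python
# Expected fraction of one's genes shared with a relative:
# full sibling = 1/2, first cousin = 1/8 (standard relatedness coefficients).
RELATEDNESS = {"self": 1.0, "sibling": 0.5, "cousin": 0.125}

def gene_equivalents(relative, count):
    """Expected number of copies of one's own genome saved
    by rescuing `count` relatives of the given kind."""
    return RELATEDNESS[relative] * count

# Saving yourself preserves exactly 1 genome-equivalent.
print(gene_equivalents("self", 1))      # 1.0
# Two sisters or eight cousins: also 1.0, so selection is indifferent.
print(gene_equivalents("sibling", 2))   # 1.0
print(gene_equivalents("cousin", 8))    # 1.0
# Three sisters (1.5) or nine cousins (1.125) beat saving yourself.
print(gene_equivalents("sibling", 3))   # 1.5
print(gene_equivalents("cousin", 9))    # 1.125
```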
The idea of the "fittest" in Darwin's theory is the ability of an individual to survive and reproduce, and therefore to pass on some of their genes. If a mother gives up her life for her offspring, the mother ceases to exist. But, from another point of view, the mother continues to live on genetically in her child. Although the mother as an individual has not benefited from her altruism, half of her genetic code survives.
This new definition of fitness shifts the focus from individual fitness to family fitness, which takes into account the survival of an individual's relatives. Evolution is no longer seen simply as a process of individual selection, but also one of family (kin) selection.
Although this selection based on family seems, at least in theory, to be a good explanation for our puzzle, several questions immediately spring to mind. Assisting one's kin often goes in only one direction, although one would expect it to be a two-way process, since there is symmetry between the genes. And how practical is the process of selecting one's relatives? How would you tell whether members of the same flock are family or not, whether the ones getting the help are secretly "connected"?
Mathematics and game theory
By this point we have at least arrived at the conclusion that it is quite difficult to explain altruistic behavior in all circumstances. Maybe a solution can be found in mathematics, which has solved so many abstract problems.
It would be useful to take a different point of view on the entire matter. Maybe what looks like altruism is in fact to the advantage of the altruist: two parties could be exchanging altruistic favors, and in the end both are better off. But another question then comes to mind: "Even when everybody knows that cooperation is good for all, why shouldn't one cheat, since cheating is more beneficial from a selfish point of view?" If everyone were to cooperate, everyone would be better off; but the best move for an individual is to pursue their own self-interest, and when everyone does so, everybody ends up worse off.
A similar change of understanding happened in economic theory in the 1950s, when game theory was first introduced into economics. Initially, the theory was intended to provide an account of economic and strategic behavior in which people interact directly, rather than through the market. Games serve only as a metaphor for more serious interactions: in these interactions, as in games, an individual's choice is essentially a choice of strategy, and the outcome depends on the strategies chosen by each of the participants. On this interpretation, a study of games may indeed tell us something about serious interactions.
In neoclassical economic theory, to choose rationally is to maximize one's own rewards. From one point of view, this is a problem in mathematics: choose the activity that maximizes rewards in the given circumstances. Thus, we may think of rational economic choice as the solution to a mathematical problem. In game theory the case is more complex, since the outcome depends not only on one's own strategy but also directly on the strategies chosen by the other individuals. We can still think of the rational choice of strategies as a mathematical problem: find a combination of strategies from which no player can profit by unilaterally changing their own, a Nash equilibrium.
Now that we have realized we must move in a direction somewhat similar to the economists', maybe we should consult the experts in game theory. In the early 1980s, Axelrod and Hamilton worked on a famous problem in game theory, the Prisoner's Dilemma, exactly because it captures our problem: the rational pursuit of individual self-interest drives everybody into an outcome that nobody favors. Imagine two partners in a crime being interrogated separately. Each one has two options: cooperate with the other and keep quiet, or betray the other and confess. If both cooperate, the police cannot get much out of them and they both get a light sentence (2 years); call this case C. If one betrays and the other keeps quiet, the traitor gets an even lighter sentence (1 year); this is case B. The one who keeps quiet while the other betrays gets the longest sentence (10 years), the worst end of the deal; call this case S. If both betray one another, they both get a sentence (6 years) longer than if they had cooperated but lighter than the silent partner's in case S; this is case D. From one prisoner's point of view the chart looks like this:

              Partner keeps quiet   Partner betrays
Keep quiet    C: 2 years            S: 10 years
Betray        B: 1 year             D: 6 years
Out of the four outcomes, B is the best and S is the worst from an individualistic point of view; the order of preference is B, C, D, S. We should also realize that this is a non-zero-sum game. In a zero-sum game, my loss is your gain; for example, if we are dividing a fixed amount of money in two, anything over fifty percent for me is a loss for you. In a non-zero-sum game, on the other hand, I can win without you losing. Each suspect has to make his decision without knowing what the other has done. What would a rational suspect do?
The answer is simple: he would betray his partner in crime! Regardless of what the other suspect does, betrayal always pays better than cooperation. Here is the simple reasoning one would follow. Suppose my partner cooperates. I could do quite well by also cooperating; I would get 2 years (C). But it is even better to betray him, since I would then get 1 year instead of 2 (B). What if he betrays me? If I keep quiet, the worst will happen and I will get 10 years (S); therefore I should defect and get 6 years instead (D). In short, the second row of the chart (betray) is always more favorable than the first (keep quiet), so no matter what, a rational prisoner always betrays his partner!
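That dominance argument is mechanical enough to check in a few lines of Python, using the sentences from the story (lower is better for the prisoner). This is just a sketch of the reasoning above, not anything from Axelrod's work:

```python
# Sentences in years; lower is better. Keys are (my_move, partner_move),
# where "C" = cooperate (keep quiet) and "D" = defect (betray).
SENTENCE = {
    ("C", "C"): 2,   # case C: both keep quiet
    ("D", "C"): 1,   # case B: I betray, partner keeps quiet
    ("C", "D"): 10,  # case S: I keep quiet, partner betrays
    ("D", "D"): 6,   # case D: both betray
}

def best_response(partner_move):
    """My rational move given the partner's move: minimize my own sentence."""
    return min("CD", key=lambda me: SENTENCE[(me, partner_move)])

# Defection is the best response to either move, so it dominates.
print(best_response("C"))  # D
print(best_response("D"))  # D
```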
"And here comes the dilemma: it pays each of them to defect, whatever the other one does, yet if both defect, each does less well than if both had cooperated. What is best for each individual leads to mutual defection, while everybody would be better off with mutual cooperation."
Fortunately, the dilemma has a solution in our case. So far, we have only played the game once. What happens if the parties play the game repeatedly, an indefinite number of times? After every round, they know they are likely to meet again later. Under such conditions there is a cooperative strategy that can succeed, a subtler version of the cooperation defined in the one-shot case. First of all, always defecting is clearly no longer the smartest strategy when you know you will meet the other individual again. Instead, consider the natural strategy called Tit for Tat: never be the first to defect, always imitate the other player's previous move, and retaliate only when you have been betrayed. It turns out that this highly cooperative strategy can survive: it withstands the challenge of readily defecting strategies and, once established, is stable against being eliminated.
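A minimal simulation makes Tit for Tat's behavior concrete. Here I use the conventional per-round payoff numbers from the game-theory literature (5 for betraying a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for cooperating against a defector, higher is better) rather than the article's sentences; the exact numbers are my assumption, chosen only to illustrate the dynamics:

```python
# Per-round payoffs (higher is better): (my_payoff, their_payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Play the two strategies against each other; return total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two Tit for Tat players cooperate throughout: 3 points per round each.
print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30)
# Against Always Defect, Tit for Tat loses only the first round.
print(play(tit_for_tat, always_defect, 10))  # (9, 14)
```

Note that Tit for Tat never beats its opponent in a single match; it prospers because mutual cooperation with other cooperators earns far more than mutual defection does.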
For Tit for Tat to have a chance to work, a critical proportion of the individuals have to cooperate; otherwise the readily defecting strategies would simply destroy the cooperative ones and dominate the whole system. But once the number of individuals adopting Tit for Tat exceeds a critical ratio of the population, the strategy survives and settles at a stable ratio able to withstand any other strategy.
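How large does that critical proportion need to be? A simple mean-field sketch can estimate it. The assumptions here are mine, not the article's: conventional per-round payoffs (5 for betraying a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for cooperating against a defector), every interaction lasting n rounds, and each individual's score being the average over opponents drawn at random from the population. Under those assumptions, a pair of Tit for Tat players earns 3n each, Tit for Tat earns n-1 against Always Defect (losing only the first round) while the defector earns n+4, and two defectors earn n each:

```python
def tft_avg(p, n):
    """Expected score of a Tit for Tat player when a fraction p of the
    population plays Tit for Tat and interactions last n rounds."""
    return p * 3 * n + (1 - p) * (n - 1)

def alld_avg(p, n):
    """Expected score of an Always Defect player in the same population."""
    return p * (n + 4) + (1 - p) * n

def critical_fraction(n, step=0.0001):
    """Smallest fraction p at which Tit for Tat at least breaks even."""
    p = 0.0
    while p <= 1.0:
        if tft_avg(p, n) >= alld_avg(p, n):
            return round(p, 4)
        p += step
    return None

# The longer the interactions, the smaller the cluster needed to take hold.
print(critical_fraction(10))   # 0.0589
print(critical_fraction(100))  # 0.0051
```

Under these assumptions the threshold works out to 1/(2n-3), so even a small cluster of cooperators suffices when individuals meet often enough, which is the point of Axelrod's "critical proportion."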
Axelrod's theory is nice and easy to understand, but once again it prompts all sorts of further questions. How frequently is a population able to reach that critical level in the first place? And what kind of memory do individuals need in order to execute a Tit for Tat strategy?
What do we make of it?
We could keep expanding these explanations and models, or at least the biologists should. But for any of these models to apply to what we see in nature, we have to be very demanding of the individuals who form the population. Every time we come up with a new approach or an extended model, many characteristics of altruistic behavior remain difficult to explain, even in idealized cases. Either there must be altruistic genes that can recognize other altruistic genes carried in other individuals; or all individuals must have a well-developed memory that records every move made by every other individual in the population; or one must start from an idealized state that maintains the stability of the system, without knowing how it could ever be reached in the first place.
It seems that the smarter we get, the better developed our theories and the more flawless our models become, the more we realize that every individual in the great ecosystem of nature that shows some kind of altruistic behavior must possess a great deal of knowledge and power over the rest: eyes that see the whole picture, and an authority with great impact on the others, on and on and on.