Part II: Rational Cooperation in the Finitely Repeated Prisoner’s Dilemma: Experimental Evidence
James Andreoni and John Miller
Economic Journal, Vol. 103, 1993, pp. 570-585
This post took a little longer than I wanted it to, but I am all moved back onto campus, ready to begin the semester, and ready to keep chugging along with these articles and key concepts.
First let’s recap and set up the framework for this post:
We’re looking at PDs in a finite iteration setting. We are going to manipulate what kind of player the other person thinks they are facing via the α “altruism parameter”. Once again, the authors’ three models of altruism:
(i) Pure Altruism: Care directly about the payoff of the other player
(ii) Duty Altruism: Feel obligated to cooperate
(iii) Reciprocal Altruism: Some special pleasure in successful cooperation
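Just to keep the three models straight, here’s how I’d write them down as simple utility functions. This is my own formalization with a single α parameter; the authors’ actual specifications may differ:

```python
# My own sketch of the three altruism models, each with an altruism
# parameter alpha (not necessarily the paper's exact functional forms).

def pure_altruism(my_payoff, their_payoff, alpha):
    # Care directly about the other player's payoff
    return my_payoff + alpha * their_payoff

def duty_altruism(my_payoff, i_cooperated, alpha):
    # A "warm glow" bonus just for having cooperated
    return my_payoff + (alpha if i_cooperated else 0)

def reciprocal_altruism(my_payoff, i_cooperated, they_cooperated, alpha):
    # A bonus only when cooperation is mutual, i.e. "successful"
    return my_payoff + (alpha if (i_cooperated and they_cooperated) else 0)
```

Note how reciprocal altruism only pays off when cooperation actually succeeds, which is what makes it different from duty altruism.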
That’s where we left off.
So, the authors now run 4 experiments plus a “bonus” experiment:
1. Partners: The computer randomly paired the 14 subjects, and each pair played a 10-period repeated PD.
2. Strangers: The computer randomly paired the 14 subjects for every iteration of the PD, for a total of 200 iterations. Each subject had a new partner every iteration.
3. Computer50: Same as Partners, but with a 50% chance you will be matched with a tit-for-tat (TFT) playing computer.
4. Same as Computer50, but with a 1/1000 chance of being matched with a TFT-playing computer.
5. Bonus Experiment
After all the other experiments are finished, the players are told they will play a computer that plays TFT 100% of the time for exactly 10 rounds. The authors cleverly do this to gauge whether the players can figure out the optimal strategy: cooperate for 9 rounds, then end on a defect.
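To convince myself that “cooperate 9 times, then defect” really is optimal against a known TFT player, here’s a quick brute-force check. Since TFT’s moves depend only on your own past moves, it’s enough to search over fixed 10-move sequences. The payoff numbers (3/5/1/0) are my own illustrative choice, not necessarily the ones used in the paper:

```python
from itertools import product

# Illustrative PD payoffs (my assumption; the paper's actual payoff table
# may differ): mutual cooperation 3, temptation 5, sucker 0, mutual defection 1.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def payoff_vs_tft(moves):
    """Total payoff of a fixed 10-move sequence against tit for tat."""
    total, tft = 0, 'C'          # TFT cooperates in round 1
    for m in moves:
        total += PAYOFF[(m, tft)]
        tft = m                  # TFT copies our previous move
    return total

# Enumerate all 2^10 = 1024 move sequences and keep the best one
best = max(product('CD', repeat=10), key=payoff_vs_tft)
print(''.join(best), payoff_vs_tft(best))  # CCCCCCCCCD 32
```

With these payoffs, cooperating all 10 rounds earns 30, while sneaking in the final defection earns 32, and defecting any earlier only costs you.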
Partners and Computer50 cooperated more than Strangers at a statistically significant level.
Both Partners and Computer50 were significantly more cooperative in the first 5 rounds of each game.
Strangers did not vary much over their 200 rounds.
In the bonus experiment, all conditions were about the same, 7 or 8 of the 14 players played the optimal strategy.
Big take home lesson:
“Subjects in a finitely repeated prisoner’s dilemma were significantly more cooperative than subjects in a repeated single shot game. Moreover, by increasing subjects’ beliefs about the probability that their opponent is altruistic we can further increase reputation building.”
And since I was/am still kind of confused by the “sequential equilibrium reputation hypothesis” I’m copying a partial explanation of it from another article:
Skeptics noted, however, that cooperation need not be caused by altruism. First, inexperience and initial confusion may cause subjects to cooperate. Second, subjects in a finitely repeated version of the game may cooperate if they each believe there is a chance someone actually is altruistic. This “sequential equilibrium reputation hypothesis” (Kreps, et al., 1982) does not actually require subjects to be altruistic, but only that they believe that they are sufficiently likely to encounter such a person.
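To get a feel for why beliefs alone can do the work, here’s a crude two-type sketch. This is my simplification, not the actual Kreps et al. model: with probability p your opponent is a TFT “altruist”, and otherwise they defect unconditionally. Payoffs are the same illustrative 3/5/1/0 numbers as above:

```python
# Crude two-type sketch of the reputation idea (my simplification, not the
# actual Kreps et al. sequential equilibrium): opponent is TFT with
# probability p, otherwise an unconditional defector.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def total(my_moves, opp_moves):
    return sum(PAYOFF[pair] for pair in zip(my_moves, opp_moves))

def tft_response(my_moves):
    # TFT cooperates first, then copies our previous move
    return 'C' + my_moves[:-1]

coop9 = 'C' * 9 + 'D'   # build a cooperative reputation, defect at the end
alld = 'D' * 10         # never cooperate

def expected(strategy, p):
    vs_tft = total(strategy, tft_response(strategy))
    vs_defector = total(strategy, alld)
    return p * vs_tft + (1 - p) * vs_defector

for p in (0.2, 1/3, 0.5):
    print(p, expected(coop9, p), expected(alld, p))
```

In this toy version the crossover sits at p = 1/3, which is a high threshold. That’s because I froze the “rational” type as an unconditional defector; in the actual sequential equilibrium, rational players themselves mimic altruists to build a reputation, which is how even small beliefs can sustain early cooperation.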
A reputation as an altruist can matter in PD iterations. Also, cooperation does occur, maybe even more than you might expect. But it’s low, and it doesn’t last. The authors also didn’t hit on my earlier intuition about “R1CC”, or round-one cooperate-cooperate. I think that R1CC might set a “tone”. I wonder if there is a study that shows cooperation rates broken down by the round-one outcome: R1CC, R1CD, and R1DD?
Posted on August 28, 2011, in Uncategorized and tagged altruism, cooperation, economics, experiment, game theory, prisoners' dilemma, R1CC, rational, selfish, sequential equilibrium reputation hypothesis.