Confusion may be misinterpreted as altruism and prosocial behavior

This 2016 Oxford study of human altruism found:

“Division of people into distinct social types relies on the assumption that an individual’s decisions in public-goods games can be used to accurately measure their social preferences. Specifically, that greater contributions to the cooperative project in the game reflect a greater valuing of the welfare of others, termed ‘prosociality.’

Individuals behave in the same way, irrespective of whether they are playing computers or humans, even when controlling for beliefs. Therefore, the previously observed differences in human behavior do not need to be explained by variation in the extent to which individuals care about fairness or the welfare of others.

Conditional cooperators, who approximately match the contributions of their groupmates, misunderstand the game. Answering the standard control questions correctly does not guarantee understanding.

We find no evidence that there is a subpopulation of players that understand the game and have prosocial motives toward human players.

These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.

When attempting to measure social behaviors, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences. One also needs to manipulate these consequences to test whether this affects the behavior.”
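
For readers unfamiliar with the setup, the payoff structure behind these quotes is easy to state. Below is a minimal sketch in Python of a linear public-goods game; the endowment, group size, and marginal per-capita return are illustrative assumptions, not the paper’s exact parameters.

    # Linear public-goods game: each player keeps whatever they do not
    # contribute and receives an equal share of the multiplied group pot.
    # These parameters are illustrative, not the paper's exact setup.
    ENDOWMENT = 20   # tokens each player receives per round
    MPCR = 0.4       # marginal per-capita return from the group pot

    def payoff(own_contribution, all_contributions):
        """One player's earnings for one round."""
        kept = ENDOWMENT - own_contribution
        pot_share = MPCR * sum(all_contributions)
        return kept + pot_share

    # With MPCR < 1, every contributed token costs its contributor
    # (1 - MPCR) tokens, so a self-interested player who understands the
    # game contributes nothing; with MPCR * group_size > 1, the group as
    # a whole earns more when everyone contributes fully.
    contributions = [0, 20, 20, 20]  # one free-rider, three full contributors
    print([payoff(c, contributions) for c in contributions])
    # [44.0, 24.0, 24.0, 24.0] -- free-riding pays at the individual level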

The researchers are evolutionary biologists who had made similar points in previous studies. They addressed possible confounders in the paper and its supporting information, and provided complete details in the appendix. For example, regarding reciprocity:

“Communication was forbidden, and we provided no feedback on earnings or the behavior of groupmates. This design prevents signaling, reciprocity, and learning and therefore minimizes any order effects.

It might also be argued that people playing with computers cannot help behaving as if they were playing with humans. Such ingraining of behavior would suggest a major problem for the way in which economic games have been used to measure social preferences. In particular, behavior would reflect everyday expectations from the real world, such as reputation concerns or the possibility of reciprocity, rather than the setup of the game and the true consequences of choices.”
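
The point about computers is easy to miss, so a toy illustration may help: a hypothetical decision rule that simply tracks what it believes its groupmates contribute reproduces “conditional cooperation” without any model of others’ welfare, and it behaves identically whether the groupmates are humans or computers. The rule, its noise term, and the endowment below are assumptions for illustration, not the authors’ model.

    import random

    def conditional_contribution(believed_groupmate_contribs, endowment=20, noise=2):
        # Roughly match the average contribution you believe your
        # groupmates are making, plus or minus a little noise.
        target = sum(believed_groupmate_contribs) / len(believed_groupmate_contribs)
        guess = round(target + random.uniform(-noise, noise))
        return max(0, min(endowment, guess))

    # Nothing in the rule consults whether the groupmates are human, so
    # it predicts the same contributions in both conditions; matching
    # behavior alone cannot distinguish prosociality from a rule of thumb.
    print(conditional_contribution([10, 12, 8]))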


Some of the news coverage missed the study’s lead point:

“Economic experiments are often used to study if humans altruistically value the welfare of others.

These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.”

One news article attempted to flip the discussion and cast doubt on the study itself, expressing beliefs along the lines of: “There’s something wrong with this study (that I haven’t thoroughly read) because [insert aspersion about sample size, etc.].” What motivates such reflexive behavior?


This study should inform social behavior studies that draw conclusions from flawed experimental designs. For example, two studies based their findings on a video game of popping balloons; neither properly interpreted its subjects’ decisions per the current study’s recommendation:

“When attempting to measure social behaviors, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences. One also needs to manipulate these consequences to test whether this affects the behavior.”
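
Concretely, the recommended check amounts to holding the game fixed while changing who, if anyone, benefits, then comparing behavior across conditions. Here is a sketch with hypothetical numbers; none of these figures come from the paper.

    from statistics import mean

    # Hypothetical contributions from the same game under two treatments.
    with_humans    = [12, 8, 15, 0, 10, 20, 5, 9]
    with_computers = [11, 9, 14, 0, 12, 19, 6, 8]

    # With computer groupmates, contributions cannot benefit anyone. If
    # behavior is unchanged anyway, the choices were not revealing
    # prosocial motives in the first place.
    print(mean(with_humans), mean(with_computers))  # 9.875 9.875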

“Conditional cooperation and confusion in public-goods experiments”: http://www.pnas.org/content/113/5/1291.full


This post has somehow become a target for spammers, so I’ve disabled comments. Readers can comment on other posts and indicate that they want their comment to apply here, and I’ll then re-enable comments here.