Confusion may be misinterpreted as altruism and prosocial behavior

This 2015 Oxford human study of altruism found:

“Division of people into distinct social types relies on the assumption that an individual’s decisions in public-goods games can be used to accurately measure their social preferences. Specifically, that greater contributions to the cooperative project in the game reflect a greater valuing of the welfare of others, termed “prosociality.”

Individuals behave in the same way, irrespective of whether they are playing computers or humans, even when controlling for beliefs. Therefore, the previously observed differences in human behavior do not need to be explained by variation in the extent to which individuals care about fairness or the welfare of others.

Conditional cooperators, who approximately match the contributions of their groupmates, misunderstand the game. Answering the standard control questions correctly does not guarantee understanding.

We find no evidence that there is a subpopulation of players that understand the game and have prosocial motives toward human players.

These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.

When attempting to measure social behaviors, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences. One also needs to manipulate these consequences to test whether this affects the behavior.”
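The public-goods game at issue has a simple payoff structure, and a minimal sketch makes the study’s point concrete. The endowment, multiplier, and group-size values below are my illustrative assumptions, not the study’s parameters:

```python
# Minimal public-goods game sketch (illustrative parameters, not the study's).
def payoff(contributions, endowment=20, multiplier=1.6):
    """Each player keeps what they don't contribute, plus an equal
    share of the multiplied common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

def conditional_cooperator(others_mean):
    """A 'conditional cooperator' approximately matches the mean
    contribution of groupmates -- the behavior the study attributes
    to misunderstanding the game rather than to prosociality."""
    return others_mean

# With multiplier < group size, contributing is individually costly:
# a free-rider always earns more than a contributor in the same group.
p = payoff([0, 10, 10, 10])
assert p[0] > p[1]
```

Because a free-rider always out-earns a contributor, contributions by players who understand the payoffs have been read as evidence of prosociality; the study shows that inference doesn’t hold.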

The researchers are evolutionary biologists who had made similar points in previous studies. They addressed possible confounders in the study and supporting information, and provided complete details in the appendix. For example, regarding reciprocity:

“Communication was forbidden, and we provided no feedback on earnings or the behavior of groupmates. This design prevents signaling, reciprocity, and learning and therefore minimizes any order effects.

It might also be argued that people playing with computers cannot help behaving as if they were playing with humans. Such ingraining of behavior would suggest a major problem for the way in which economic games have been used to measure social preferences. In particular, behavior would reflect everyday expectations from the real world, such as reputation concerns or the possibility of reciprocity, rather than the setup of the game and the true consequences of choices.”


Some of the news coverage missed the study’s lead point:

“Economic experiments are often used to study if humans altruistically value the welfare of others.

These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.”

One news coverage article expressed several beliefs in an attempt to flip the discussion and cast doubt on the study. Its argument ran along the lines of: “There’s something wrong with this study (that I haven’t thoroughly read) because [insert aspersion about sample size, etc.]” What motivates such reflexive behavior?


This study should inform social behavior studies that draw conclusions from flawed experimental designs. For example, both:

based their findings on a video game of popping balloons. Neither study properly interpreted its subjects’ decisions per the current study’s recommendation:

“When attempting to measure social behaviors, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences. One also needs to manipulate these consequences to test whether this affects the behavior.”

http://www.pnas.org/content/113/5/1291.full “Conditional cooperation and confusion in public-goods experiments”


This post has somehow become a target for spammers, and I’ve disabled comments. Readers can comment on other posts and indicate that they want their comment to apply here, and I’ll re-enable comments.

A study on online cooperation with limited findings

This 2015 Cambridge/Oxford study found:

“Global reputational knowledge is crucial to sustaining a high level of cooperation and welfare.”

Basically, the subjects learned how to “game” a cooperative online game, and the researchers drew their findings from that behavior.

To me, the study demonstrated part of the findings of the Reciprocity behaviors differ as to whether we seek cerebral vs. limbic system rewards study, the part where the cerebrum was active in:

“Reputation-based reciprocity, in which they help others with good reputations to gain good reputations themselves.”

The current study ignored how people’s limbic system and lower brain areas may have motivated them to cooperate.

I didn’t see how excluding people’s emotional involvement when cooperating with others improved the potential reach of this study’s findings. Doesn’t a person’s willingness to cooperate in person and in online activities usually also include their emotional motivations?

The findings can’t be applied generally to cooperative motivations and behaviors that the researchers intentionally left out of the study. The study’s findings applied just to the artificial environment of their experiment, and didn’t provide evidence for how:

“Cooperative behavior is fundamental for a society to thrive.”

http://www.pnas.org/content/112/12/3647.full “The effects of reputational and social knowledge on cooperation”



Is it science, or is it a silly and sad farce when researchers “make up” missing data?

This 2014 French study was a parody of science.

The researchers “made up” missing data on over 50% of the men and over 47% of the women! All to satisfy their model that drove an agenda of the effects of adverse childhood experiences.

As an example of how silly and sad this was:

  • Two of the seven subject ages of interest were the consecutive ages 23 and 33, and
  • One of the nine factors was education level.

If I were a subject, and wasn’t around to give data at age 33 and later, how would the researchers have extrapolated from my measured education level of “high school” at age 23?

I’m pretty sure their imputation method would have “made up” education-level data points for me of “high school” for ages 33 and beyond. I doubt that the model would have produced my actual education level at age 33 of a Bachelor’s and two Master’s degrees.
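To make that worry concrete, here is a sketch of last-observation-carried-forward imputation, one common way missing follow-up data gets “made up.” The method and values are my illustrative assumptions, not necessarily the study’s actual imputation procedure:

```python
# Last-observation-carried-forward (LOCF) imputation -- an illustrative
# assumption, not necessarily the study's actual imputation method.
def locf(observations):
    """Fill each missing value (None) with the last observed value."""
    filled, last = [], None
    for value in observations:
        if value is not None:
            last = value
        filled.append(last)
    return filled

# Education level observed at age 23, then the subject drops out:
# every later age is imputed as "high school", however wrong that is.
education_by_age = ["high school", None, None, None]
print(locf(education_by_age))
```

Any later change in the subject’s real life — such as completing further degrees — is invisible to a carried-forward value.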

Everything I said about the Problematic research on stress that will never make a contribution toward advancing science study applied to this study, including the “allostatic load” buzzword and the same compliant reviewer.

Studies like this both detract from science and are a misallocation of scarce resources. Their design and data aren’t able to reach levels where they can provide etiologic evidence.

Such studies also have limiting effects on how we “do something” about real problems, because the researchers won’t be permitted to produce findings that aren’t politically correct.

http://www.pnas.org/content/112/7/E738.full “Adverse childhood experiences and physiological wear-and-tear in midlife: Findings from the 1958 British birth cohort”

What happens next after a detox program predictably fails?

This 2014 study was a misguided example of looking solely at the presenting parts of a person’s condition rather than the whole historical person.

What did this study’s researchers decide after finding:

“Alcohol-dependent subjects…remained with high scores of depression, anxiety, and alcohol craving after a short-term detoxification program.”

Was it that the detox program didn’t work because it dealt with suppressing symptoms rather than addressing causes?

NO!

The researchers decided:

“Gut microbiota seems to be a previously unidentified target in the management of alcohol dependence.”

The researchers proceeded on some trendy, in-vogue aspect of their patients with which to tinker.

The researchers ignored that the correlations underlying the new treatment course didn’t show causation. They also ignored underlying causes for the ineffectiveness of the preceding treatments of symptoms.

Hard to see how the reviewer believed that this study would advance science.

Meanwhile, the researchers continued to ignore the elephants in the room: the relationships of the patients’ histories and their pain.

http://www.pnas.org/content/111/42/E4485.full “Intestinal permeability, gut-bacterial dysbiosis, and behavioral markers of alcohol-dependence severity”

Problematic research on oxytocin: If the study design excludes women, its findings cannot include women

This 2014 study’s findings that “the hormone oxytocin promotes group-serving dishonesty” can’t apply generally to humans because its subjects were ALL men.

Regarding oxytocin, the researchers certainly knew or should have known previous studies’ findings about sex differences, as did Is oxytocin why more women than men like horror movies? which cited:

“Oxytocin modulates brain activity differently in male and female subjects.”

Regarding differing reciprocal behaviors, the researchers also knew or should have been better informed about associated brain areas through studies such as Reciprocity behaviors differ as to whether we seek cerebral vs. limbic system rewards and its references.

And how could the study produce reliable, replicable evidence of:

“Dishonesty to be plastic and rooted in evolved neurobiological circuitries”

when the researchers performed NO measurements of “neurobiological circuitries” that supported that finding?

What was the agenda in play here? What did the female Princeton reviewer see in this study that advanced science?

http://www.pnas.org/content/111/15/5503.full “Oxytocin promotes group-serving dishonesty”



Can a study exclude the limbic system and adequately find how we process value?

This 2014 human study was notable for defining the limbic system and lower brain out of consideration in processing positive and negative stimuli for value.

However, the researchers didn’t fully reveal their biases until the last paragraph of the supplementary material, where they were obligated to comment on a previous study that included the limbic system. Good for the reviewer if that was how the researchers became obligated to deal with the previous study.

It isn’t difficult to include the limbic system in studies of value. For example, the Teenagers value rewards more and are more sensitive to punishments than are adults study found:

  • Cerebral areas increased activity when the expected value of the reward increased.
  • Limbic system areas increased activity when the expected value of the reward decreased.

http://www.pnas.org/content/111/13/5000.full “Disentangling neural representations of value and salience in the human brain”

Reciprocity behaviors differ as to whether we seek cerebral vs. limbic system rewards

This 2014 Japanese human study showed which brain areas were involved in indirect reciprocity. It was mainly cerebral areas that were active in:

“Reputation-based reciprocity, in which they help others with good reputations to gain good reputations themselves.”

Previous studies found much the same with direct reciprocity, where an individual was reimbursed by someone who directly owed them a debt of cooperation.

It was mainly limbic system areas that were active in:

“Pay-it-forward reciprocity, in which, independently of reputations, they help others after being helped by someone else.”

The researchers compared and contrasted self-interested behaviors of:

  • direct reciprocity and
  • reputation-based reciprocity,

both of which sought rewards in the cerebrum, with empathetic behaviors of:

  • pay-it-forward reciprocity,

where the subjects sought emotional rewards in the limbic system.

http://www.pnas.org/content/111/11/3990.full “Two distinct neural mechanisms underlying indirect reciprocity”



Is this science, or a PC agenda? Problematic research on childhood maltreatment and its effects

This 2013 Wisconsin human study’s goal was to assess the effects of childhood trauma using both functional MRI scans and self-reported answers to a questionnaire. The families of the study’s subjects (64 18-year-olds) had participated with the researchers since before some of the teenagers were born.

How could the teenagers give answers describing events that may have taken place early in their lives, before their cerebrums were developed (around age 4)? Even if the subjects were old enough to remember, would they give accurate answers to statements such as:

“My parents were too drunk or high to take care of the family.

Somebody in my family hit me so hard that it left me with bruises or marks.”

knowing that affirmative answers would prompt a visit to their family from a government employee?

Although some data may have been available, data from the teenagers’ prenatal, birth, infancy, and early childhood periods wasn’t part of the study design. This intentional dismissal of early influencing factors ignored applicable research!


Was the study’s limited window due to the political incorrectness of placing importance on the developmental environment provided by the subjects’ mothers? The evidence was there for those willing to see.


One clue of ignored early traumatic events was provided by the lead researcher’s quote in news coverage:

“These kids seem to be afraid everywhere,” he says. “It’s like they’ve lost the ability to put a contextual limit on when they’re going to be afraid and when they’re not.”

This finding of “fear without context” possibly described the later-life effects of traumas that were encountered in utero and during infancy. A pregnant woman’s terror and fear can register on the fetus’ lower brain and the amygdala from the third trimester onward.

Storing a memory’s context is one of the functions that the hippocampus performs. Because the hippocampus develops later than the amygdala, though, it would be unable to provide a context for any earlier feelings and sensations such as fear and terror.

The researchers attempted to place the finding of unfocused fear into later stages of child development without doing the necessary research. They tried to force this finding into the subjects’ later development years by citing rat fear-extinction and other marginally related studies.

But citing these studies didn’t make them applicable to the current study. Cause and effect wasn’t demonstrated by noting various “is associated with” findings.


Was this science? Was it part of furthering an agenda like protecting publicly funded jobs?

Was this study published to make a contribution to science? Were the peer reviewers even interested in advancing science?

And what about the 64 18-year-old subjects? If the lead researcher’s statement was accurate, did these teenagers receive help that addressed what they really needed?

http://www.pnas.org/content/110/47/19119.full “Childhood maltreatment is associated with altered fear circuitry and increased internalizing symptoms by late adolescence”



Problematic research on human brain development

This 2013 UK human study provided details of the growth of infants’ cerebral and limbic system structures. With 55 of the 65 infants in the study born prematurely, the researchers found:

“Rapidly developing cortical microstructure is vulnerable to the effects of premature birth, suggesting a mechanism for the adverse effects of preterm delivery on cognitive function.”

The infants’ first set of measurements was taken from 27 to 46 weeks after birth. Follow-up measurements were taken when the infants were two years old.

Only the politically-correct adverse effects on brain development were included in the study, which led to the researchers making only politically-correct findings. Is this what we want from publicly funded scientific research?

  • Although 40 of the 65 infants experienced Caesarean deliveries, the researchers made no attempt to study any effects of the delivery method on brain development, an omission presumably due to the political incorrectness of suggesting any adverse effects of non-vaginal deliveries.
  • Similarly disregarded for analysis were the effects on brain development in 14 infants of preeclampsia, a serious complication of pregnancy associated with the development of high blood pressure and protein in the urine.
  • Also disregarded for analysis were the effects on brain development in 13 infants of chorioamnionitis, a condition in pregnant women in which the membranes that surround the fetus and the amniotic fluid are infected by bacteria.

Further, was this all we should expect from the peer review process? The data was presumably there for the reviewers to go back to the researchers and suggest analysis of something other than the predetermined agenda.

http://www.pnas.org/content/110/23/9541.full “Development of cortical microstructure in the preterm human brain”

Problematic research with telomere length

This 2014 study purportedly linked shorter telomere length in children to the twin causes of a disadvantaged social environment and genetics. Two questionable areas were even more egregious than the study’s lack of a control group.

The first questionable area was that the researchers purposely measured telomere length using methods that couldn’t be directly compared with the telomere length measurements found in almost all other telomere studies. There was no attempt to make the findings equivalent, no mapping to cited studies! They offered up rationale after rationale, but the direct incomparability with other studies remained.

The largest questionable area was the way the researchers produced the study’s concluding sentence:

“We suggest that an individual’s genetic architecture moderates the magnitude and direction of the physiological response to exogenous stressors.”

The researchers’ process deliberately skewed the sample of forty 9-year-old boys. Next, they split this forty-member sample in half according to maternal depression! Maternal depression is an experimentally proven contributor to epigenetic changes that are detrimental to developing fetuses, infants, and young children.

The researchers asserted that the results of compounding their questionable choices represented something about stress and genetics in a larger population of children.

Of course, “an individual’s genetic architecture moderates the magnitude and direction of the physiological response to exogenous stressors.” But these researchers didn’t do the work to determine whether the 9-year-olds’ genetic architecture was one they were conceived with or one they were epigenetically changed into.

I presume the researchers didn’t pursue this additional work on genetic architecture because it might not have produced the race-baiting headlines that this study’s press coverage achieved. If the additional work had pointed to epigenetic causes of adverse effects, the headline might have been politically incorrect, such as “Maternal depression and poor caregiving damage fetuses, infants, and young children.”

Was this study published to further an agenda? If so, did this study also represent a failure of the peer review process?

Was it predetermined that this study would be published in PNAS regardless of its methods? Were the researchers and reviewers even interested in advancing science?

http://www.pnas.org/content/111/16/5944.full “Social disadvantage, genetic sensitivity, and children’s telomere length”