Why do we believe obvious lies?

Here are two accounts of this weekend’s news from real journalists, neither of whom is a fan of the current US president.

Matt Taibbi of Rolling Stone
https://taibbi.substack.com/p/russiagate-is-wmd-times-a-million
“It’s official: Russiagate is this generation’s WMD”

He cited intentional misreporting (lying) multiple times from the New York Times, Washington Post, CNN, Wall Street Journal, MSNBC, Mother Jones; and from NBC, ABC, McClatchy, New Yorker, New York Magazine, Bloomberg, BuzzFeed, Slate, Yahoo, Fortune, Guardian; and from numerous US congressmen and senators. Most of these false stories have still not been corrected or retracted.

  • “Recapping: the reporter who introduced Steele to the world (his September 23, 2016 story was the first to reference him as a source), who wrote a book that even he concedes was seen as “validating” the pee tape story, suddenly backtracks and says the whole thing may have been based on a Las Vegas strip act, but it doesn’t matter because Stormy Daniels, etc.
  • When explosive #Russiagate headlines go sideways, the original outlets simply ignore the new development, leaving the “retraction” process to conservative outlets that don’t reach the original audiences.
  • The Russiagate era has so degraded journalism that even once “reputable” outlets are now only about as right as politicians, which is to say barely ever, and then only by accident.
  • Authorities have been lying their faces off to reporters since before electricity! It doesn’t take much investigation to realize the main institutional sources in the Russiagate mess – the security services, mainly – have extensive records of deceiving the media.
  • As noted before, from World War I-era tales of striking union workers being German agents to the “missile gap” that wasn’t (the “gap” was leaked to the press before the Soviets had even one operational ICBM) to the Gulf of Tonkin mess to all the smears of people like Martin Luther King, it’s a wonder newspapers listen to whispers from government sources at all.”


Glenn Greenwald of The Intercept
https://twitter.com/ggreenwald

  1. “Can’t the people who got rich exploiting liberal #Resistance fears by feeding them false conspiracies at least content themselves to their bulging bank accounts from the scam they pulled off & have one day of silence where they don’t try to pretend that they were right all along?
  2. If you’re just going to let stuff like this go – unexamined, unacknowledged, and unaccounted for – don’t expect anyone to be remotely sympathetic to the fact that public trust in big media is nonexistent and politicians benefit by making journalists their enemies.
  3. And just for future reference: documenting the falsehoods, baseless conspiracies, and deceitful narratives being peddled without dissent by the major corporate media isn’t “blogging” or “media criticism.” It’s journalism. It’s reporting. And it’s vital.
  4. Nothing kills journalism worse than cowardly group-think, and it’s worse than ever since they’re congregated in the same places in Brooklyn and the West Coast and petrified of saying anything that makes them unpopular among their peers.
  5. Check every MSNBC personality, CNN law “expert,” liberal-centrist outlets and #Resistance scam artist and see if you see even an iota of self-reflection, humility or admission of massive error.
  6. I wrote this with @GGreenwald in November 2016, warning Russiagate was being used to attack, smear, and censor alternative media. Those blacklisted alternative media ended up being correct about Russiagate – while the corporate media spread actual fake news.
  7. There should be major accountability in the US media and in the intelligence community they united with to drown US political discourse for 2 years straight in unhinged conspiratorial trash, distracting from real issues. That’s what should happen as a first step. But it won’t.”

Hidden hypotheses of epigenetic studies

This 2018 UK review discussed three a priori hypotheses built into “hypothesis-free” epigenome-wide association studies:

“Genome-wide technology has facilitated epigenome-wide association studies (EWAS), permitting ‘hypothesis-free’ examinations in relation to adversity and/or mental health problems. Results of EWAS are in fact conditional on several a priori hypotheses:

  1. EWAS coverage is sufficient for complex psychiatric problems;
  2. Peripheral tissue is meaningful for mental health problems; and
  3. The assumption that biology can be informative to the phenotype.

1. CpG sites were chosen as potentially biologically informative based on consultation with a consortium of DNA methylation experts. Selection was, in part, based on data from a number of phenotypes (some medical in nature such as cancer), and thus is not specifically targeted to brain-based, stress-related complex mental health phenotypes.

2. The assumption is often that distinct peripheral tissues are interchangeable and equally suited for biomarker detection, when in fact it is highly probable that peripheral tissues themselves correspond differently to environmental adversity and/or disease state.

3. Analyses result in general statements such as ‘neurodevelopment’ or the ‘immune system’ being involved in the aetiology of a given phenotype. Whether these broad categories play indeed a substantial role in the aetiology of the mental health problem is often hard to determine given the post hoc nature of the interpretation.”


The reviewers mentioned in item #2 the statistical flaw of assuming that measured entities are interchangeable with one another. They didn’t mention that the same flaw affects the item #1 methodologies, which average CpG methylation measurements in fixed genomic bins or over defined genomic regions.
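To illustrate with a minimal sketch of my own (invented numbers, not from the review): averaging per-CpG methylation values over a bin treats the sites as interchangeable, so two groups with opposite site-level patterns can produce identical bin-level measurements.

```python
# Minimal sketch with invented numbers: bin-averaging CpG methylation
# treats sites as interchangeable and can erase site-level differences.
case_beta = [0.9, 0.1, 0.9, 0.1]  # per-CpG methylation fractions in cases
ctrl_beta = [0.1, 0.9, 0.1, 0.9]  # the opposite pattern in controls

case_mean = sum(case_beta) / len(case_beta)
ctrl_mean = sum(ctrl_beta) / len(ctrl_beta)

print(case_mean, ctrl_mean)  # 0.5 0.5 -> identical bin averages
# Site by site, cases and controls differ at every CpG, yet the binned
# measurement shows no difference at all.
```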

The reviewers offered suggestions for reducing the impacts of these three hypotheses. But will doing more of the same, only better, advance science?

Was it too much to ask of researchers whose paychecks and reputations depended on a framework’s paradigm – such as the “biomarker” mentioned a dozen and a half times – to admit the uselessness of gathering data when the framework in which the data operated wasn’t viable? They already knew or should have known this.

Changing an individual’s future behavior even before they’re born provided one example of what the GWAS/EWAS framework missed:

“When phenotypic variation results from alleles that modify phenotypic variance rather than the mean, this link between genotype and phenotype will not be detected.”
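That point is easy to demonstrate with simulated data (my sketch, not the cited study’s analysis): a standard mean-difference test, the kind of signal GWAS association tests detect, sees nothing when an allele changes only the spread of a phenotype, while a test for unequal variances flags it immediately.

```python
# Simulated illustration: an allele that modifies phenotypic variance
# but not the mean is invisible to a mean-difference test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
aa = rng.normal(loc=100.0, scale=5.0, size=1000)   # genotype AA: same mean, narrow spread
bb = rng.normal(loc=100.0, scale=15.0, size=1000)  # genotype BB: same mean, wide spread

_, p_mean = stats.ttest_ind(aa, bb)  # mean-difference test: no signal
_, p_var = stats.levene(aa, bb)      # variance test: strong signal

print(f"mean-difference p = {p_mean:.2f}")  # large, non-significant
print(f"variance p = {p_var:.2e}")          # vanishingly small
```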

DNA methylation and childhood adversity concluded that:

“Blood-based EWAS may yield limited information relating to underlying pathological processes for disorders where brain is the primary tissue of interest.”

The truth about complex traits and GWAS added another example of how this framework and many of its paradigms haven’t produced effective explanations of “the aetiology of the mental health problem”:

“The most investigated candidate gene hypotheses of schizophrenia are not well supported by genome-wide association studies, and it is likely that this will be the case for other complex traits as well.”

Researchers need to reevaluate their framework if they want to make a difference in their fields. Recasting GWAS as EWAS won’t make it more effective.

https://www.sciencedirect.com/science/article/pii/S2352250X18300940 “Hidden hypotheses in ‘hypothesis-free’ genome-wide epigenetic associations”

The Not-Invented-Here syndrome

I have high expectations of natural science researchers. I assume that their studies will improve over time, and that they will develop methods and experiments that produce reliable evidence to inform us about human conditions.

My confidence is often unrealistic. Scientists are people, after all, and have the same foibles as the rest of us.

I anticipate that researchers will keep abreast of others’ work around the world. If other groups in their research areas are developing better methods and exploring hypotheses that lead to better applications for humans, why not adopt them in the interest of advancing science?

That’s not what happened with this 2018 UK rodent study. The rat model some of the coauthors have built their reputations on depends on disturbing rat pregnancies by administering glucocorticoids. But both the rat model and a guinea pig model in Do you have your family’s detailed medical histories? demonstrated that physicians who disturb their pregnant human patients in this way may be acting irresponsibly toward their patients’ fetuses and their future generations.

This study didn’t find mechanisms that explained transgenerational epigenetic birth weight effects through the F2 grandchild generation:

“Although the phenotype is transmitted to a second generation, we are unable to detect specific changes in DNA methylation, common histone modifications or small RNA [including microRNA] profiles in sperm.

The inheritance mechanism for the paternally derived glucocorticoid-reprogrammed phenotype may not be linked with the specific germline DNA, sRNA and chromatin modifications that we have profiled here.”


The linked guinea pig model was developed specifically to inform physicians of the consequences through the F3 great-grandchild generation of disturbing human pregnancies with glucocorticoids:

“Antenatal exposure to multiple courses of sGC [synthetic glucocorticoid] has been associated with hyperactivity, impaired attention, and neurodevelopmental impairment in young children and animals. It is imperative that the long-term effects of antenatal exposure to multiple courses of sGC continue to be investigated since the use of a ‘rescue’ (i.e. a second) course of sGC has recently re-introduced the practice of multiple course administration.”


If a study’s purpose is to investigate potential mechanisms of epigenetic inheritance, why not adopt a model that better characterizes common human conditions, regardless of which research group initially developed it?

The prenatal stress model used in The lifelong impact of maternal postpartum behavior is one model that’s more representative of human experiences. Those researchers pointed out in Prenatal stress produces offspring who as adults have cognitive, emotional, and memory deficiencies:

“Corticosterone-treated mice and rats exposed to chronic stress are models that do not recapitulate the early programming of stress-related disorders, which likely originates in the perinatal period.”

Animal models that chemically redirect fetal development also “do not recapitulate the early programming of stress-related disorders.”

Other than research that’s done to warn against disrupted development, how can animal studies like the current study help humans when their models don’t replicate common human conditions? This failure to use more relevant models has follow-on effects such as human intergenerational and transgenerational epigenetic inheritance being denigrated due to insufficient evidence.

Of course there’s insufficient human evidence! Researchers developed and sponsors funded animal study designs that ensured there wouldn’t be wide applicability to humans!! Few derivative human studies have been developed and funded as a result.

https://genomebiology.biomedcentral.com/articles/10.1186/s13059-018-1422-4 “Investigation into the role of the germline epigenome in the transmission of glucocorticoid-programmed effects across generations”

A review that inadvertently showed how memory paradigms prevented relevant research

This 2016 Swiss review of enduring memories demonstrated what happens when scientists’ reputations and paychecks interfere with recognizing new research and evidence that’s in their area but outside their paradigm: “A framework containing the basic assumptions, ways of thinking, and methodology that are commonly accepted by members of a scientific community.”

1. Most of the cited references were decades-old studies that established these paradigms of enduring memories. Fine, but the research these paradigms excluded was also significant.

2. All of the newer references were continuations of established paradigms. For example, a 2014 study led by one of the reviewers found:

“Successful reconsolidation-updating paradigms for recent memories fail to attenuate remote (i.e., month-old) ones.

Recalling remote memories fails to induce histone acetylation-mediated plasticity.”

The researchers elected to pursue a workaround of the memory reconsolidation paradigm when the need for a new paradigm of enduring memories directly confronted them!

3. None of the reviewers’ calls for further investigations challenged existing paradigms. For example, when the reviewers suggested research into epigenetic regulation of enduring memories, they somehow found it best to return to 1984, a time when dedicated epigenetics research had barely begun:

“Whether memories might indeed be ‘coded in particular stretches of chromosomal DNA’ as originally proposed by Crick [in 1984] and if so what the enzymatic machinery behind such changes might be remain unclear. In this regard, cell population-specific studies are highly warranted.”


As an example of relevant research the review failed to consider, A study that provided evidence for basic principles of Primal Therapy went outside existing paradigms to research state-dependent memories:

“If a traumatic event occurs when these extra-synaptic GABA receptors are activated, the memory of this event cannot be accessed unless these receptors are activated once again.

It’s an entirely different system even at the genetic and molecular level than the one that encodes normal memories.”

What impressed me about that study was the obvious nature of its straightforward experimental methods. Why hadn’t other researchers used the same methods decades ago? Doing so could have resulted in dozens of informative follow-on study variations by now, which is my point in item 1 above.

The 2015 French What can cause memories that are accessible only when returning to the original brain state? was another relevant but ignored study that supported state-dependent memories:

“Posttraining/postreactivation treatments induce an internal state, which becomes encoded with the memory, and should be present at the time of testing to ensure a successful retrieval.”


The review also showed the extent to which historical memory paradigms depended on the subjects’ emotional memories. When it comes to human studies, though, designs almost always avoid studying emotional memories.

It’s clearly past time to Advance science by including emotion in research.

http://www.hindawi.com/journals/np/2016/3425908/ “Structural, Synaptic, and Epigenetic Dynamics of Enduring Memories”

Confusion may be misinterpreted as altruism and prosocial behavior

This 2015 Oxford human study of altruism found:

“Division of people into distinct social types relies on the assumption that an individual’s decisions in public-goods games can be used to accurately measure their social preferences. Specifically, that greater contributions to the cooperative project in the game reflect a greater valuing of the welfare of others, termed “prosociality.”

Individuals behave in the same way, irrespective of whether they are playing computers or humans, even when controlling for beliefs. Therefore, the previously observed differences in human behavior do not need to be explained by variation in the extent to which individuals care about fairness or the welfare of others.

Conditional cooperators, who approximately match the contributions of their groupmates, misunderstand the game. Answering the standard control questions correctly does not guarantee understanding.

We find no evidence that there is a subpopulation of players that understand the game and have prosocial motives toward human players.

These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.

When attempting to measure social behaviors, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences. One also needs to manipulate these consequences to test whether this affects the behavior.”
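A minimal sketch of a standard linear public-goods game (my parameters, not the study’s exact setup) shows why matching groupmates’ contributions can’t be explained by payoff maximization, and why doing it with computer groupmates can’t be explained by prosociality either:

```python
# Sketch of a linear public-goods game with invented parameters:
# 4 players, endowment of 20, pooled contributions multiplied by 1.6.
ENDOWMENT, MULTIPLIER, N_PLAYERS = 20, 1.6, 4

def payoff(my_contribution, others_contributions):
    pot = (my_contribution + sum(others_contributions)) * MULTIPLIER
    return ENDOWMENT - my_contribution + pot / N_PLAYERS

others = [10, 10, 10]  # groupmates (human or computer) each contribute 10
for c in (0, 10, 20):
    print(f"contribute {c:2d} -> earn {payoff(c, others):.1f}")
# contribute  0 -> earn 32.0   free-riding maximizes own earnings
# contribute 10 -> earn 26.0   "conditionally" matching costs 6
# contribute 20 -> earn 20.0
# The marginal return per token is MULTIPLIER / N_PLAYERS = 0.4 < 1, so
# contributing always lowers one's own payoff regardless of what others do.
# Matching computer groupmates therefore suggests confusion about the game.
```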

The researchers are evolutionary biologists who had made similar points in previous studies. They addressed possible confounders in the study and supporting information, and provided complete details in the appendix. For example, regarding reciprocity:

“Communication was forbidden, and we provided no feedback on earnings or the behavior of groupmates. This design prevents signaling, reciprocity, and learning and therefore minimizes any order effects.

It might also be argued that people playing with computers cannot help behaving as if they were playing with humans. Such ingraining of behavior would suggest a major problem for the way in which economic games have been used to measure social preferences. In particular, behavior would reflect everyday expectations from the real world, such as reputation concerns or the possibility of reciprocity, rather than the setup of the game and the true consequences of choices.”


Some of the news coverage missed the study’s lead point:

“Economic experiments are often used to study if humans altruistically value the welfare of others.

These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.”

The author of one news coverage article attempted to flip the discussion and cast doubt on the study itself, along the lines of: “There’s something wrong with this study (that I haven’t thoroughly read) because [insert aspersion about sample size, etc.]” What motivates such reflexive behavior?


This study should inform social behavior studies that draw conclusions from flawed experimental designs. For example, two studies reviewed here previously both based their findings on a video game of popping balloons. Neither study properly interpreted its subjects’ decisions per the current study’s recommendation:

“When attempting to measure social behaviors, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences. One also needs to manipulate these consequences to test whether this affects the behavior.”

http://www.pnas.org/content/113/5/1291.full “Conditional cooperation and confusion in public-goods experiments”



A study on online cooperation with limited findings

This 2015 Cambridge/Oxford study found:

“Global reputational knowledge is crucial to sustaining a high level of cooperation and welfare.”

Basically, the subjects learned how to “game” a cooperative online game, and the researchers drew their findings from that behavior.

To me, the study demonstrated part of the findings of the Reciprocity behaviors differ as to whether we seek cerebral vs. limbic system rewards study, the part where the cerebrum was active in:

“Reputation-based reciprocity, in which they help others with good reputations to gain good reputations themselves.”
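As a minimal sketch of that mechanism, here is image scoring, a standard model of reputation-based reciprocity (my parameters, not the study’s design): agents help partners whose reputation is good, and helping is what earns a good reputation.

```python
# Minimal image-scoring sketch (invented parameters, not the study's design).
import random

random.seed(1)
N, ROUNDS, BENEFIT, COST = 20, 200, 3.0, 1.0
reputation = [True] * N  # everyone starts with a good reputation
payoff = [0.0] * N

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    helped = reputation[recipient]  # strategy: help only good-reputation partners
    if helped:
        payoff[donor] -= COST
        payoff[recipient] += BENEFIT
    reputation[donor] = helped      # first-order scoring: helping looks good

print(f"mean payoff per player: {sum(payoff) / N:.2f}")  # positive: cooperation persists
# Because every player can see every reputation (global knowledge), helping
# is repaid by future help, and the group sustains a cooperative surplus.
```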

The current study ignored how people’s limbic system and lower brain areas may have motivated them to cooperate.

I didn’t see how excluding people’s emotional involvement when cooperating with others improved the potential reach of this study’s findings. Doesn’t a person’s willingness to cooperate in person and in online activities usually also include their emotional motivations?

The findings can’t be applied generally to cooperative motivations and behaviors that the researchers intentionally left out of the study. The study’s findings applied just to the artificial environment of their experiment, and didn’t provide evidence for how:

“Cooperative behavior is fundamental for a society to thrive.”

http://www.pnas.org/content/112/12/3647.full “The effects of reputational and social knowledge on cooperation”



Is it science, or is it a silly and sad farce when researchers “make up” missing data?

This 2014 French study was a parody of science.

The researchers “made up” missing data on over 50% of the men and over 47% of the women! All to satisfy a model that drove an agenda about the effects of adverse childhood experiences.

As an example of how silly and sad this was:

  • Two consecutive ages of interest, out of the seven studied, were 23 and 33, and
  • One of the nine factors was education level.

If I were a subject, and wasn’t around to provide data at age 33 and later, how would the researchers have extrapolated beyond my measured education level of “high school” at age 23?

I’m pretty sure their imputation method would have “made up” education level data points for me of “high school” for ages 33 and beyond. I doubt that the model would have produced my actual education levels of a Bachelors and two Masters degrees at age 33.
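As a hypothetical sketch of that failure mode (the study’s actual imputation procedure may differ; the data here are invented), a carry-forward scheme fills the missing years with the last value it saw:

```python
# Hypothetical last-observation-carried-forward sketch (invented data);
# the study's actual imputation method may differ.
observed = {23: "high school", 33: None, 42: None}  # subject lost after age 23
actual = {23: "high school", 33: "Bachelors + two Masters",
          42: "Bachelors + two Masters"}            # what really happened

imputed, last_seen = {}, None
for age in sorted(observed):
    if observed[age] is not None:
        last_seen = observed[age]
    imputed[age] = last_seen  # carry the last observation forward

for age in sorted(imputed):
    print(age, "| imputed:", imputed[age], "| actual:", actual[age])
# The imputed series stays "high school" forever; the subject's real
# educational trajectory never enters the model.
```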

Everything I said about the Problematic research on stress that will never make a contribution toward advancing science study applied to this study, including the “allostatic load” buzzword and the same compliant reviewer.

Studies like this both detract from science and are a misallocation of scarce resources. Their design and data aren’t able to reach levels where they can provide etiologic evidence.

Such studies also have limiting effects on how we “do something” about real problems, because the researchers won’t be permitted to produce findings that aren’t politically correct.

http://www.pnas.org/content/112/7/E738.full “Adverse childhood experiences and physiological wear-and-tear in midlife: Findings from the 1958 British birth cohort”