Manufacturing PTSD evidence with machine learning

What would you do if you were a scientist who had strong beliefs that weren’t borne out by experimental evidence?

Would you be honest with yourself about the roots of the beliefs? Would you attempt to discover why the beliefs were necessary for you, and what feelings were associated with the beliefs?

Instead of the above, the researchers of this 2017 New York human study reworked the negative findings of a 2008 study by two of the coauthors until they fit their beliefs:

“The neuroendocrine response contributes to an accurate predictive signal of PTSD trajectory of response to trauma. Further, cortisol provides a stable predictive signal when measured in conjunction with other related neuroendocrine and clinical sources of information.

Further, this work provides a methodology that is relevant across psychiatry and other behavioral sciences that transcend the limitations of commonly utilized data analytic tools to match the complexity of the current state of theory in these fields.”


1. The limitations section included:

“It is important to note that ML [machine learning]-based network models are an inherently exploratory data analytic method, and as such might be seen as ‘hypotheses generating’. While such an approach is informative in situations where complex relationships cannot be proposed and tested a priori, such an approach also presents with inherent limitations as a high number of relationships are estimated simultaneously introducing a non-trivial probability of false discovery.”
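
To put that false-discovery risk in perspective, here’s a minimal sketch of how quickly the chance of at least one false positive grows when many relationships are tested at a conventional α = 0.05 (the relationship counts are illustrative, not the study’s):

```python
# Minimal sketch: family-wise false-positive risk when many relationships
# are estimated simultaneously. The counts below are illustrative, not the study's.
alpha = 0.05  # conventional per-test false-positive rate

for m in (1, 10, 50, 100, 500):
    # Probability that at least one of m independent tests is a false positive
    p_any_false = 1 - (1 - alpha) ** m
    print(f"{m:>4} relationships tested -> P(at least one false discovery) = {p_any_false:.2f}")
```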

2. Sex-specific impacts of childhood trauma summarized why cortisol isn’t a reliable biological measurement:

“Findings are dependent upon variance in extenuating factors, including but not limited to, different measurements of:

  • early adversity,
  • age of onset,
  • basal cortisol levels, as well as
  • trauma forms and subtypes, and
  • presence and severity of psychopathology symptomology.”

Although this study’s authors knew or should have known that review’s information, they made cortisol the study’s foundation and defended beliefs in its use as a biomarker.

3. What will it take for childhood trauma research to change paradigms? described why self-reports of childhood trauma can NEVER provide direct evidence for trauma during the top three periods when humans are most sensitive to and affected by trauma:

“The basic problem prohibiting the CTQ (Childhood Trauma Questionnaire) from discovering likely most of the subjects’ historical traumatic experiences that caused epigenetic changes is that these experiences predated the CTQ’s developmental starting point.

Self-reports were – at best – evidence of experiences after age three, distinct from the experience-dependent epigenetic changes since conception.”

Yet the researchers’ beliefs in the Trauma History Questionnaire’s capability to provide evidence for early childhood traumatic experiences allowed them to make such self-reports an important part of this study’s findings, for example:

“The reduced cortisol response in the ER [emergency room] was dependent on report of early childhood trauma exposure.”

https://www.nature.com/articles/tp201738 “Utilization of machine learning for prediction of post-traumatic stress: a re-examination of cortisol in the prediction and pathways to non-remitting PTSD”

Sleep and adult brain neurogenesis

The subject of this 2018 Japan/Detroit review was the impact of sleep and epigenetic modifications on adult dentate gyrus neurogenesis:

“We discuss the functions of adult‐born DG neurons, describe the epigenetic regulation of adult DG neurogenesis, identify overlaps in how sleep and epigenetic modifications impact adult DG neurogenesis and memory consolidation..

Whereas the rate of DG neurogenesis declines exponentially with age in most mammals, humans appear to exhibit a more modest age‐related reduction in DG neurogenesis. Evidence of adult neurogenesis has also been observed in other regions of the mammalian brain such as the subventricular zone, neocortex, hypothalamus, amygdala, and striatum.

Adult‐born DG neurons functionally integrate into hippocampal circuitry and play a special role in cognition during a period of heightened excitability and synaptic plasticity occurring 4–6 weeks after mitosis. Adult DG neurogenesis is regulated by a myriad of intrinsic and extrinsic factors, including:

  • drugs,
  • diet,
  • inflammation,
  • physical activity,
  • environmental enrichment,
  • stress, and
  • trauma.”


Some of what the review stated was contradicted by other evidence. For example, arguments for sleep were based on the memory consolidation paradigm, but evidence against memory consolidation wasn’t cited for balanced consideration.

It reminded me of A review that inadvertently showed how memory paradigms prevented relevant research. That review’s citations included a study led by one of those reviewers where:

“The researchers elected to pursue a workaround of the memory reconsolidation paradigm when the need for a new paradigm of enduring memories directly confronted them!”

Some of what this review stated was speculation. I didn’t quote any sections after:

 “We go one step further and propose..”

The review also had a narrative directed toward:

“Employing sleep interventions and epigenetic drugs..”

It’s storytelling rather than pursuing the scientific method when reviewers approach a topic as these reviewers did.

Instead of reading a directed narrative, read this informative blog post from a Canadian researcher. The post provided scientific contexts to summarize what was and wasn’t known in 2018 about human neurogenesis.

http://onlinelibrary.wiley.com/doi/10.1002/stem.2815/epdf “Regulatory Influence of Sleep and Epigenetics on Adult Hippocampal Neurogenesis and Cognitive and Emotional Function”

Obtaining convictions with epigenetic statistics?

The subject of this 2018 Austrian review was forensic applications of epigenetic clock methodologies:

“The methylation-sensitive analysis of carefully selected DNA markers (CpG sites) has brought the most promising results by providing prediction accuracies of ±3–4 years, which can be comparable to, or even surpass those from, eyewitness reports. This mini-review puts recent developments in age estimation via (epi)genetic methods in the context of the requirements and goals of forensic genetics and highlights paths to follow in the future of forensic genomics.”
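
For readers unfamiliar with how such age predictors are built, here’s a minimal sketch on synthetic data, assuming the commonly used penalized-regression approach to epigenetic clocks; the CpG count, sample size, effect sizes, and noise level are invented, not taken from the review:

```python
# Minimal sketch of an epigenetic-clock-style age predictor on synthetic data.
# CpG count, sample size, effect sizes, and noise are illustrative assumptions.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_cpgs = 300, 200

age = rng.uniform(18, 80, n_samples)             # chronological ages
effects = rng.normal(0, 0.002, n_cpgs)           # per-CpG age effects on methylation
beta = np.clip(0.5 + np.outer(age, effects)      # methylation beta values in [0, 1]
               + rng.normal(0, 0.02, (n_samples, n_cpgs)), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(beta, age, random_state=0)
clock = ElasticNetCV(cv=5).fit(X_train, y_train)  # penalized regression "clock"

errors = clock.predict(X_test) - y_test
print(f"Median absolute prediction error: {np.median(np.abs(errors)):.1f} years")
```

Forensic assays interrogate far fewer, carefully selected CpGs than this sketch assumes, but the statistical idea of regressing age on methylation levels is the same.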


The point of forensic analysis techniques should be to find the truth about an individual. Doesn’t the principle of “All presumptive evidence of felony should be admitted cautiously; for the law holds it better that ten guilty persons escape, than that one innocent party suffer” still hold?

The methods’ limitations weren’t discussed. Here are some concepts not mentioned in the review:

1) Summary statistics that describe a group or population NEVER necessarily describe an individual member.

For an epigenetic clock methodology example, take a look at Figure 2A in Using an epigenetic clock to assess liver disease. Sixteen of the 18 individual age acceleration estimates of the control group subjects aren’t close to the median value!
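
As a toy illustration of that point (the numbers below are invented, not the cited figure’s data), a group median can sit away from nearly every individual it supposedly describes:

```python
# Toy illustration (invented numbers, not the cited study's data):
# a group median can describe almost none of the individual members.
import statistics

# Hypothetical individual age-acceleration estimates, in years
age_acceleration = [-9, -7, -6, -5, -4, -3, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

median = statistics.median(age_acceleration)
near_median = [x for x in age_acceleration if abs(x - median) <= 1]

print(f"Group median: {median:.1f} years")
print(f"Individuals within 1 year of the median: {len(near_median)} of {len(age_acceleration)}")
```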

2) The reviewer outlined basic DNA methylation analysis:

“The most commonly pursued approach for analysing CpG sites is sequence analysis of bisulfite-converted DNA, during which single-stranded genomic DNA is treated with sodium bisulfite that deaminates unmethylated cytosine to uracil, while methylated cytosine remains unaffected.

With increasing age, not only genome-wide DNA hypomethylation has been observed but also regional DNA hypermethylation of CpG islands.”
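
As a minimal sketch of what that conversion step does to a sequence read (the sequence and methylation calls are invented for illustration):

```python
# Minimal sketch of bisulfite conversion (invented example sequence):
# unmethylated cytosines are deaminated to uracil and read out as T after PCR,
# methylated cytosines remain C, so the C/T readout reveals methylation status.
def bisulfite_convert(seq, methylated_positions):
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")   # unmethylated C -> U, sequenced as T
        else:
            out.append(base)  # methylated C (and other bases) unchanged
    return "".join(out)

genomic = "ACGTCGACGG"
methylated = {4}              # say only the CpG cytosine at position 4 is methylated
print(bisulfite_convert(genomic, methylated))  # ATGTCGATGG
```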

The basic limitation of this analysis wasn’t mentioned, but A study of DNA methylation and age said:

“Due to the methods applied in the present study, not all the effects of DNA methylation on gene expression could be detected; this limitation is also true for previously reported results.

The textbook case of DNA methylation regulating gene expression (the methylation of a promoter and silencing of a gene) remains undetected in many cases because in an array analysis, an unexpressed gene shows no signal that can be distinguished from background and is therefore typically omitted from the analysis.”

3) Another omission was that the numbers and types of targets in the discussed DNA methylation technique were severely limited per The primary causes of individual differences in DNA methylation are environmental factors:

“A main limitation with studies using the Illumina 450 K array is that the platform only covers ~1.5 % of overall genomic CpGs, which are biased towards promoters and strongly underrepresented in distal regulatory elements, i.e., enhancers.”

The reviewer didn’t provide convincing justifications for using DNA methylation profiling to obtain convictions. Was it too much to expect a mini-review to offer a balanced view of using epigenetic age estimation in forensic analyses?

https://www.karger.com/Article/FullText/486239 “Age Estimation with DNA: From Forensic DNA Fingerprinting to Forensic (Epi)Genomics: A Mini-Review”

Science and technology hijacked by woo

I’m an avid reader of science articles, abstracts, studies, and reviews. I tried a free subscription to Singularity Hub for a few weeks last month because it seemed to be a suitable source of articles on both science and technology.

I unsubscribed after being disappointed by aspects of science and technology hijacked almost daily into the realm of woo. Discovering scientific truths and realizing technologies are inspiring enough to stand on their own. It’s sufficiently interesting to publish well-written articles on the process and results.

I was dismayed that the website didn’t host a feedback mechanism for the authors’ articles. We shield ourselves from information incongruent with our beliefs. It’s a problem when a publisher of science and technology articles similarly disallows non-confirming evidence as a matter of policy.

An article may or may not advance knowledge of the subject, and Singularity Hub enables author hubris in presenting their views as the final word on the subject. Directing readers elsewhere for discussion is self-defeating in that every publisher’s goals include keeping visitors on their website as long as possible.

Here’s my feedback on two articles that inappropriately bent reality.


Regarding What Is It That Makes Humans Unique?:

“This trait [symbolic abstract thinking] not only gives us the ability to communicate symbolically, it also allows us to think symbolically, by allowing us to represent all kinds of symbols (including physical and social relationships) in our minds, independent of their presence in the physical world. As a result, internal associations of novel kinds become possible.”

Why limit discussion of our capability for symbolic representations? Other features to explore are:

  1. Aren’t beliefs also products of symbolic abstract thinking?
  2. What attributes of human behavior provide evidence for hopes and beliefs as symbolic representations?
  3. What is the evolved functional significance, beneficial to humans, of using symbolic abstract thinking to develop hopes and beliefs?

“Our revolutionary traits stand out even more when we take a cosmic perspective..We are not only in the universe, but the universe is also within us..Our brains, as an extension of the universe, are now being used to understand themselves.”

This article should have been written well enough to inspire without resorting to unevidenced assertions about revolutions, the cosmos, and the timing of brain functionality.

“Some of us possess higher consciousness than others. The question that we now have to ask ourselves is, how do we cultivate higher consciousness, structural building, and symbolic abstract thinking among the masses?”

What’s the purpose of steering an evolution topic into elitism?


How a Machine That Can Make Anything Would Change Everything received >53,000 views compared with <5,000 views for the above article. This was an indicator that Singularity Hub readers are relatively more interested in the possible implications of future technology than in those of our past biological evolution. Why?

“If nanofabricators are ever built, the systems and structure of the world as we know them were built to solve a problem that will no longer exist.”

We are to believe that we’ll soon have the worldwide solution to problems in food supply, energy supply, medicine availability, income, knowledge – all that’s needed for survival? Should we develop hopes that technology will be our all-providing savior? Hope sells, without a doubt, but why would Singularity Hub mix that in with science?

This article reminded me of the chip-in-the-brain article referenced in Differing approaches to a life wasted on beliefs. Both articles seemingly appealed to future prospects, but the hope aspect showed that the appeals were actually reactions to the past.

If we individually address the impacts of past threats to survival – that include beliefs about future survival – each of us can break out of these self-reinforcing, life-wasting loops. Otherwise, an individual’s thoughts, feelings, and behavior are stuck in reacting to their history, with hopes and beliefs being among the many symptoms.

“Human history will be forever divided in two. We may well be living in the Dark Age before this great dawn. Or it may never happen. But James Burke, just as he did over forty years ago, has faith.”

Is it inspiring that the person mentioned has had a forty-year career of selling beliefs in technology?

Yes, future technologies have promise. Authors can write articles that report developments without soiling that promise with woo.



What is a father’s role in epigenetic inheritance?

The agenda of this 2017 Danish review was to establish a paternal role in intergenerational and transgenerational epigenetic inheritance of metabolic diseases:

“There are four windows of susceptibility which have major importance for epigenetic inheritance of acquired paternal epigenetic changes:

  1. paternal primordial germ cell (PGC) development,
  2. prospermatogonia stages,
  3. spermatogenesis, and
  4. during preimplantation.”

The review was a long read as the authors discussed animal studies. When it came to human studies near the paper’s end, though, the tone was of a “we know this is real, we just have to find it” variety. The authors acknowledged:

“To what extent the described DNA methylation changes influence the future health status of offspring by escaping remodeling in the preimplantation period as well as in future generations by escaping remodeling in PGC remodeling has yet to be determined.

These studies have not yet provided an in-depth understanding of the specific mechanisms behind epigenetic inheritance or exact effect size for the disease risk in offspring.

Pharmacological approaches have reached their limits..”

before presenting their belief that a hypothetical series of future CRISPR-Cas9 experiments will demonstrate the truth of their agenda.


The review focused on 0.0001% of the prenatal period for what matters with the human male – who he was at the time of a Saturday night drunken copulation – regarding intergenerational and transgenerational epigenetic inheritance of metabolic diseases.

The human female’s role – who she was at conception AND THEN what she does or doesn’t do during the remaining 99.9999% of the prenatal period to accommodate the fetus and prevent further adverse epigenetic effects from being intergenerationally and transgenerationally transmitted  – wasn’t discussed.

Who benefits from this agenda’s narrow focus?

If the review authors sincerely want to:

“Raise societal awareness of behavior to prevent a further rise in the prevalence of metabolic diseases in future generations..”

then EARN IT! Design and implement HUMAN studies to test what’s already known from epigenetic inheritance animal studies per Experience-induced transgenerational programming of neuronal structure and functions. Don’t disguise beliefs with the label of science.

http://jme.endocrinology-journals.org/content/early/2017/12/04/JME-17-0189.full.pdf “DNA methylation in epigenetic inheritance of metabolic diseases through the male germ line”

Beliefs about genetic and environmental influences in twin studies

This 2017 Penn State simulation found:

“By taking advantage of the natural variation in genetic relatedness among identical (monozygotic: MZ) and fraternal (dizygotic: DZ) twins, twin studies are able to estimate genetic and environmental contributions to complex human behaviors.

In the standard biometric model when MZ or DZ twin similarity differs from 1.00 or 0.50, respectively, the variance that should be attributed to genetic influences is instead attributed to nonshared environmental influences, thus deflating the estimates of genetic influences and inflating the estimates of nonshared environmental influences.

Although estimates of genetic and nonshared environmental influences from the standard biometric model were found to deviate from “true” values, the bias was usually smaller than 10% points indicating that the interpretations of findings from previous twin studies are mostly correct.”
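
To see where that misattribution comes from, here’s a minimal sketch of the standard additive (ACE) decomposition via Falconer’s formulas, using invented variance components; when MZ genetic relatedness actually falls slightly below 1.00, the same observed twin correlations shift variance from the genetic estimate to the nonshared-environment estimate:

```python
# Minimal sketch (invented values) of how the standard biometric (ACE) model
# misattributes variance when MZ relatedness is actually below 1.00.
true_h2, true_c2, true_e2 = 0.60, 0.20, 0.20   # assumed "true" variance components
mz_relatedness = 0.95                           # e.g., genetic/epigenetic differences within MZ pairs

# Twin correlations implied by the "true" components
r_mz = mz_relatedness * true_h2 + true_c2
r_dz = 0.50 * true_h2 + true_c2

# Standard-model estimates (Falconer's formulas), which assume MZ relatedness = 1.00
est_h2 = 2 * (r_mz - r_dz)      # deflated genetic estimate
est_c2 = 2 * r_dz - r_mz        # shared-environment estimate
est_e2 = 1 - r_mz               # inflated nonshared-environment estimate

print(f"True:      A={true_h2:.2f}  C={true_c2:.2f}  E={true_e2:.2f}")
print(f"Estimated: A={est_h2:.2f}  C={est_c2:.2f}  E={est_e2:.2f}")
```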

The study model’s inputs were five phenotypes that varied in their degrees of:

  1. Genetic and epigenetic heritability;
  2. Shared environmental factors; and
  3. Nonshared environmental factors.

Item 1 above differed from the standard model’s treatment of heritable factors, which considers only additive genetic influences.

The authors cited studies of moderate and significant shared environmental influences on child and adolescent psychopathology and parenting to support the model’s finding that, overall, item 2 above wasn’t underestimated.


I wasn’t satisfied with the simulation’s description of item 1 above. With:

  1. Environmental influences accounted for elsewhere, and
  2. No references to transgenerational epigenetic inheritance,

randomness seemed to be the only remaining explanation for an epigenetic heritability factor.

Inserting the model’s non-environmental randomness explanation for epigenetic heritability into the abstract’s statement above exposed the non sequitur:

In the standard biometric model when MZ or DZ twin similarity differs from 1.00 or 0.50, respectively, the variance that should be attributed to genetic [and non-environmental stochastic heritability] influences is instead attributed to nonshared environmental influences, thus deflating the estimates of genetic [and non-environmental stochastic heritability] influences and inflating the estimates of nonshared environmental influences.

Why did the researchers design their model with an adjustment for non-environmental epigenetic heritability? Maybe it had something to do with:

“Estimates of genetic and nonshared environmental influences from the standard biometric model were found to deviate from “true” values.”

In any event, I didn’t see that this simulation was much more than an attempt to reaffirm a belief that:

“The interpretations of findings from previous twin studies are mostly correct.”


Empirical rather than simulated findings in human twin study research are more compelling, such as The primary causes of individual differences in DNA methylation are environmental factors with its finding:

“Differential methylation is primarily non-genetic in origin, with non-shared environment accounting for most of the variance. These non-genetic effects are mainly tissue-specific.

The full scope of environmental variation remains underappreciated.”

https://link.springer.com/article/10.1007/s10519-017-9875-x “The Impact of Variation in Twin Relatedness on Estimates of Heritability and Environmental Influences” (not freely available)

Do preventive interventions for children of mentally ill parents work?

The fifth and final paper of Transgenerational epigenetic inheritance week was a 2017 German/Italian meta-analysis of psychiatric treatments involving human children:

“The transgenerational transmission of mental disorders is one of the most significant causes of psychiatric morbidity. Several risk factors for children of parents with mental illness (COPMI) have been identified in numerous studies and meta-analyses.

There is a dearth of high quality studies that effectively reduce the high risk of COPMI for the development of mental disorders.”


I found the study by searching a medical database on the “transgenerational” term. The authors fell into the trap of misusing “transgenerational” instead of “intergenerational” to describe individuals in different generations.

Per the definitions in A review of epigenetic transgenerational inheritance of reproductive disease and Transgenerational effects of early environmental insults on aging and disease, for the term “transgenerational transmission” to apply, the researchers needed to provide evidence in at least the next 2 male and/or 3 female generations of:

“Altered epigenetic information between generations in the absence of continued environmental exposure.”

The meta-analysis didn’t provide evidence for “transgenerational transmission of mental disorders.”


Several aspects of the meta-analysis stood out:

  1. Infancy was the earliest period of included studies, and studies of treatments before the children were born were excluded;
  2. Parents had to be diagnosed with a mental illness for the study to be included;
  3. Studies with children diagnosed with a mental illness were excluded; and
  4. Studies comparing more than one type of intervention were excluded.

Fifty worldwide studies from 1983 through 2014 were selected for the meta-analysis.

Per item 1 above, if a researcher doesn’t look for something, it’s doubtful that they will find it. As shown in the preceding papers of Transgenerational epigenetic inheritance week, the preconception through prenatal periods are where the largest epigenetic effects on an individual are found. There are fewer opportunities for effective “preventive interventions” in later life compared with these early periods.

Science provides testable explanations and predictions. The overall goal of animal studies is to help humans.

Animal studies provide explanations and predictions for the consequences of environmental insults to the human fetus – predictable disrupted neurodevelopment with subsequent deviated behaviors and other lifelong damaging effects in the F1 children. The first four papers I curated during Transgenerational epigenetic inheritance week provided samples of which of these and/or other harmful effects may be predictably found in F2 grandchildren, F3 great-grandchildren, and future human generations.

When will human transgenerational epigenetic inheritance be taken seriously? Is the root problem that human societies don’t give humans in the fetal stage of life a constituency, or protection against mistreatment, or even protection against being arbitrarily killed?


The default answer to the meta-analysis title “Do preventive interventions for children of mentally ill parents work?” is No. As for the “dearth of high quality studies” complaint: when treatments aren’t effective, is the solution to do more of them?

No.

The researchers provided an example of the widespread belief that current treatments for “psychiatric morbidity” are on the right path, and that the usual treatments – only done more rigorously – will eventually provide unquestionable evidence that they are effective.

This belief is already hundreds of years old. How much longer will this unevidenced belief infect us?

http://journals.lww.com/co-psychiatry/Abstract/2017/07000/Do_preventive_interventions_for_children_of.9.aspx “Do preventive interventions for children of mentally ill parents work? Results of a systematic review and meta-analysis” (not freely available)

How one person’s paradigms regarding stress and epigenetics impede relevant research

This 2017 review laid out the tired, old, restrictive guidelines by which current US research on the epigenetic effects of stress is funded. The reviewer rehashed paradigms circumscribed by his authoritative position in guiding funding, and called for more government funding to support and extend his reach.

The reviewer won’t change his beliefs regarding individual differences and allostatic load since he helped to start those memes. US researchers with study hypotheses that would develop evidence beyond such memes may have difficulties finding funding except outside of his sphere of influence.


Here’s one example of the reviewer’s restrictive views taken from the Conclusion section:

“Adverse experiences and environments cause problems over the life course in which there is no such thing as “reversibility” (i.e., “rolling the clock back”) but rather a change in trajectory [10] in keeping with the original definition of epigenetics [132] as the emergence of characteristics not previously evident or even predictable from an earlier developmental stage. By the same token, we mean “redirection” instead of “reversibility”—in that changes in the social and physical environment on both a societal and a personal level can alter a negative trajectory in a more positive direction.”

What would happen if US researchers proposed tests of his “there is no such thing as reversibility” axiom? To secure funding, the prospective studies’ experiments would be steered toward altering “a negative trajectory in a more positive direction” instead.

An example of this influence may be found in the press release of Familiar stress opens up an epigenetic window of neural plasticity where the lead researcher stated a goal of:

“Not to ‘roll back the clock’ but rather to change the trajectory of such brain plasticity toward more positive directions.”

I found nothing in citation [10] (of which the reviewer is a coauthor) where the rodent study researchers even attempted to directly reverse the epigenetic changes! The researchers under his guidance simply asserted:

“A history of stress exposure can permanently alter gene expression patterns in the hippocampus and the behavioral response to a novel stressor”

without making any therapeutic efforts to test the permanence assumption!

Never mind that researchers outside the reviewer’s sphere of influence have done exactly that: reversed both gene expression patterns and behavioral responses!

In any event, citation [10] didn’t support a “there is no such thing as reversibility” axiom.

The reviewer also implied that humans respond just like lab rats and can be treated as such. Notice that the review’s graphic conflated rodent and human behaviors. Further examples of this inappropriate rodent / human merger of behaviors are in the Conclusion section.


What may be a more promising research approach to human treatments of the epigenetic effects of stress? As pointed out in The current paradigm of child abuse limits pre-childhood causal research:

“If the current paradigm encouraged research into treatment of causes, there would probably already be plenty of evidence to demonstrate that directly reducing the source of the damage would also reverse damaging effects. There would have been enough studies done so that the generalized question of reversibility wouldn’t be asked.

Aren’t people interested in human treatments of originating causes so that their various symptoms don’t keep bubbling up? Why wouldn’t research paradigms be aligned accordingly?”

http://journals.sagepub.com/doi/full/10.1177/2470547017692328 “Neurobiological and Systemic Effects of Chronic Stress”

The current paradigm of child abuse limits pre-childhood causal research

As an adult, what would be your primary concern if you suspected that your early life had something to do with current problems? Would you be interested in effective treatments for causes of your symptoms?

Such information wasn’t available in this 2016 Miami review of the effects of child abuse. The review laid out the current paradigm mentioned in Grokking an Adverse Childhood Experiences (ACE) score, one that limits research into pre-childhood causes for later-life symptoms.

The review’s goal was to describe:

“How numerous clinical and basic studies have contributed to establish the now widely accepted idea that adverse early life experiences can elicit profound effects on the development and function of the nervous system.”

The hidden assumptions of almost all of the cited references were that these distant causes could no longer be addressed. Aren’t such assumptions testable today?

As an example, the Discussion section posed the top nine “most pressing unanswered questions related to the neurobiological effects of early life trauma.” In line with the current paradigm, the reviewer assigned “Are the biological consequences of ELS [early life stress] reversible?” into the sixth position.

If the current paradigm encouraged research into treatment of causes, there would probably already be plenty of evidence to demonstrate that directly reducing the source of damage would also reverse damaging effects. There would have been enough studies done so that the generalized question of reversibility wouldn’t be asked.

Aren’t people interested in treatments of originating causes so that their various symptoms don’t keep bubbling up? Why wouldn’t research paradigms be aligned accordingly?


The review also demonstrated how the current paradigm of child abuse misrepresented items like telomere length and oxytocin. Researchers on the bandwagon tend to forget about the principle Einstein expressed as:

“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”

That single experiment for telomere length arrived in 2016 with Using an epigenetic clock to distinguish cellular aging from senescence. The review’s seven citations for telomere length that all had findings “associated with” or “linked to” child abuse should now be viewed in a different light.

The same light shone on oxytocin with Testing the null hypothesis of oxytocin’s effects in humans and Oxytocin research null findings come out of the file drawer. See their references, and decide for yourself whether or not:

“Claimed research findings may often be simply accurate measures of the prevailing bias.”

http://www.cell.com/neuron/fulltext/S0896-6273%2816%2900020-9 “Paradise Lost: The Neurobiological and Clinical Consequences of Child Abuse and Neglect”



A review that inadvertently showed how memory paradigms prevented relevant research

This 2016 Swiss review of enduring memories demonstrated what happens when scientists’ reputations and paychecks interfere with their recognizing new research and evidence in their area but outside their paradigm: “A framework containing the basic assumptions, ways of thinking, and methodology that are commonly accepted by members of a scientific community.”

A. Most of the cited references were from decades ago, when these paradigms of enduring memories were established. Fine, but the research these paradigms excluded was also significant.

B. All of the newer references were continuations of established paradigms. For example, a 2014 study led by one of the reviewers found:

“Successful reconsolidation-updating paradigms for recent memories fail to attenuate remote (i.e., month-old) ones.

Recalling remote memories fails to induce histone acetylation-mediated plasticity.”

The researchers elected to pursue a workaround of the memory reconsolidation paradigm when the need for a new paradigm of enduring memories directly confronted them!

C. None of the reviewers’ calls for further investigations challenged existing paradigms. For example, when the reviewers suggested research into epigenetic regulation of enduring memories, they somehow found it best to return to 1984, a time when dedicated epigenetics research had barely begun:

“Whether memories might indeed be ‘coded in particular stretches of chromosomal DNA’ as originally proposed by Crick [in 1984] and if so what the enzymatic machinery behind such changes might be remain unclear. In this regard, cell population-specific studies are highly warranted.”


Two examples of relevant research the review failed to consider:

1. A study that provided evidence for basic principles of Primal Therapy went outside existing paradigms to research state-dependent memories:

“If a traumatic event occurs when these extra-synaptic GABA receptors are activated, the memory of this event cannot be accessed unless these receptors are activated once again.

It’s an entirely different system even at the genetic and molecular level than the one that encodes normal memories.”

What impressed me about that study was the obvious nature of its straightforward experimental methods. Why hadn’t other researchers used the same methods decades ago? Doing so could have resulted in dozens of informative follow-on study variations by now, which is my point in Item A. above.

2. A relevant but ignored 2015 French study, What can cause memories that are accessible only when returning to the original brain state?, supported state-dependent memories:

“Posttraining/postreactivation treatments induce an internal state, which becomes encoded with the memory, and should be present at the time of testing to ensure a successful retrieval.”


The review also showed the extent to which historical memory paradigms depend on the subjects’ emotional memories. When it comes to human studies, though, designs almost always avoid studying emotional memories.

It’s clearly past time to Advance science by including emotion in research.

http://www.hindawi.com/journals/np/2016/3425908/ “Structural, Synaptic, and Epigenetic Dynamics of Enduring Memories”

The link between scientific value and content is broken at PNAS.org

Should we expect content posted on the Proceedings of the National Academy of Sciences of the United States of America to have scientific value?

This 2016 Singapore study was a “PNAS Direct Submission” that claimed:

“This paper makes a singular contribution to understanding the association between biological aging indexed by leukocyte telomere length (LTL) and delay discounting measured in an incentivized behavioral economic task.

LTL is an emerging marker of aging at the cellular level, but little is known regarding its link with poor decision making that often entails being overly impatient.”


1. Whether measured at the level of a human or of a blood cell, in 2016 there wasn’t incontrovertible evidence to support:

  • “Biological aging indexed by leukocyte telomere length”
  • “LTL is an emerging marker of aging at the cellular level”

Using an epigenetic clock to distinguish cellular aging from senescence found:

“Cellular ageing is distinct from cellular senescence and independent of DNA damage response and telomere length.”

If that study was too recent, the researchers and reviewer knew or should have known of studies such as this 2009 study that found the correlation between a person’s chronological age and blood cell telomere length was r = −0.51 in women and r = −0.55 in men.
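
For scale, squaring those reported correlations shows how little of the variance in chronological age telomere length actually accounts for:

```python
# Shared variance implied by the 2009 study's reported correlations
for group, r in (("women", -0.51), ("men", -0.55)):
    print(f"{group}: r = {r:+.2f} -> r^2 = {r**2:.2f} "
          f"({r**2:.0%} of age variance shared with telomere length)")
```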

2. A study of biological aging in young adults with limited findings was cited for evidence that “the seeds of biological aging are widely thought to be planted early in life.” That study didn’t elucidate the point, however, as it didn’t fully link its measurements of 38-year-old subjects with measurements taken during the subjects’ early lives.


3. Problematic research with telomere length was cited for evidence that “other factors, such as the early family environment, lifestyle, and stress, also have considerable impact on cellular aging.” The researchers had to be willing to overlook that study’s multiple questionable practices in order to cite it as evidence for anything.

4. Deliberately overlooking abundant disconfirming evidence, the current study used a one-to-one correspondence of telomere length and cellular aging.


The researchers went on to speciously model a relationship between telomere length and the behavioral trait “poor decision making that often entails being overly impatient.” That overreach was further stretched to the breaking point:

“We then asked if genes possibly modulate the effect of impatient behavior on LTL.

The oxytocin receptor gene (OXTR) polymorphism rs53576, which has figured prominently in investigations of social cognition and psychological resources, and the estrogen receptor β gene (ESR2) polymorphism rs2978381, one of two gonadal sex hormone genes, significantly mitigate the negative effect of impatience on cellular aging in females.”

The “significantly mitigate” finding was “fun with numbers” that produced false effects rather than solid evidence. Consider that:

  1. The study’s model disregarded the probability that “Cellular ageing is independent of telomere length.”
  2. The researchers provided no mechanisms that plausibly linked performance “in an incentivized behavioral economic task” with telomere length.
  3. The researchers didn’t demonstrate any causal mechanisms whereby two gene variants plausibly affected the task performance’s purported effect on telomere length.

What’s the real reason this poor-quality paper’s reviewer forwarded it to PNAS.org?

http://www.pnas.org/content/113/10/2780.full “Delay discounting, genetic sensitivity, and leukocyte telomere length”

A problematic study of oxytocin receptor gene methylation, childhood abuse, and psychiatric symptoms

This 2016 Georgia human study found:

“A role for OXTR [oxytocin receptor gene] in understanding the influence of early environments on adult psychiatric symptoms.

Data on 18 OXTR CpG sites, 44 single nucleotide polymorphisms, childhood abuse, and adult depression and anxiety symptoms were assessed in 393 African American adults. The Childhood Trauma Questionnaire (CTQ), a retrospective self-report inventory, was used to assess physical, sexual, and emotional abuse during childhood.

While OXTR CpG methylation did not serve as a mediator to psychiatric symptoms, we did find that it served as a moderator for abuse and psychiatric symptoms.”
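
In statistical terms, “served as a moderator” means the abuse-symptom association was modeled as depending on methylation level, i.e., an interaction term in a regression. Here’s a minimal sketch of that kind of model with invented data (only the sample size matches the study):

```python
# Minimal sketch of a moderation model with invented data (not the study's):
# symptoms ~ abuse + methylation + abuse:methylation, where a "significant"
# interaction coefficient is what gets reported as moderation.
import numpy as np

rng = np.random.default_rng(1)
n = 393                                   # same sample size as the study; values invented
abuse = rng.normal(size=n)                # CTQ-style abuse score (standardized)
methylation = rng.normal(size=n)          # OXTR CpG methylation (standardized)
symptoms = 0.3 * abuse + 0.1 * methylation + 0.2 * abuse * methylation + rng.normal(size=n)

# Ordinary least squares with an interaction (moderation) term
X = np.column_stack([np.ones(n), abuse, methylation, abuse * methylation])
coef, *_ = np.linalg.lstsq(X, symptoms, rcond=None)
print(dict(zip(["intercept", "abuse", "methylation", "abuse_x_methylation"], coef.round(2))))
```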

From the Limitations section:

  1. “Additional insight will likely be gained by including a more detailed assessment of abuse timing and type on the development of biological changes and adverse outcomes.
  2. The degree to which methylation remains fixed following sensitive developmental time periods, or continues to change in response to the environment, is still a topic of debate and is not fully known.
  3. Comparability between previous findings and our study is limited given different areas covered.
  4. Our study was limited to utilizing peripheral tissue [blood]. OXTR methylation should ideally be assessed in the tissues that are known to express OXTR and directly involved in psychiatric symptoms. The degree to which methylation of peripheral tissues can be used to study methylation changes in response to the environment or in association with behavioral outcomes is currently a topic of debate.
  5. Our study did not evaluate gene expression and thus cannot explore the role of study CpG sites on regulation and expression.”

Addressing the study’s limitations:

  1. Early-life epigenetic regulation of the oxytocin receptor gene demonstrated – with no hint of abuse – how sensitive an infant’s experience-dependent oxytocin receptor gene DNA methylation was to maternal care. Treating prenatal stress-related disorders with an oxytocin receptor agonist provided evidence for prenatal oxytocin receptor gene epigenetic changes.
  2. No human’s answers to the CTQ, Adverse Childhood Experiences, or other questionnaires will ever be accurate self-reports of their prenatal, infancy, and early childhood experiences. These early development periods were likely when the majority of the subjects’ oxytocin receptor gene DNA methylation took place. The CTQ self-reports were – at best – evidence of experiences at later times and places, distinct from earlier experience-dependent epigenetic changes.
  3. As one example of incomparability, the 2009 Genomic and epigenetic evidence for oxytocin receptor deficiency in autism was cited in the Introduction section and again in the Limitations section item 4. Since that study was sufficiently relevant to be used as a reference twice, the researchers needed to provide a map between its findings and the current study.
  4. Early-life epigenetic regulation of the oxytocin receptor gene answered the question of whether or not an individual’s blood could be used to make inferences about their brain oxytocin receptor gene DNA methylation. The evidence said: NO, it couldn’t.
  5. It’s assumed that oxytocin receptor gene DNA methylation directly impacted gene expression such that increased levels of methylation were associated with decreased gene transcription. The study assumed but didn’t provide evidence that higher levels of methylation indicated decreased ability to use available oxytocin due to decreased receptor expression. The study also had no control group.

To summarize the study’s limitations:

  1. The study zeroed in on childhood abuse, and disregarded evidence for more relevant factors determining an individual’s experience-dependent oxytocin receptor gene DNA methylation. That smelled like an agenda.
  2. The study used CTQ answers as determinants, although what happened during the subjects’ earliest life was likely when the majority of epigenetic changes to the oxytocin receptor gene took place. If links existed between the subjects’ early-life DNA methylation and later-life conditions, they weren’t evidenced by CTQ answers about later life that couldn’t self-report relevant experiences from conception through age three that may have caused DNA methylation.
  3. There was no attempt to make findings comparable with cited studies. That practice and the lack of a control group reminded me of Problematic research with telomere length.
  4. The researchers tortured numbers until they confessed “that CpG methylation may interact with abuse to predict psychiatric symptoms.” But there was no direct evidence that each subject’s blood oxytocin gene receptor DNA methylation interacted as such! Did the “may interact” phrase make the unevidenced inferences more plausible, or permit contrary evidence to be disregarded?
  5. See Testing the null hypothesis of oxytocin’s effects in humans for examples of what happens when researchers compound assumptions and unevidenced inferences.

The study’s institution, Emory University, and one of the study’s authors also conducted Conclusions without evidence regarding emotional memories. That 2015 study similarly disregarded relevant evidence from other research, and made statements that weren’t supported by that study’s evidence.

The current study used “a topic of debate” and other disclaimers to provide cover for unconvincing methods and analyses in pursuit of..what? What overriding goals were achieved? Who did the study really help?

http://onlinelibrary.wiley.com/enhanced/doi/10.1111/cdev.12493/ “Oxytocin Receptor Genetic and Epigenetic Variations: Association With Child Abuse and Adult Psychiatric Symptoms”



Does vasopressin increase mutually beneficial cooperation?

This 2016 German human study found:

“Intranasal administration of arginine vasopressin (AVP), a hormone that regulates mammalian social behaviors such as monogamy and aggression, increases humans’ tendency to engage in mutually beneficial cooperation.

AVP increases humans’ willingness to cooperate. That increase is not due to an increase in the general willingness to bear risks or to altruistically help others.”


One limitation of the study was that the subjects were all males, ages 19-32. The study’s title claimed “human risky cooperative behavior” while the sample omitted subjects representing the majority of humanity.

Although the researchers claimed brain effects from vasopressin administration, they didn’t provide direct evidence of the intranasally administered vasopressin reaching the subjects’ brains. A similar point was made about studies of vasopressin’s companion neuropeptide, oxytocin, in Testing the null hypothesis of oxytocin’s effects in humans.

A third limitation was that although the researchers correlated brain activity with social behaviors, they didn’t carry out all of the tests necessary to demonstrate the claimed “novel causal evidence for a biological factor underlying cooperation.” Per Confusion may be misinterpreted as altruism and prosocial behavior, the researchers additionally needed to recognize that:

“When attempting to measure social behaviors, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences. One also needs to manipulate these consequences to test whether this affects the behavior.”

http://www.pnas.org/content/113/8/2051.full “Vasopressin increases human risky cooperative behavior”

A problematic study of testosterone’s influence on behavior and brain measurements

This 2015 US/Canadian human study of people ages 6 to 22 years found:

“Testosterone-specific associations between amygdala volume and key prefrontal areas involved in emotional regulation and impulse control:

  1. Testosterone-specific modulation of the covariance between the amygdala and medial prefrontal cortex (mPFC);
  2. A significant relationship between amygdala-mPFC covariance and levels of aggression; and
  3. Mediation effects of amygdala-mPFC covariance on the relationship between testosterone and aggression.

These effects were independent of sex, age, pubertal stage, estradiol levels and anxious-depressed symptoms.

For the great majority of individuals in this sample, higher thickness of the mPFC was associated with lower aggression levels at a given amygdala volume. This effect diminished greatly and disappeared at more extreme amygdala values.”
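
For readers unfamiliar with the term, a “mediation effect” claim refers to decomposing the testosterone-aggression association into an indirect path through the brain measure and a leftover direct path. Here’s a minimal sketch of the product-of-coefficients approach with invented data (not the study’s data, and not necessarily its exact method):

```python
# Minimal sketch of a product-of-coefficients mediation analysis with invented
# data (not the study's): testosterone -> brain covariance measure -> aggression.
import numpy as np

def slope(x, y):
    """OLS slope of y on x (both centered)."""
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / (x @ x))

rng = np.random.default_rng(2)
n = 1000
testosterone = rng.normal(size=n)
brain_cov = 0.5 * testosterone + rng.normal(size=n)                     # path a
aggression = 0.4 * brain_cov + 0.1 * testosterone + rng.normal(size=n)  # paths b and c'

a = slope(testosterone, brain_cov)
total = slope(testosterone, aggression)

# Partial slopes of aggression on brain_cov (b) and testosterone (c') together
X = np.column_stack([np.ones(n), brain_cov, testosterone])
_, b, c_prime = np.linalg.lstsq(X, aggression, rcond=None)[0]

print(f"total effect = {total:.2f}, indirect (a*b) = {a*b:.2f}, direct (c') = {c_prime:.2f}")
```

None of this arithmetic makes the associations causal; it only partitions correlations.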

The study provided noncausal associations among the effects (behavioral, hormonal, and brain measurements).


From the Limitations section:

“No umbilical cord or amniotic measurements were available in this study and we therefore cannot control for testosterone levels in utero, a period during which significant testosterone-related changes in brain structure are thought to occur.”

There’s evidence that too much testosterone for a female fetus and too little testosterone for a male fetus both have lifelong adverse effects. The researchers dismissed this etiologic line of inquiry with a “supporting the notion” referral to noncausal studies.


The researchers were keen to establish:

“A very specific, aggression-related structural brain phenotype.”

This putative phenotype hinged on:

  • Older subjects’ behavioral self-reports, and
  • Parental assessments of younger subjects’ behavior

exhibited during the previous six months, and within six months of their MRI scan.

These self-reports and interested-party observations were the entire bases for the “aggressive behavior” and “anxious–depressed” associations! The researchers disingenuously provided multiple references and models for the reliability of these assessments.


Experimental behavioral measurements – such as those done to measure performance in decision studies – may have been more accurate and informative than what the older subjects chose to self-report about their own behavior over the previous six months.

People of all ages have an imperative to NOT be completely honest about their own behavior. One motivation for this condition is that some of our historical realities are too painful to enter our conscious awareness and inform us about our own behavior. As a result, our feelings, thoughts, and behavior are sometimes driven by our histories without us being aware of it.

For example, would a teenager/young adult subject self-report an impulsive act, even if they didn’t fully understand why they acted that way? Maybe they would if the act could be viewed as prosocial, but what if it was antisocial?

What are the chances that the lives of these teenager/young adult subjects were NOT filled with impulsive actions during the six months before their MRI scans? Could complete and accurate self-reports of such behaviors be expected?

Experimental behavioral measurements may have also been more accurate and informative than second-hand, interested-party observations of the younger subjects. Could a parent who provided half of the genes and who was responsible for many of their child’s epigenetic changes make anything other than subjective observations of their handiwork’s behavior?


Epigenetic studies have shown that adaptations to environments are among the long-lasting causes for effects that include behavior, hormones, and brain measurements. Why, in 2015, did researchers spend public funds developing what they knew or should have known would be noncausal associations, while not investigating possible causes for these effects?

Why weren’t the researchers interested enough to gather and assess etiologic genetic and epigenetic evidence? Was it that difficult to get blood samples at the same time the subjects gave saliva samples, and perform selected genetic and DNA methylation analyses?

What did the study contribute towards advancing science? Who did the study really help?

My judgment: less than nothing; and nobody. The researchers only wasted public funds advancing a meme, giving it an imprimatur of science.

http://www.psyneuen-journal.com/article/S0306-4530%2815%2900924-5/fulltext “A testosterone-related structural brain phenotype predicts aggressive behavior from childhood to adulthood”

A problematic study of beliefs and dopamine

This 2015 Virginia Tech human study found:

“Dopamine fluctuations encode an integration of RPEs [reward prediction errors, the difference between actual and expected outcomes] with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been.

How dopamine fluctuations combine the actual and counterfactual is unknown.”
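
As a minimal sketch of the two error signals being discussed (the numbers and the specific counterfactual formula are illustrative assumptions, not the study’s task or model), a reward prediction error compares what you got with what you expected, while a counterfactual error compares what you got with the best forgone alternative:

```python
# Minimal sketch of the two error signals (invented numbers; the counterfactual
# formula is one common formalization, not necessarily the study's).
def reward_prediction_error(actual, expected):
    # RPE: actual outcome minus expected outcome
    return actual - expected

def counterfactual_prediction_error(actual, forgone_outcomes):
    # Actual outcome minus the best alternative not taken:
    # negative when the outcome could have been better (regret),
    # positive when it could only have been worse (relief).
    return actual - max(forgone_outcomes)

expected, actual = 5.0, 8.0
forgone = [12.0, 3.0]   # outcomes of the options not chosen

rpe = reward_prediction_error(actual, expected)           # +3.0: better than expected
cpe = counterfactual_prediction_error(actual, forgone)    # -4.0: could have been 4 better
print(f"RPE = {rpe:+.1f}, counterfactual PE = {cpe:+.1f}")
```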

From the study’s news coverage:

“The idea that “what could have been” is part of how people evaluate actual outcomes is not new. But no one expected that dopamine would be doing the job of combining this information in the human brain.”

Some caveats applied:

  • Measurements of dopamine were taken only from basal ganglia areas. These may not act the same as dopamine processes in other brain and nervous system areas.
  • The number of subjects was small (17), they all had Parkinson’s disease, and the experiment’s electrodes accompanied deep brain stimulation implantations.
  • Because there was no control group, findings of a study performed on a sample of people who all had dysfunctional brains and who were all being treated for neurodegenerative disease may not apply to a population of people who weren’t similarly afflicted.

The researchers didn’t provide evidence for the Significance section statement:

“The observed compositional encoding of “actual” and “possible” is consistent with how one should “feel” and may be one example of how the human brain translates computations over experience to embodied states of subjective feeling.”

The subjects weren’t asked for corroborating evidence about their feelings. Evidence for “embodied states of subjective feeling” wasn’t otherwise measured in studied brain areas. The primary argument for “embodied states of subjective feeling” was the second paragraph of the Discussion section where the researchers talked about their model and how they thought it incorporated what people should feel.

The study’s experimental evidence didn’t support the researchers’ assertion – allowed by the reviewer – that the study demonstrated something about “states of subjective feeling.” That the model inferred such “findings,” along with the researchers’ statement that it “is consistent with how one should ‘feel’,” reminded me of a warning in The function of the dorsal ACC is to monitor pain in survival contexts:

“The more general message you should take away from this is that it’s probably a bad idea to infer any particular process on the basis of observed activity.”


The same researcher who hyped An agenda-driven study on beliefs, smoking and addiction that found nothing of substance was back again with statements such as:

“These precise, real-time measurements of dopamine-encoded events in the living human brain will help us understand the mechanisms of decision-making in health and disease.”

It’s likely that repeated hubris is one way researchers respond to their own history and feelings, such as their need to feel important as mentioned on my Welcome page.

The Parkinson’s patients were willing to become lab rats with extra electrodes that accompanied brain implantations to relieve their symptoms. Findings based on their playing a stock market game didn’t inform us about “mechanisms of decision-making in health and disease” in unafflicted humans. As one counter example, what evidence did the study provide that’s relevant to healthy humans’ decisions to remain healthy by taking actions to prevent disease?

The unwarranted extrapolations revealed a belief that the goal of research should be to explain human actions by explaining the actions of molecules. One problem caused by the preconceptions of this widespread belief is that it leads to study designs and models that omit relevant etiologic evidence embedded in each of the subjects’ historical experiences.

This belief may have factored into why the subjects weren’t asked about their feelings. Why didn’t the study’s design consider subject-provided evidence for feelings as relevant? Because the model already contrived explanations for feelings underlying the subjects’ actions.

http://www.pnas.org/content/113/1/200.full “Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward”