A troubling new trend is arising in published science. Why aren’t the conclusions matching the data? Are the authors trying to tell us something important?
It was January 2020, the very beginning of COVID, when news articles began appearing that connected the genetics of the virus with gain-of-function research on bat coronaviruses at the Wuhan Institute of Virology.
These speculations were put to rest by an authoritative statement in the prestigious journal Nature Medicine, echoed by a summary in Science and an unusual open letter in the Lancet signed by an impressive list of prominent scientists.
The message in the Nature Medicine article was dispositive: “Our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus.”
But where was the support for this confident conclusion in the article itself?
The 2,200-word article in Nature Medicine (Andersen et al.) contained a great deal of natural history and sociological speculation, but only one tepid argument against laboratory origin: that the virus’s spike protein was not a perfect fit to the human ACE2 receptor.
The authors expressed confidence that any genetic engineers would certainly have computer-optimized the virus in this regard, and since the virus was not so optimized, it could not have come from a laboratory. That was the full content of their argument.
Most readers, even most scientists, take in the executive summary of an article and do not wade through the technical details. But for careful readers of the article, there was a stark disconnect between the CliffsNotes and the novel — between the article’s succinct (and specious) conclusion and its detailed scientific content.
This was the beginning of a new practice in the write-up of medical research. Recent revelations in the Fauci/Collins emails shed light on the origins of this tactic and the motives behind it.
In the past, if a company wanted, for example, to make a drug look more effective than it really was, it would choose a statistical technique that masked its downside, or it would tamper with the data.
What companies would not do, in the past, was describe the results of a statistical analysis that proves X is false, then publish it with an Abstract that claims X is true.
But this strange practice has become more common in the last two years. Academic papers are being published in which the abstract, the discussion section and even the title flatly contradict the content within.
Why is this happening? There are at least three possibilities:
- The authors cannot understand their own data.
- The authors are being pressured by editorial staff to arrive at conclusions that match the ascendant narrative.
- The authors and editors realize the only way to get their results into publication is to avoid a censorship net that gets activated by any statement critical of vaccination efficacy or safety.
Before reaching any conclusions, let’s take a closer look at some examples of this troubling phenomenon arising in what should be the bedrock of scientific knowledge: published data.
In this article, we present five published studies, each of which exemplifies, to varying degrees, a disconnect between the data and the conclusions.