Replicability advocate John Ioannidis might be a bad actor

[Note: this was published originally in May 2020, but didn’t get migrated to the new site right away]

When you publish a finding titled “Why most published research findings are false,” your report is likely to have two major effects. The first is to encourage scientists to perform their research carefully and rigorously to ensure robust, reliable conclusions. The second is to provide a touchpoint for a general anti-science agenda, supporting those who want to push dangerous, self-interested ideas and need to be able to say “don’t listen to science or scientists, listen to me.”

Like a lot of Psychology departments, ours assumed the research was driven by the first idea and has done a lot of self-study and introspection to see what we can do to improve research processes. I have frequently been somewhat bemused by this, as we have always had a strong commitment to rigorous science in my lab, and my impression is that the same is true of all of my colleagues here at Northwestern.

I have become more concerned about the persistent apparent misunderstanding associated with the phrase “fails to replicate.” We all know from basic statistics that this does not mean “false.” When a study does not achieve the standard statistical threshold of p < .05 to reject the null hypothesis, it means the study didn’t work. Technically, it means the magnitude of the effect the study tried to measure was not robustly larger than the measurement error. A “false” hypothesis means the effect size is truly exactly zero. “Fails to replicate” doesn’t mean we are sure the effect was zero, only that it is probably smaller than we hypothesized when the study was designed. A study with “power” to detect a 0.4 effect size won’t reliably find a 0.2 effect size, even though a 0.2 effect size is not zero and is often meaningful. And power calculations are probabilistic (80% power means 20% of rigorous studies don’t work) and require precise estimates of both the magnitude and the variance of your measures, which are based on previous studies and may be imprecise, especially in a new, relatively unstudied research area.
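To make that arithmetic concrete, here is a minimal sketch (assuming Python with the statsmodels package; the effect sizes and thresholds are the hypothetical ones from the paragraph above, for a standard two-sample t-test):

```python
# Minimal power sketch (assumes Python + statsmodels; hypothetical effect sizes).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group sample size needed for 80% power to detect d = 0.4 at alpha = .05
n_per_group = analysis.solve_power(effect_size=0.4, power=0.8, alpha=0.05)
print(f"n per group for d = 0.4: {n_per_group:.0f}")  # about 99

# Power of that same design if the true effect is actually d = 0.2
power_small = analysis.solve_power(effect_size=0.2, nobs1=n_per_group, alpha=0.05)
print(f"power to detect d = 0.2: {power_small:.0%}")  # roughly 29%
```

So a “failed replication” of a true 0.2 effect with a design powered for 0.4 is the expected outcome roughly seven times out of ten, even though the effect is real.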

Nothing in the above paragraph is controversial or revolutionary. It’s basic statistics that all scientists learn in their first stats class. But if you conflate “fails to replicate” with “false,” as in the title of Ioannidis’s paper, you risk misleading a large segment of the non-scientist community that is not trained in these ideas. Maybe it was just an accident, or a slightly sensationalized title to draw attention to the issue. Or maybe not.

Which is why this report from Buzzfeed (excellently sourced; you can check their links) about a recent study from Stanford with Ioannidis as a co-author is of particular interest. The paper claims COVID-19 is not as dangerous as previously thought because many more people have been exposed to it (i.e., the asymptomatic rate is potentially 50x higher than previously thought). That would be very important science, if true, so we’d want it to meet very rigorous standards. But…

  • One of the co-authors was so concerned about weak methodology that she refused to be associated with the paper. The conclusion depends on a test for the presence of COVID antibodies that has a very high false positive rate (potentially dramatically overestimating the number of asymptomatic cases; see the sketch after this list). Furthermore, she was so concerned about the impact of the paper that she filed a complaint with the Stanford research compliance office.
  • The manuscript was released to the public through a preprint mechanism, leading to headlines in news media all over the world starting on April 17th, before the manuscript had received any peer review at all.
  • Ioannidis appeared on Fox News a few days after the non-peer-reviewed preprint was released, telling the audience that the COVID virus was much less dangerous than previously thought. His arguments were then echoed around the world by those arguing to lift movement and travel restrictions.
  • The founder of the airline JetBlue was found to have supported the research through a directed donation to Stanford; he was unacknowledged on the manuscript but was in constant email contact with the authors throughout the scientific and publication process.
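On the antibody-test concern in the first bullet, here is a quick sketch of why even a modest false positive rate swamps a low true prevalence. The specific numbers (1% true prevalence, 90% sensitivity, 98.5% specificity) are illustrative assumptions, not figures from the Stanford study:

```python
# Illustrative only: hypothetical test characteristics, not the study's numbers.
def apparent_prevalence(true_prev, sensitivity, specificity):
    """Fraction of a sample expected to test positive."""
    true_positives = true_prev * sensitivity
    false_positives = (1 - true_prev) * (1 - specificity)
    return true_positives + false_positives

observed = apparent_prevalence(true_prev=0.01,    # assume 1% truly infected
                               sensitivity=0.90,  # test catches 90% of true cases
                               specificity=0.985) # i.e., a 1.5% false positive rate
print(f"observed positive rate: {observed:.1%}")  # ~2.4%, more than double the truth
```

When true prevalence is that low, most of the positives are false positives, so the inferred infection count (and therefore any “lower fatality rate” conclusion) is extremely sensitive to the test’s specificity.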

This is all, of course, a textbook ‘worst case scenario’ for non-rigorous science with the potential for broad and highly damaging impact. Ioannidis is quoted in the article describing the results as preliminary but “the best we can do,” and saying that his work is driven by data, not politics (“I’m just a scientist”).

As a scientist with long experience and training in drawing conclusions from data, looking at this and other peculiarities, I’m going to propose another hypothesis: the concern about the Replicability Crisis in psychology (and science broadly) is at least partly being driven by people with an anti-science agenda who want to de-emphasize the value of science in effective, accurate public policy.

When you promote this agenda, even in a well-meaning effort to improve practices, you may be accidentally furthering the cause of people who want to, for example, sell you hydroxychloroquine (snake oil) or claim drinking bleach will cure you of anything.

Instead, you can simply continue to do your science rigorously. Replicate findings that you think are important — we do a lot of “replicate and extend” in my lab; it’s a pretty standard technique. Don’t rely on splashy new unexpected findings in a new research domain or methodology — we describe those as “cool if true” and wait for the second study. Think about what your science will mean to people outside the scientific community as well as within it. And if somebody asks you, suggest that journalists use phrases like “preliminary studies suggest” for that cool new result instead of “scientists say” (or worse, “a new study proves”).

Illinois Primary Day March 17


If you are attending the meeting of the Cognitive Neuroscience Society in Boston this year, you will be out of town for the IL primary that Tuesday. If you are registered in Evanston, you can vote early at the Civic Center starting on March 2.

If you are unfamiliar with voting in Evanston/Chicago, the election process is handled through Cook County and here is where you can get information about what will be on your ballot before heading to the polls.

One of the interesting things about local elections is that there may be a handful of offices and candidates you don’t know much about. For example, judges are elected and unless you happen to know somebody, it can be unclear how to use your vote well. There are both nonpartisan and partisan organizations that will provide information, endorsements and recommendations so that you can inform yourself easily and rapidly before voting (not linked here).

Voting is a straightforward and quick process here, especially early voting. Highly recommended.


One damned thing after another

Life is just one damned thing after another.

Elbert Hubbard
US author (1856 – 1915)

This quote popped to mind as I was thinking about how to describe why sequence learning is a fundamental cognitive operation.  Life is sequential!

I had to google it to figure out who said it.  I didn’t actually recognize the name, but he’s an awesome quote machine (via quotationspage.com).  And an interesting life story, too (Wikipedia link).

I’m still unsure how I knew the quote, since the author’s name is completely unfamiliar. I recognized a lot of his other quotes as well.