That many scientific fields are rife with error, bias and outright fraud may be unsurprising given the succession of news stories about prominent psychologists faking results and pop-psychology gurus fooling gullible Conservative politicians into ‘power posing’ at party conferences. But Stuart Ritchie’s Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science, published this month, demonstrates that similar problems exist throughout virtually all the sciences, including ‘hard’ sciences such as chemistry and medicine, where errors can have deadly consequences.
Psychology, Ritchie’s own field, has been particularly badly hit. A so-called ‘replication crisis’ has run through the field, with psychologists unable to repeat the results of key experiments, including a famous one on the ‘priming effect’, in which researchers were apparently able to influence people’s behaviour simply by showing them the right words – subjects shown words associated with elderly people (old, grey, wise, knits, and Florida) were reported as having walked more slowly out of the lab than the control group. The Stanford Prison Experiment, the well-known role-play experiment which purportedly showed people’s willingness to abuse power, was recently exposed as being virtually fraudulent, with its designer egging the guards on to play up to their roles, rendering the results of the whole exercise useless.
Studies like these formed the basis for some of the most hyped-up social science of the 21st century. Two of the most popular and influential books of the past twenty years, Thinking, Fast and Slow by Nobelist Daniel Kahneman and Nudge by Cass Sunstein and Nobelist Richard Thaler, both cite dozens of papers that either failed to replicate or should have been too small to count as real evidence in the first place. The fact that behavioural economics sits on a throne of lies – weak, unreplicable or even downright fraudulent research – has not yet reached many policymakers, who still think of things like priming effects and ‘choice architecture’ as the hottest show in town.
But psychology, by now, is a soft target. Ritchie’s four cardinal sins – fraud, bias, negligence and hype – are found across the sciences. Fraud is staggeringly common: of all papers retracted for any reason, more than half are retracted for consciously unethical behaviour like fraud or plagiarism, rather than because some accidental error has been found. Bias – which leads researchers to cherry-pick their data and contort their hypotheses to generate interesting results – is such a problem that only 13% of studies in the five highest-impact medical journals reported everything they had said they would in advance.
Negligence with things like data and samples has led errors to pervade entire fields. One example is the study of cell lines: cultures of animal cells that can be used in place of primary cells for biological and medical research, and which are essentially immortal, so they can serve as the basis for experiments over long periods of time. Here, mislabeled samples and cluttered labs have led to thousands of errors, with scientists thinking they were working with, say, human bone cancer cells when they were actually working with cells from a pig’s colon. This problem has been so widespread that a 2017 analysis found that a whopping 32,755 studies had been published using so-called imposter cells, with over half a million further published studies citing those misidentified experiments.
The most surprising thing about Ritchie’s fourth sin, hype, is that it is the scientists themselves – not journalists – who are often responsible. Scientists are frequently heavily involved in writing the press releases about their own papers, and researchers have found that if the press release hyped up a finding, there was a much greater chance of the media exaggerating it too. Press releases that gave advice (for example, about what foods to avoid to lower your risk of cancer), that made causal claims about a correlational finding, or that reported findings generated in lab rats as having clear implications for humans were vastly more likely to be ‘sexed up’ in the media than press releases that were more circumspect. For once, it is not the journalists who are to blame.
As Ritchie is a friend of mine, I will not commit one of the sins he identifies (scientists referring their papers to their friends for peer review) by pretending to review the book neutrally. In my own, biased opinion, it is an enjoyable, informative and important book about the problems faced by modern science. Beyond its devastating assessment of the sciences, the book is extremely useful: Ritchie takes the time to clearly explain the concepts and jargon used in debates about scientific accuracy – p-values, statistical power, h-indexes – so that any intelligent layperson can understand them, and lays out with clear, simple diagrams how to visualise and understand problems like p-hacking and publication bias. The book even includes a short appendix, ‘How to read a scientific paper’, that should be required reading for every student and media pundit.
But the problems Ritchie identifies require more than people simply understanding them. They are endemic to the whole structure of science, driven in large part, he says, by the ‘publish or perish’ culture in academia that gives even well-intentioned, truth-seeking scientists incentives to cook the books.
What, then, should we do about it? Ritchie offers a reform agenda to fix science based on greater openness and transparency: more pre-registration of scientific research, so that even boring null findings get reported; and more support for replications of papers, including an expectation that if eye-catching papers subsequently fail to replicate in further studies, the top journals like Nature, Science and The Lancet that printed the originals will also carry the new demonstrations that there was less to them than first appeared. (Ritchie’s own experience of this is instructive: after he produced a paper demonstrating that a much-discussed paper claiming to show the existence of telepathic powers could not be replicated in other circumstances, the prestigious journal that had carried the original paper dismissed his replication out of hand.)
For policymakers in Britain and the United States, there is a clear opportunity to make this ‘meta-science’ a priority, with greater funding to study the effectiveness of different interventions like the ones Ritchie proposes, so that we can experiment with science itself to figure out how to make it work better. Support for the broader ‘Open Science’ movement, which seeks to free research from academic publishers’ paywalls and to make the public sharing of raw data and other study results the norm, may give the countries that prioritise this a competitive edge in developing new research.
Improving our own regimes may have a double benefit if paired with reforms to make our countries more science-friendly overall. Though he does not dwell on it, Ritchie occasionally mentions problems in countries like China and India where vast numbers of exceptionally talented scientists face even worse problems than our own, including sometimes being explicitly required to engage in fraud for prestige reasons. This is bad for all of us.
One of the Johnson government’s first announcements was to create a fast-track visa scheme for foreign scientists – expanding this so that anyone with a PhD in the sciences and a relevant job offer could come with their families and settle in the UK would be one way of giving more of the world’s researchers the benefit of our institutions, however flawed they may be. Make it easier for qualified scientists to move to Britain and the United States, and the whole world may benefit from increased and improved scientific output.
A complement to this would be to prioritise the liberalisation of urban planning laws in places like Oxford, Cambridge and Silicon Valley, where enormous demand for housing and insufficient supply have driven house prices so high that many researchers simply cannot afford to move there, even if they are not stopped by the immigration system.
A final lesson for policymakers is to view scientific advice with a little more scepticism, especially when it comes from fields like psychology and nutrition that, as Ritchie demonstrates, are permeated with shoddy research based on poorly-controlled ‘observational’ studies. We now know that decades of nutritional advice about avoiding cholesterol and saturated fat was probably misguided, potentially causing malnutrition since high-cholesterol foods like eggs and oily fish also contain valuable nutrients like choline and Omega 3 fatty acids.
But the lesson has not been to be more cautious and humble when trying to direct how people eat from the top down. This week the British government launched yet another anti-obesity drive which will, apparently, involve forcing supermarkets to ‘hide’ supposedly-unhealthy foods that are high in sugar and fat. I leave it to the reader to speculate on the robustness of the scientific evidence behind such moves. The signs from Science Fictions are bleak.