Error and Fraud
AIMS, ORIGINS AND STRUCTURE OF THIS BOOK
For many years, I have been writing about major scientific mistakes, such as the four examples below, which are discussed more fully in :
The mistaken belief that protein deficiency was the most important cause of world malnutrition because of a massive deficit in world protein supply, the so-called protein gap.
The promotion of front sleeping for babies that led to large worldwide increases in cot death rates in the 1970s and 1980s.
The belief that a defect in the heat-generating capacity of brown fat was a major cause of human obesity and that drugs that stimulated brown fat might be a viable treatment for human obesity.
The belief that antioxidant supplements, when taken by well-nourished adults, would reduce cancer and heart disease and so increase life expectancy.
A common feature of these errors has been uncritical and unjustified extrapolation from findings at a low level in the evidence hierarchy, such as:
Assuming that an epidemiological association is due to a cause and effect relationship
Assuming that a favourable change in some biochemical risk marker will inevitably lead to reduced disease risk or increased life expectancy
Prematurely applying the results from small animal studies to people
Extrapolating suggested benefits for a small high-risk group to the whole population
In recent years, evidence-grading hierarchies have been developed. Normally, changes in clinical practice or health policy should be made only if there is clear supportive evidence at the highest levels of the evidence hierarchy or pyramid. Rigorous application of this system would have prevented most of the practical consequences of these past errors. In , I briefly review the observational and experimental methods available to scientists in the biomedical sciences and discuss the strengths and limitations of these various lines of enquiry. I also discuss how results from this variety of investigative approaches can be integrated and graded to optimise the chances of making correct scientific, clinical and policy judgements. Meta-analysis has become a very popular technique for seeking a consensus from similar studies with common outcomes, and meta-analyses of controlled trials sit at the top of the evidence hierarchy. Meta-analysis involves a weighted amalgamation of similar studies, so it is prone to distortion by large or multiple fabricated trials; this reinforces the importance of identifying false or fabricated data and removing it from the scientific record.
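The vulnerability of that weighted amalgamation can be illustrated with a minimal sketch. The code below implements the simplest fixed-effect, inverse-variance pooling (the study numbers are invented purely for illustration, not taken from any real meta-analysis), and shows how a single large, low-variance fabricated trial can drag the pooled estimate far away from the consensus of several genuine studies:

```python
# Minimal fixed-effect meta-analysis: each study's effect estimate is
# weighted by the inverse of its variance, so large (low-variance)
# studies dominate the pooled result.

def pooled_effect(studies):
    """studies: list of (effect_estimate, variance) tuples."""
    weights = [1.0 / var for _, var in studies]
    total = sum(weights)
    return sum(w * eff for (eff, _), w in zip(studies, weights)) / total

# Five genuine small trials, each finding essentially no effect.
genuine = [(0.02, 0.04), (-0.01, 0.05), (0.00, 0.04),
           (0.03, 0.05), (-0.02, 0.04)]

# One large fabricated trial claiming a strong effect with tiny variance.
fabricated = (0.50, 0.002)

print(round(pooled_effect(genuine), 3))                 # 0.003 (near zero)
print(round(pooled_effect(genuine + [fabricated]), 3))  # 0.407 (pulled toward 0.5)
```

Because the fabricated trial's weight (1/0.002 = 500) exceeds the combined weight of all five genuine trials, one dishonest dataset dominates the pooled estimate, which is exactly why fabricated trials must be identified and removed from the record before synthesis.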
IS THERE A SYSTEMIC PROBLEM WITH THE SOUNDNESS OF PUBLISHED RESEARCH?
In , I argue that such errors are not just isolated cases but are symptomatic of a more general problem with the soundness of much scientific research. I review some of the problems with the design, execution and analysis of scientific studies that may make their results and conclusions unreliable. A major factor undermining confidence in published research is the lack of reproducibility of much of it, or indeed the lack of any attempt at reproduction. Despite this, many scientists retain great faith in the traditional belief that incorrect or fraudulent science will quickly be detected when other scientists are unable to reproduce it; the evidence suggests that this faith may not be wholly justified.
The pressure to generate research papers has led to an avalanche of publications that are largely unread. Many of these papers are of low quality and appear in journals with low or very low thresholds of acceptance. Much of this research has little obvious potential to improve scientific understanding, healthcare or health advice. An improbable claim based on statistically weak or flawed evidence may generate a succession of similar papers oscillating between supporting and refuting it. For example, weak evidence of an association between eating dairy products such as yogurt and ovarian cancer risk has helped to spawn scores of follow-up papers over several decades that have not advanced our understanding of the causes of ovarian cancer or our ability to make recommendations for reducing it. This is discussed more fully in , where I come to a conclusion on this question and argue that further similar research is unlikely to change it.
MY PERSONAL JOURNEY FROM ERROR TO FRAUD
Error and fraud may seem like two quite distinct issues:
The largely honest production of flawed data, or the misinterpretation of data, in support of a false hypothesis.
As opposed to:
The wilful fabrication of data, or the manipulation of real data, to convince others of the correctness of a hypothesis.
My interest in both error and fraud was sparked by personal involvement. My doctoral research project was part of a programme to develop an alternative fungal protein source that could be produced industrially and so help alleviate the perceived large and growing shortage of protein for human consumption: the protein gap. I was shocked when I later discovered that this protein gap had probably never existed. My later research involved the use of genetically obese mice. The notion that a defect in the heat-generating system in brown fat might be the primary cause of obesity in these mice, and that defective thermogenesis might be an important cause of human obesity, became briefly fashionable at this time (late 1970s/early 1980s). Our observation that mice could lower their body temperature and become torpid when fasted led us to suggest that the well-documented persistent mild hypothermia of these mice and their intolerance to sudden cold exposure were not failures of thermogenesis but manifestations of an adaptive, energy-conserving response to perceived starvation. Their genetic defect is now known to leave their brains unable to detect their huge fat stores, so they respond as if in a permanent state of starvation by entering a permanent semi-torpid state. The defective brown fat theory of human obesity was the result of an incorrect interpretation of research observations in mice and its inappropriate application to people.
I first became conscious of research fraud when I discovered that Ranjit K Chandra, whose publications I had cited in several of my books and papers, had been accused of fabricating his data. I had also cited a Nature paper by Jatinder Ahluwalia to support my case against the likely benefits of antioxidant supplements. Ahluwalia had become a colleague by the time news first broke that he had been accused of research fraud, and I subsequently became involved in efforts to persuade my employer to take action against him. At around this time, my daughter Kate was taking an MA in publishing at University College London and, because of my frequent discussions (ranting?) about research fraud, she chose to write about an aspect of the subject for her dissertation. It was her research, and her discussions with me, that helped convert what had been a general interest, marked by sporadic bouts of reading about individual cases, into the more systematic study of fraud and its causes and consequences that eventually led to this book. She made me aware of many more cases and showed me that there is a substantial body of academic literature dealing with various aspects of research fraud. She also introduced me to organisations that deal with research fraud, such as the Office of Research Integrity (ORI) in the USA and the UK Research Integrity Office (UKRIO).