Unreliable research findings are widespread and can lead to poor health and spending choices, according to an article co-authored by Stephen Soumerai, Harvard Medical School professor of population medicine at the Harvard Pilgrim Health Care Institute, published June 25 in the journal Preventing Chronic Disease. Though there are many reasons research misses the mark, faulty study design is often at the heart of the problem.
The article analyzes five case studies that show how some of the most common biases and flawed study designs impact research on health policies and interventions.
Each case is followed by examples of weak study designs that cannot control for bias, stronger designs that can, and the unintended clinical and policy consequences that may result from exaggerated reporting of positive findings from poorly designed studies.
Soumerai discussed the paper and the challenges of research design with Harvard Medicine News.
HMN: Why is study design so important in health care effectiveness research?
SS: Many studies of clinical treatments and health care policies do not really prove the cause-and-effect relationships that they claim.
All too often early studies of new treatments show dramatic positive health effects that diminish, disappear or even reverse direction as more rigorous studies are conducted.
These early findings also make great stories for journalists, since the exaggerated results make a compelling narrative about the power of innovations and reform to improve health.
The result: mistaken conclusions that often lead to wasteful or even harmful public policies.
Our work, based on decades of research conducted by ourselves and others, examines the systematic errors in research design that have led to such mistakes and offers some ways to avoid them. We hope it will be useful to the public, policymakers, research trainees and journalists.
HMN: What kind of impact can bad study design have on health policy?
SS: The most credible systematic reviews commonly exclude from evidence 50 to 75 percent of published studies because they do not meet the basic research design standards required to yield trustworthy conclusions. In many such studies, researchers need to statistically manipulate the data to “adjust for” irreconcilable differences between intervention and control groups. Yet it is these very differences that often create the reported but invalid effects of the health services or policies that were studied.
HMN: So, policymakers and journalists don’t use those same guidelines of reliability before they decide whether to discuss or act on research findings?
SS: The problem has become recognized as so widespread that there are even media websites dedicated to exposing these issues on a daily basis.
The flawed results of such weak studies led to the premature adoption of unproven health information technologies, resulting in trillions of dollars in waste. Similarly, weak studies of popular hospital safety programs claiming to have saved hundreds of thousands of lives led to the widespread adoption of ineffective initiatives.
HMN: What’s another example?
SS: One example is the nationwide campaign to vaccinate all elderly people against influenza. Clearly, flu vaccines can sometimes prevent the symptoms of flu. But the campaign rests on the assumption that flu shots lower rates of mortality and hospitalization in older people, the group at highest risk of dying or being hospitalized during flu season.
Poorly designed studies compared healthy users of flu vaccines with unhealthy non-users—people who were already too sick to get the shot—and attributed the differences in mortality to the vaccines, instead of the patients’ pre-existing health status.
Just imagine comparing two people of the same age: one eats well, exercises regularly, takes all of her medications as prescribed, has health insurance and sees a physician regularly; the other is overweight, sedentary, dislikes pills, has partial insurance coverage and gets most of her care at urgent care clinics or a hospital emergency department. The first gets vaccinated every year; the second does not. Is it any surprise that the first is less likely to die or be hospitalized during the cold winter months?
Indeed, a strong longitudinal study showed that the fourfold increase in flu vaccination among elderly Americans over the last few decades has had no effect on mortality.
Even more convincingly, a series of clever “replications” of the flawed flu studies found the same “lowered mortality” after flu season, when the vaccine couldn’t possibly have affected death rates because no one in the population was dying of the flu!
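To make the healthy-user problem concrete, here is a minimal toy simulation, a sketch of the bias rather than data or code from the paper (the population size, probabilities and variable names are invented for illustration). The vaccine's true effect on death is set to zero, yet a naive comparison of vaccinated and unvaccinated people still "shows" roughly a halving of mortality, both during flu season and in an off-season window when the vaccine cannot act.

```python
import random

random.seed(0)

# Toy population (illustrative only): "healthy" people are both more likely
# to get the flu shot and less likely to die, regardless of the vaccine.
N = 100_000
people = []
for _ in range(N):
    healthy = random.random() < 0.6                            # baseline health: the confounder
    vaccinated = random.random() < (0.8 if healthy else 0.3)   # healthy users get the shot more often
    # The true vaccine effect on death is ZERO in both windows.
    died_flu_season = random.random() < (0.02 if healthy else 0.08)
    died_off_season = random.random() < (0.02 if healthy else 0.08)  # flu plays no role here
    people.append((vaccinated, died_flu_season, died_off_season))

def death_rate(group, idx):
    return sum(p[idx] for p in group) / len(group)

vaccinated_group = [p for p in people if p[0]]
unvaccinated_group = [p for p in people if not p[0]]

for label, idx in [("flu season", 1), ("off season", 2)]:
    rr = death_rate(vaccinated_group, idx) / death_rate(unvaccinated_group, idx)
    print(f"{label}: naive relative risk of death, vaccinated vs. unvaccinated = {rr:.2f}")

# Both ratios come out near 0.5: the vaccinated appear to have half the risk
# of dying even in the off-season window, where the shot cannot possibly act.
# The entire apparent benefit is created by the healthy-user confounder.
```

Checking for an "effect" in a period when the exposure cannot plausibly work is exactly the kind of falsification test the off-season replication studies used to expose the bias.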
HMN: So what’s the solution?
SS: I coauthored the article with Douglas Starr, a journalist and co-director of the graduate program in science and medical journalism at Boston University, and Sumit Majumdar, a physician and professor of medicine and endowed chair in patient health management at the University of Alberta, Canada.
I think it was important to collaborate with a journalist because this is not a problem that scientists can solve by themselves. More important, I wanted to ensure that our messages were conveyed in a way that lets all readers take away what really matters.
Researchers, reporters and the editors of journals all have a role to play in getting research design right. Sometimes putting the evidence first means skipping the sexy headline or rejecting a study that supports a popular program with weak evidence.