Authors

  1. BROOME, MARION E. PhD, RN, FAAN

Article Content

Integrity without knowledge is weak and useless; knowledge without integrity is dangerous and dreadful. (Samuel Johnson, 1709-1784)

 

The tale of scientific misconduct (SM) perpetrated by Dr Scott Reuben of Baystate Medical Center, which resulted in the retraction of 21 manuscripts reporting the effectiveness of various anesthetic and analgesic interventions based on fabricated data, is astounding. This kind of behavior, although fortunately relatively uncommon,1 shocks and dismays anyone and produces very real ethical and moral distress in healthcare professionals. Clinicians have relied on the data in these papers to develop guidelines and protocols that guide their practice with patients, in this case patients in pain. And these same clinicians find themselves asking over and over: Why would a well-respected, knowledgeable, and successful professional engage in SM that could so clearly result in harm to patients?

 

The definition of SM, according to the Office of Research Integrity at the US Department of Health and Human Services, is "fabrication, falsification or plagiarism in proposing, performing or reviewing research, or in reporting of research results."2 Factors speculated to "cause" individuals to engage in SM include the need for recognition and advancement, conflicts of interest (usually associated with financial gain), competition, pressure for promotion and tenure, poor mentoring, and psychiatric illness.1 Although many ask the "why" question when reading about these sensational cases, it is even more important to ask how this author fooled editors and peer reviewers. The "how" is the more complicated question. How does an investigator fabricate data so consistently, in study after study, that the data support his or her hypotheses each and every time? How much time, effort, and energy does it take to systematically create a fictional design and to fabricate and falsify data that provide coauthors, editors, and reviewers with plausible scenarios in study after study? One wonders: would it not be simpler (not to mention more honest) simply to collect the data?

 

When one attempts to answer those questions, it is clear that anyone who fabricates data in 21 studies over a period of 10 years has gone beyond any mistake, error, or unintentional misinterpretation. One reason this activity goes undetected is that the behavior falls so far outside the realm not only of good science and research integrity but of anyone's value system. There is never any good, rational answer that clinicians and other scientists can use to "make sense of it all." In fact, SM is deliberate and intentional behavior meant to deceive others and to present a picture of a phenomenon that is not real. And it is on that fabricated reality that clinicians based their choices of interventions for patients in pain and set expectations for the effectiveness of those interventions. In these cases, clinicians who used the "evidence" to guide their practice rightfully feel angry and betrayed by another professional, maybe even by the larger professional community (editors and peer reviewers) whom they counted on to monitor the science. They ask, "How could everyone have been fooled and not known this?" This reaction is appropriate.

 

It is helpful to remember that, based on most of the research about SM and research integrity published in the last 10 years, the risk of an occurrence of SM is relatively small.1,3 Most of the time, the system of coauthorship, peer review, and the expectation of ethical conduct on the part of most scientists does work. How, you might ask? In my 15 years as an editor, there have been a few occasions when a reviewer alerted me to possible fabrication or falsification because the reviewer could not see how the data in a specific study could account for findings so different from those in the rest of the literature. Closer examination and questioning usually revealed authors who could not produce arguments strong or persuasive enough for why their study's data should be judged more plausible than previous rigorous studies. Those studies, at that point, do not get published. One could also ask about the coauthors on all these studies: did they not see the data? Although not a surprise in retrospect, it turns out that Dr Reuben was the first author on every published (and now retracted) paper and, as such, had control over writing the first draft, overseeing the analyses, and writing up the results. Although coauthors are expected to contribute to drafts of the paper, critique the manuscript, and be able to defend the findings,4 it is likely that they never saw the raw data. In fact, the investigation conducted by Baystate Medical Center found all coauthors to be free of blame.

 

How then are clinicians to know whether data are "real," accurate, and true? There is no fail-safe way, but there are some strategies to use. Do not base a protocol or intervention on a single source of data (a specific research team, study, or journal). Always look for convergence of findings from different teams of investigators, different sources such as professional associations, and different journals. Do not underestimate your own ability to look at data and compare the findings against your own experiences (and you likely have many) with patients similar to those in the research. Do the findings resonate with what you and your colleagues have seen? If not, what could account for such a discrepancy? Critique the evidence in an area with your peers and with colleagues from other disciplines who may have been to recent professional meetings and heard data that either are or are not congruent with the published studies. Use your own professional networks across the country and across specialties to raise questions you may have. Knowledge is cumulative and undergoes continual evaluation and updating. As new knowledge comes to the forefront, some medical "facts" and once-substantiated treatments (eg, estrogen supplementation during menopause) will certainly change. However, if you base your practice on the best evidence available at the time and continually evaluate any protocol for its effectiveness with your patients, you have done the ethically and morally right thing.

 

References

 

1. Pryor E, Habermann B, Broome M. Scientific misconduct from the perspective of research coordinators: a national survey. J Med Ethics. 2007;33:365-369. [Context Link]

 

2. United States Department of Health and Human Services. Public health services policies on research misconduct: final rule. 42 CFR parts 50 and 93. Fed Regist. 2005;70:28370-28400. [Context Link]

 

3. Gaddis B, Helton-Fauth W, Scott G, et al. Development of two measures of climate for scientific organization. Account Res. 2003;10:253-288. [Context Link]

 

4. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals. http://www.icmje.com. Accessed May 15, 2009. [Context Link]