Kashrut for the scientific literature keeps science going
Last week, I woke up and read a New York Times article by James Glanz and Agustin Armendariz on a famous and celebrated cancer scientist, Carlo Croce of Ohio State, who has been repeatedly implicated in scientific fraud. This moved me to shelve my plans for a post on Neuro-Generosity and proceed immediately to GO with a post on scientific misconduct. I aim to make four points in this post:
- The scientific record must be protected. We have to practice metaphorical kashrut (the set of Jewish laws that separate kosher foods – fit for consumption – from dirty or unfit, not-to-be-eaten foods) on scientific communications.
- Playing detective to assign blame is a strategy that flies in the face of neurobiological and psychological science.
- Simple regulatory changes can mitigate the conflict of interest inherent in research institutions investigating their own.
- All allegations, regardless of their source, should be investigated.
My introduction to ethics
In the early 2000s, I was as ignorant and unconcerned about scientific misconduct as the next scientist. It really wasn’t on my radar screen. I remember the John Darsee case, in which a Harvard cardiologist fabricated data (as I recall, he made up traces!). This came to light early during my graduate school training. I don’t think I thought about RCR, an acronym for Responsible Conduct of Research, again for years. Back then, NIH had not yet mandated, as it did in 1990, that all institutions that receive NIH support (= nearly all universities) train their students in RCR.
Fast forward to the mid-2000s, when I was an Associate Editor of The Journal of Neurophysiology and also on the Publications Committee of JNp’s publisher, The American Physiological Society (APS). APS published, and continues to publish, more than a dozen scientific journals. Pubs’ (as we were known) was responsible for choosing editors-in-chief for all APS journals, setting submission policies, okaying associate editor appointments, and, increasingly, adjudicating allegations of scientific misconduct.
One case really sticks in my mind. We (which really means the Pubs’ chair) asked for the raw data from an experiment. Those data were sent to a statistician who found that among hundreds of measurements (of a continuous variable), the last digit of the values (hundredth or thousandth place as I recall) was not randomly distributed. I loved the cleverness of this analysis. Despite this case, the vast majority of scientific misconduct cases that come to light involve manipulated images.
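The last-digit analysis that so impressed me can be sketched as a standard chi-square test for uniformity: in genuine continuous measurements, the trailing recorded digit should be roughly equally likely to be any of 0–9, so a strongly non-uniform distribution is a red flag. This is my own reconstruction of the idea, not the statistician's actual method, and the use of Python with scipy is an assumption:

```python
from collections import Counter
from scipy.stats import chisquare

def last_digit_uniformity(values, decimals=3):
    """Chi-square test: are the last recorded digits uniformly distributed?

    Trailing digits of genuine continuous measurements should be roughly
    uniform across 0-9; fabricated numbers often are not.
    """
    # Format each value to a fixed number of decimals and take the final digit
    digits = [int(f"{v:.{decimals}f}"[-1]) for v in values]
    counts = Counter(digits)
    observed = [counts.get(d, 0) for d in range(10)]
    # Null hypothesis: all ten digits equally likely
    stat, p = chisquare(observed)
    return stat, p
```

A very small p-value would suggest, as in the case above, that the trailing digits were not produced by real measurement noise. The test says nothing about who altered the numbers or why; it only flags that the record deserves a closer look.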
Serving on APS’s Publications Committee served as my introduction into RCR. By around 2005-6, I was regularly participating in UChicago’s NIH-mandated RCR course for graduate students. I worked each year with a small group of graduate students on understanding the evolving view toward manipulated images, which can be summed up as, “Don’t alter anything. In this age of digital imaging, if you don’t like an image for any reason, take another one! No manipulations. Just say No to Photoshop.”
In 2008, I was asked to serve on the Society for Neuroscience’s (SfN) Responsible Conduct Working Group. Under the leadership of David van Essen, we were tasked with re-writing SfN’s guidelines and ethics policy. The committee was packed with truly thoughtful and enormously experienced individuals, and serving on it was an eye-opening intellectual experience. We talked and debated openly, considering multiple possible solutions to each important issue (e.g. What constitutes authorship?). Thoughts, ideas, and caveats flew through the room when we met in person and then through email as we continued our discussions remotely. We came up with a Policy and Guidelines that we all felt comfortable with and were even proud of, for their scope and thoughtfulness.
With the Responsible Conduct Committee’s work done, I went back to minding my own business, watching rats help other rats out of a jam, teaching medical students, and happily not writing a book. Then, in 2012, SfN approached me to serve as the inaugural chair of a new Ethics Committee. The volume of alleged scientific misconduct cases was increasing at SfN, just as it had been at APS some years before. The Editor-in-Chief at SfN’s single (at that time) publication, The Journal of Neuroscience, was finding that an inordinate proportion of his time was taken up with investigating possible misconduct. He wanted relief. I took a metaphorical deep breath and jumped in.
The scientific literature as client
For the next few years, I worked with a stellar group of neuroscientists to set up a system to investigate allegations of misconduct. We tried to give every party involved – the accused, the accuser, and the scientific literature – due process.
You may think it odd that I have personified scientific literature but in fact, we always considered the scientific literature to be our most important “client.” That attitude drove the Ethics Committee’s central task: deciding when to recommend that corrections, retractions, or editorial statements of concern be published.
Fraudulent data in the published record may appear to be a victimless crime. However, that could not be further from the truth. The victims are science and those who pursue scientific truths. Scientists build off each other’s results. That is what we do. Thus, misleading and erroneous reports can send scientists on wild goose chases that waste time and energy and prevent more fruitful lines of investigation from taking place.
Do we ever know who’s to blame?
A neurobiological perspective tells us that people make up stories as to why they do what they do, but that these stories are just that – stories – with typically very little accuracy. Take a simple act of apparent kindness such as holding a door open for someone on crutches. Maybe you are being nice. Alternatively, you may want to appear nice to others or simply to yourself. With more complicated actions – calling your father, not calling your father, choosing a school to attend, eating a dessert or having a second drink – the veracity of self-reported justifications grows ever more difficult to test.
A favorite example of how inaccurate our self-reports are comes from an experiment by Norman Maier in which participants used pendulum action to solve a task. They only arrived at the pendulum solution after seeing Maier casually swing a cord. Afterwards, they denied gaining a clue from Maier’s actions, with one psychology professor even musing about his Eureka vision of swinging monkeys.
A second favorite example of how clueless we are about the true motivations for our own actions comes from the study of an Israeli Parole Board which I wrote about previously. Briefly, the likelihood of parole being granted varied with the case number, progressively declining from high points first thing in the morning and after snacks. These data emerged from more than a thousand cases heard over 50 separate days. No other relevant variable, such as number of previous incarcerations, varied across case number. Thus one is left to the conclusion that truly immaterial factors play a major role in making a legal determination where they have no relevance.
Dr Croce called the repeated and plentiful problems in his articles the result of “honest error.” In my experience, this is the most common justification for a duplication or manipulation of data. The more the data are manipulated – flipped, reversed, shrunk or stretched, and so on – the less believable the excuse of honest error becomes. Moreover, the greater the number of errors within an article or between articles, the less likely it is that the problem is a one-off honest error.
As I wrote in 2014, “To paraphrase The Shadow, ‘who knows what intentions lurk in the hearts [brains] of neuroscientists?’” Short of hard evidence, we really don’t know, in any factual sense, who did what. And even with a confession, we are unlikely to know the true reasons why people acted or failed to act in the way they did. My take-home from this is that instead of punishment, emphasis should be placed on helping people understand what structural factors allowed RCR violations to occur and on identifying systemic changes that can be put into place to increase future research rigor and integrity.
Universities have an inherent self-interest when investigating their own faculty
While I was chair of the SfN Ethics Committee, we endeavored to identify cases with particularly egregious problems, numerous problems, or both. If we felt unsatisfied with the corresponding author’s explanation, we then detailed our concerns in a formal letter to the institution where the work was performed. It was then the job of the institution to investigate the allegations. This makes sense, of course, since we had no proximity to, or authority over, the relevant people and material record. For research supported by NIH, the Office of Research Integrity, or ORI, mandates that all allegations be assessed. If the record suggests that data fabrication or falsification may have occurred, then a formal inquiry must be launched.
The committee was fortunate to work with several very impressive Research Integrity Officers, or RIOs. These individuals are paid by the academic institution for which they work. Yet they hew to clear professional guidelines that render their work impartial and ombudsperson-like in nature.
Unfortunately, not all institutions have RIOs, and not all institutions appear to act in a disinterested manner. The personal and professional relationships that naturally occur among the individuals at an institution should lead interested parties to recuse themselves from an investigation. This does not always happen. Dr John Dahlberg of ORI [affiliation corrected on 3/17/17] is quoted in the NYT article as saying, “My sense was, Carlo Croce’s too big to make findings of misconduct on. It just wasn’t going to happen.” Dr Dahlberg is bringing up a critical point. Having individuals who are too big to be impartially investigated is a situation that does not serve scientific progress well.
As reported by the New York Times reporters Glanz and Armendariz, there is a suggestion that Ohio State passed on clear misconduct charges for Dr Croce out of its own self-interest in the grant monies brought in by Dr Croce. It is beyond an outsider to know the veracity of this possibility. Yet there are measures that would surely improve the quality of institutional investigations and address the inherent institutional conflict of interest. For example, requiring an outside review of every investigation would be a step in the right direction. Such a review would be for process rather than content. Ohio State is contracting for an external review in the case of Dr Croce but this is a close-the-barn-door-after-the-horse-has-bolted situation; too little and too late. Unfortunately, any initiative that involves greater governmental regulation is unlikely to happen today (in the dark days of March 2017).
All allegations should be investigated
Some of the allegations regarding Dr Croce were made by Clare Francis, an anonymous complainant whose identity has been debated in several venues. Clare Francis sent many allegations to the SfN Ethics Committee during my time as chair. Sometimes I would receive more than a dozen emails from Clare Francis within a 12-24 hour period. Many of the complaints were phrased in highly emotional and passionate language similar to that quoted by Glanz and Armendariz: “You misunderstand that Carlo Croce is a great scientist…You simply misunderstand science. Please stop talking about reputations and look at the reality!”
Dismissing allegations because of their source, volume, or the language in which they are couched is dangerous. As Adam Marcus and Ivan Oransky of Retraction Watch wrote in Lab Times, “facts are stubborn things and we haven’t seen any evidence yet that people who identify themselves have any more of a monopoly on them than those who want to remain anonymous.” I agree completely. With respect to Clare Francis in particular, I would say that their/her/his hit rate (the number of allegations warranting more than 15 minutes of investigation relative to the total number of allegations made) was sizable enough to take very seriously. Some Clare Francis allegations became cases that warranted being sent on to the relevant academic institution. For Ohio State to warn off Clare Francis, saying that the allegations received were frivolous, is irresponsible; this attitude does not put the scientific literature first.
In sum, all allegations, regardless of the source, should be a cause for worry among scientists. All allegations merit our concern and investigation by relevant bodies.
I want to close by making three points.
First, the end does not justify the means, ever. A common defense holds that even if an error occurred, it makes no difference because the findings were later confirmed, or at least not refuted. In other words, if we got it right, what difference does it make that we cheated? Well, the answer is that you didn’t get it right if you cheated. The bias against publishing replications (whether successes or failures) means that even if a result is wrong, it is difficult to get that information published. Furthermore, the sloppiness that yields numerous errors, even if we grant that those errors are accidental rather than deliberate, is not compatible with serious and rigorous science.
Second, my experience with journals mirrors that described by Glanz and Armendariz. Even when I alerted another journal to problems that were confirmed by an institution, editors-in-chief often took no action. I think there are two reasons for this. First, there is somewhat of a conflict of interest with respect to the journal’s reputation. A journal that issues many retractions could be perceived as an untrustworthy journal with an ineffective peer-review system. The second reason is that having an active Ethics Committee is an expensive proposition. SfN is one of the financially healthier scientific societies. Many scientific societies simply do not have the resources to staff and support a serious Ethics process.
Finally, let me end with a possible way forward inspired by talks and conversations that I had at the 2016 Keeping the Pool Clean conference held at Colorado State University. [The slides from all the talks at this conference are freely available here.] As the sociologist and research scientist Brian Martinson said, “Let’s stop bobbing for bad apples and start looking upstream.”
To understand Martinson’s comment, consider the criminology analogy presented by Kenneth Pimple. Crime happens because of 1) a likely offender, 2) a suitable target, and 3) a favorable setting. Targeting likely offenders is bobbing for apples. Sometimes you get ’em; more often you don’t. The likelihood of coming up apple-less, so to speak, is increased by the relatively low number of truly bad apples. The vast majority of scientists don’t wake up and say to themselves, “let me go make some stuff up and screw the scientific record today.” As DuBois and colleagues reported, most researchers sent to Ethics Rehab made mistakes out of inattention or ignorance. This means that scientific misconduct can happen to any of us who are busy and/or not cognizant of all the rules. That means all scientists.
What is left for us to do is to change the conditions: look upstream to the sociological features that favor scientific shortcuts and sloppiness. Rework the perverse incentives to report falsely positive findings and the ways in which bad science is selected for. Restructure funding for basic research that could impact human health in ways as yet unknown. Reform scientific publishing. Alter the labor framework of science, but make sure that we aren’t blinded by test scores when we do so. There is no clear path forward, but it is patently obvious that changes are needed, and for that we need discussion and a free exchange of ideas.