Scientific misconduct

Scientific misconduct is the violation of the standard codes of scholarly conduct and ethical behavior in professional scientific research. A Lancet review on the handling of scientific dishonesty in the Nordic countries provides the following sample definitions:[1]

  • Danish Definition: "Intention(al) or gross negligence leading to fabrication of the scientific message or a false credit or emphasis given to a scientist"
  • Swedish Definition: "Intention(al) distortion of the research process by fabrication of data, text, hypothesis, or methods from another researcher's manuscript form or publication; or distortion of the research process in other ways."

Scientific misconduct may take place simply for reasons of reputation: academic scientists are under pressure to produce publications in peer-reviewed journals. Alternatively, there may be commercial or political motivations, where the financial or political success of a project depends on publishing evidence of efficacy. The consequences of scientific misconduct can be severe at a personal level, both for perpetrators and for any individual who exposes it. In addition, there are public health implications attached to the promotion of medical or other interventions based on dubious research findings.

Motivation to commit scientific misconduct

According to David Goodstein of Caltech, there are three main motivators for scientists to commit misconduct, which are briefly summarised here.

Career pressure
Science is still a very strongly career-driven discipline. Scientists depend on a good reputation to receive ongoing support and funding; and a good reputation relies largely on the publication of high-profile scientific papers. Hence, there is a strong imperative to "publish or perish". Clearly, this may motivate desperate (or fame-hungry) scientists to fabricate results.
Pride
Even on the rare occasions when scientists do falsify data, they almost never do so with the active intent to introduce false information into the body of scientific knowledge. Rather, they intend to introduce a fact that they believe is true, without going to the trouble and difficulty of actually performing the experiments required.
The ability to get away with it
In many scientific fields, results are often difficult to reproduce accurately, being obscured by noise, artifacts and other extraneous data. That means that even if a scientist does falsify data, they can expect to get away with it, or at least to claim innocence if their results conflict with others in the same field. There are no "scientific police" trained to fight scientific crimes; all investigations are made by experts in science who are amateurs in dealing with criminals, so it is relatively easy to cheat.

Forms of scientific misconduct

Forms of scientific misconduct include:

  • fabrication and falsification – the publication of deliberately false or misleading research, often subdivided into:
    • fabrication – the actual making up of research data and (the intent of) publishing them
    • falsification – manipulation of research data and processes, or the omission of critical data or results[2]

Another form of fabrication occurs when references are included to give arguments the appearance of widespread acceptance, but are actually fake or do not support the argument[3].

  • plagiarism – the act of taking credit (or attempting to take credit) for the work of another. A subset is citation plagiarism – willful or negligent failure to appropriately credit other or prior discoverers, so as to give an improper impression of priority. This is also known as "citation amnesia", the "disregard syndrome" and "bibliographic negligence"[1]. Arguably, this is the most common type of scientific misconduct. Sometimes it is difficult to determine whether authors intentionally ignored a highly relevant citation or simply lacked knowledge of the prior work. Discovery credit can also be inadvertently reassigned from the original discoverer to a better-known researcher, a special case of the Matthew effect.[4]
  • self-plagiarism – multiple publication of the same content with different titles and/or in different journals is sometimes also considered misconduct; scientific journals explicitly ask authors not to do this.
  • the violation of ethical standards regarding human and animal experiments – such as the standard that a human subject of the experiment must give informed consent to the experiment.
  • ghostwriting – the phenomenon where someone other than the named author(s) makes a major contribution. Typically, this is done to mask contributions from drug companies. It incorporates plagiarism and has an additional element of financial fraud.

In addition, some academics consider suppression – the failure to publish significant findings because the results are adverse to the interests of the researcher or his or her sponsor(s) – to be a form of misconduct as well; this is discussed below.

In some cases, scientific misconduct may also constitute a violation of the law. Being accused of the activities described in this article is a serious matter for a practicing scientist, with severe consequences should it be determined that a researcher intentionally or carelessly engaged in misconduct.

Three percent of the 3,475 research institutions that report to the US Department of Health and Human Services' Office of Research Integrity indicate some form of scientific misconduct (source: Wired Magazine, March 2004).

The validity of the methods and results of scientific papers is often scrutinized in journal clubs. In this venue, members can decide amongst themselves, with the help of peers, whether a scientific paper's ethical standards have been met.

Responsibility of authors and of coauthors

Authors and coauthors of scientific publications have a variety of responsibilities, and contravention of the rules of scientific authorship may lead to a charge of scientific misconduct. All authors, including coauthors, are expected to have made reasonable attempts to check findings submitted to academic journals for publication. Simultaneous submission of scientific findings to more than one journal, or duplicate publication of findings, is usually regarded as misconduct under what is known as the Ingelfinger rule, named after Franz Ingelfinger, the editor of the New England Journal of Medicine from 1967 to 1977 [2].

Guest authorship (where authorship is claimed in the absence of involvement, also known as gift authorship) and ghost authorship (where the real author is not listed as an author) are commonly regarded as forms of research misconduct. In some cases coauthors of faked research have been accused of inappropriate behavior or research misconduct for failing to verify reports authored by others or by a commercial sponsor. Examples include the case of Gerald Schatten, who co-authored with Hwang Woo-Suk; the case of Professor Geoffrey Chamberlain, who co-authored papers with Pearce (see lessons from the Pearce affair); and the coauthors of Jan Hendrik Schön at Bell Laboratories. More recent cases include that of Charles Nemeroff, then the editor-in-chief of Neuropsychopharmacology, and a well documented case involving the drug Actonel (the Sheffield Actonel affair).

Authors are expected to keep all study data for later examination even after publication. The failure to keep data may be regarded as misconduct. Some scientific journals require that authors provide information to allow readers to determine whether the authors might have commercial or non-commercial conflicts of interest. Authors are also commonly required to provide information about ethical aspects of research, particularly where research involves human or animal participants or the use of biological material. Provision of incorrect information to journals may be regarded as misconduct. Financial pressures on universities have encouraged this type of misconduct. The majority of recent cases of alleged misconduct involving undisclosed conflicts of interest, or failure of the authors to have seen the scientific data, involve collaborative research between scientists and biotechnology companies (Nemeroff, Blumsohn).

Photo manipulation

In 2006, the Journal of Cell Biology gained publicity [3] for instituting tests to detect photo manipulation in papers that were being considered for publication. This was in response to the increased use by scientists of image-editing programs such as Photoshop, which facilitate photo manipulation. Since then, more publishers, including the Nature Publishing Group, have instituted similar tests and require authors to minimize and specify the extent of photo manipulation when a manuscript is submitted for publication.

Although the type of manipulation that is allowed can depend greatly on the type of experiment that is presented and also differ from one journal to another, in general the following manipulations are not allowed:

  • splicing together different images to represent a single experiment
  • changing brightness and contrast of only a part of the image
  • any change that conceals information, even when it is considered to be non-specific, which includes:
    • changing brightness and contrast to leave only the most intense signal
    • using clone tools to hide information
  • showing only a very small part of the photograph so that additional information is not visible

More generally, most journals now urge authors to use photo manipulation with restraint and great care.
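To illustrate the kind of automated screening involved, a minimal sketch follows. It is not any particular journal's actual procedure: it simply flags exact duplicate pixel blocks in a grayscale image, a naive signal that a clone tool may have been used. The 16-pixel block size, the duplicate criterion and the NumPy dependency are assumptions made for the example.

  # Minimal sketch, not an actual journal screening tool: flag exact duplicate
  # 16x16 blocks in a grayscale image as a naive sign of clone-tool use.
  import numpy as np

  def find_duplicate_blocks(image: np.ndarray, block: int = 16):
      """Return pairs of top-left coordinates whose pixel blocks are identical."""
      seen = {}          # block bytes -> first coordinate where it appeared
      duplicates = []    # (first_coordinate, second_coordinate) pairs
      height, width = image.shape
      for y in range(0, height - block + 1, block):
          for x in range(0, width - block + 1, block):
              patch = image[y:y + block, x:x + block]
              if patch.std() == 0:
                  continue  # skip flat background, which duplicates trivially
              key = patch.tobytes()
              if key in seen:
                  duplicates.append((seen[key], (y, x)))
              else:
                  seen[key] = (y, x)
      return duplicates

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
      img[64:80, 64:80] = img[0:16, 0:16]  # simulate a cloned region
      print(find_duplicate_blocks(img))    # reports ((0, 0), (64, 64))

Real forensic screening is considerably more sophisticated (it must cope with rescaling, compression and rotation), but the principle is the same: look for regularities that unmanipulated images should not contain.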

Suppression/non-publication of data

A related issue concerns the deliberate suppression, failure to publish, or selective release of the findings of scientific studies. Such cases may not be strictly definable as scientific misconduct as the deliberate falsification of results is not present. However, in such cases the intent may nevertheless be to deliberately deceive. Studies may be suppressed or remain unpublished because the findings are perceived to undermine the commercial, political or other interests of the sponsoring agent or because they fail to support the ideological goals of the researcher. Examples include the failure to publish studies if they demonstrate the harm of a new drug, or truthfully publishing the benefits of a treatment while omitting harmful side-effects.

This is distinguishable from other concepts such as bad science, junk science or pseudoscience, where the criticism centres on the methodology or underlying assumptions. It may be possible in some cases to use statistical methods to show that the datasets offered in relation to a given field are incomplete. However, this may simply reflect the existence of real-world restrictions on researchers without justifying more sinister conclusions.
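As a concrete sketch of one such statistical method, the example below applies Egger's regression test for funnel-plot asymmetry, a conventional signal that small or unfavourable studies may be missing from the published record. The effect sizes and standard errors are invented for illustration, and SciPy (version 1.7 or later) is assumed to be available.

  # Sketch of Egger's regression test for funnel-plot asymmetry, using
  # made-up effect sizes and standard errors (not data from any real study).
  from scipy import stats

  effects = [0.42, 0.35, 0.51, 0.60, 0.48, 0.72, 0.66]     # hypothetical study effects
  std_errors = [0.05, 0.08, 0.10, 0.15, 0.12, 0.20, 0.18]  # hypothetical standard errors

  precision = [1.0 / se for se in std_errors]
  standardized = [e / se for e, se in zip(effects, std_errors)]

  # Regress the standardized effect on precision; an intercept far from zero
  # suggests asymmetry in the funnel plot of effect size against precision.
  res = stats.linregress(precision, standardized)   # intercept_stderr needs SciPy >= 1.7
  t_stat = res.intercept / res.intercept_stderr
  p_value = 2 * stats.t.sf(abs(t_stat), df=len(effects) - 2)
  print(f"Egger intercept = {res.intercept:.2f}, p = {p_value:.3f}")

A small p-value here only flags asymmetry; as noted above, asymmetry can also arise from genuine heterogeneity or practical constraints on research rather than from suppression.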

Some cases go beyond the failure to publish complete reports of all findings, with researchers knowingly making false claims based on falsified data. This falls clearly under the definition of scientific misconduct, even if the result was achieved by suppressing data. In the case of Raphael B. Stricker, M.D. [4], for instance, the U.S. Office of Research Integrity found the removal of samples from a data set in order to reach a desired conclusion to be grounds for debarment from funding.

Consequences for science

The consequences of scientific fraud vary based on the severity of the fraud, the level of notice it receives, and how long it goes undetected. For cases of fabricated evidence, the consequences can be wide-ranging, with others working to confirm (or refute) the false finding, or with research agendas being distorted to address the fraudulent evidence. The Piltdown Man fraud is a case in point: the significance of the bona fide fossils being found was muted for decades because they disagreed with Piltdown Man and the preconceived notions that those faked fossils supported. In addition, the prominent paleontologist Arthur Smith Woodward spent time at Piltdown each year until he died, trying to find more Piltdown Man remains. The misdirection of resources kept others from taking the real fossils more seriously and delayed the reaching of a correct understanding of human evolution. (The Taung Child, which should have been the death knell for the view that the human brain evolved first, was instead treated very critically because of its disagreement with the Piltdown Man evidence.)

Consequences for those who expose misconduct

The potentially severe consequences for individuals who are found to have engaged in misconduct also reflect on the institutions that host or employ them, and on the participants in any peer review process that has allowed the publication of questionable research. This means that a range of actors in any case may have a motivation to suppress any evidence or suggestion of misconduct, and that persons who expose such cases can find themselves open to retaliation by a number of different means. These negative consequences for exposers of misconduct have driven the development of whistle-blower charters, designed to protect those who raise concerns. A whistleblower is almost always alone in the fight: his or her career becomes completely dependent on the decision about the alleged misconduct. If the accusations prove false, the whistleblower's career is completely destroyed, but even if the allegations are upheld, the whistleblower's career can remain in question, since a reputation as a "troublemaker" will prevent many employers from hiring him or her. There is no international body to which a whistleblower could take such concerns, and if a university fails to investigate suspected fraud, or conducts a sham investigation to save its reputation, the whistleblower has no right of appeal. High-profile journals like Nature and Science usually forward all allegations to the university where the authors are employed, or may do nothing at all. An organized web community of scientific whistleblowers also does not exist.

Exposure of falsified data

With the advancement of the internet, there are now several tools available to aid in the detection of plagiarism and multiple publication within the biomedical literature. One tool, developed in 2006 by researchers in Harold Garner's laboratory at the University of Texas Southwestern Medical Center at Dallas, is Déjà vu, an open-access database containing several thousand instances of duplicate publication.
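The general idea behind such tools can be shown with a short sketch. This is not the actual eTBLAST/Déjà vu method; the abstracts and the 0.85 threshold below are invented for the example. Pairs of abstracts are scored for textual similarity, and highly similar pairs are flagged for human review.

  # Sketch of duplicate-publication screening: cosine similarity over word
  # counts, with a threshold chosen arbitrarily for this example.
  import math
  from collections import Counter

  def cosine_similarity(text_a: str, text_b: str) -> float:
      a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
      dot = sum(a[w] * b[w] for w in set(a) & set(b))
      norm_a = math.sqrt(sum(v * v for v in a.values()))
      norm_b = math.sqrt(sum(v * v for v in b.values()))
      return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

  # Hypothetical abstracts, two of them near-identical.
  abstracts = {
      "paper_1": "we report a randomized trial of drug x in 200 patients over 12 months",
      "paper_2": "we report a randomized trial of drug x in 200 patients over twelve months",
      "paper_3": "a proteomic survey of cell cycle regulators in budding yeast",
  }

  THRESHOLD = 0.85  # arbitrary cut-off for this example
  names = sorted(abstracts)
  for i, p in enumerate(names):
      for q in names[i + 1:]:
          score = cosine_similarity(abstracts[p], abstracts[q])
          if score >= THRESHOLD:
              print(f"{p} vs {q}: similarity {score:.2f} - flag for manual review")

Tools used in practice operate over millions of MEDLINE records and use more robust text-similarity measures, but flagged pairs are still reviewed by people before any conclusion about misconduct is drawn.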

Cases of alleged scientific misconduct and related incidents

Below is an incomplete list of cases of alleged scientific misconduct. Some of the cases are relatively minor, such as Robert Millikan's data selection in his famous oil-drop experiment, which, while potentially suspicious, does not seem to have been used in a misleading way or to change the fundamental correctness of the experimental results. In other cases, the accusations concern outright fabrication or fraud and are considered very serious. In some cases the accusations relate to the ethical treatment of research subjects. In some cases, whether they are actually instances of misconduct is still in debate.

Notes

  1. "Review: Handling of scientific dishonesty in the Nordic countries" (PDF). The Lancet. 354: 11–18. 1999. Retrieved 2006-09-02. Unknown parameter |month= ignored (help)
  2. Emmeche, slide 3
  3. Emmeche, slide 5
  4. Emmeche, slide 3, who refers to the phenonemon as Dulbecco's law
  5. "For Science's Gatekeepers, a Credibility Gap". New York Times. May 2, 2006. Retrieved 2008-03-26. Recent disclosures of fraudulent or flawed studies in medical and scientific journals have called into question as never before the merits of their peer-review system. The system is based on journals inviting independent experts to critique submitted manuscripts. The stated aim is to weed out sloppy and bad research, ensuring the integrity of the what it has published. Check date values in: |date= (help)

References

  • William Broad & Nicholas Wade, Betrayers of the Truth. Oxford University Press, 1982
  • Brock K. Kilbourne and Maria T. Kilbourne, The Dark Side of Science, Proc. of the 63rd Annual Meeting of the Pacific Division, AAAS, April 30, 1983.
  • Claus Emmeche. "An old and a recent example of scientific fraud" (PowerPoint). Retrieved 2007-05-18.
  • Mounir Errami, Justin M. Hicks, Wayne Fisher, David Trusty, Jonathan D. Wren, Tara C. Long, and Harold R. Garner. Déjà vu – A Study of Duplicate Citations in Medline. Bioinformatics, Dec 2007.
