Chapter One

Setting the Stage

Fraud in science is, in essence, a violation of the scientific method. It is feared and condemned by all scientists. Let's look at a few real cases that have come up in the past.

Piltdown Man, a human cranium and an ape's jaw found in a gravel pit in England around 1910, is perhaps the most famous case. Initially hailed as the authentic remains of one of our more distant ancestors, the composite skeleton was exposed as a fraud by modern dating methods in 1954. To this day no one knows who perpetrated the deception or why. One popular theory is that the perpetrator was only trying to help along what was thought to be the truth. Prehistoric hominid remains had been discovered in France and Germany, and there were even rumors of findings in Africa. Surely humanity could not have originated in those uncivilized places. Better to have human life begin in good old England!

As it turned out, the artifact was rejected by the body of scientific knowledge long before modern dating methods showed it to be a hoax. Growing evidence that our ancient forebears looked nothing like Piltdown Man had already made the discovery an embarrassment at the fringes of anthropology. Modern dating methods merely confirmed that both the cranium and the jaw were not much older than their date of discovery.

Sir Cyril Burt was a famous British psychologist who studied the heritability of intelligence by means of identical twins who had been separated at birth. Unfortunately, there seem not to have been enough such convenient subjects to study, so he apparently invented thirty-three additional pairs, and because that gave him more work than he could handle, he also invented two assistants to take care of them. His duplicity was uncovered in 1974, some three years after his death.

That same year, William Summerlin, a researcher at the Sloan-Kettering Institute for Cancer Research in New York City, conducted a series of experiments aimed at inducing healthy black skin grafts to grow on a white mouse. Evidently, nature wasn't sufficiently cooperative, for he was caught red-handed trying to help her out with a black felt-tipped pen.

John Darsee was a prodigious young researcher at Harvard Medical School, turning out a research paper about once every eight days. That lasted a couple of years until 1981, when he was caught fabricating data out of whole cloth.

Stephen Breuning was a psychologist at the University of Pittsburgh studying the effects of drugs such as Ritalin on patients. In 1987 it was determined that he had fabricated data. His case was particularly serious, because protocols for treating patients had been based on his spurious results.

Science is self-correcting, in the sense that a falsehood injected into the body of scientific knowledge will eventually be discovered and rejected. But that fact does not protect the scientific enterprise against fraud, because injecting falsehoods into the body of science is rarely, if ever, the purpose of those who perpetrate fraud. They almost always believe that they are injecting a truth into the scientific record, as in the cases above, but without going through all the trouble that the real scientific method demands.

That is why science needs active measures to protect itself. Fraud, or misconduct, means dishonest professional behavior, characterized by the intent to deceive: the very antithesis of ethical behavior in science.
When you read a scientific paper, you are free to agree or disagree with its conclusions, but you must always be confident that you can trust its account of the procedures that were used and the results those procedures produced.

For years it was thought that scientific fraud was almost entirely restricted to biomedicine and closely related sciences, and although there are exceptions, most instances do surface in these fields. There are undoubtedly many reasons for this curious state of affairs. For example, many misconduct cases involve medical doctors rather than scientists with Ph.D.s (who are trained to do research). To a doctor, the welfare of his or her patient may be more important than scientific truth. In a case that came up in the 1980s, for example, a physician in Montreal was found to have falsified the records of participants in a large-scale breast-cancer study. Asked why he did it, he said it was in order to get better medical care for his patients. However, the greater number of cases arises from more self-interested motives. Although the perpetrators usually think that they're doing the right thing, they also know that they're committing fraud.

In recent cases of scientific fraud, three motives, or risk factors, have nearly always been present. The perpetrators

1. were under career pressure;
2. knew, or thought they knew, what the answer to the problem they were considering would turn out to be if they went to all the trouble of doing the work properly; and
3. were working in a field where individual experiments are not expected to be precisely reproducible.

It is by no means true that fraud always arises when these three factors are present. In fact, just the opposite is true: these factors are often present, and fraud is quite rare. But they do seem to be present whenever fraud occurs. Let us consider them one at a time.

Career pressure. This is clearly a motivating factor, but it does not offer us any special insight into why a small number of scientists stray professionally when most do not. All scientists, at all levels, from fame to obscurity, are pretty much always under career pressure. On the other hand, simple monetary gain is seldom if ever a factor in scientific fraud.

Knowing the answer. Scientific fraud is almost always a transgression against the methods of science, not purposely against the body of knowledge. Perpetrators think they know how the experiment would come out if it were done properly, and they decide that it is not necessary to go to all the trouble of doing it properly.

Reproducibility. In reality, experiments are seldom repeated by others in science. Nevertheless, the belief that someone else could repeat an experiment and get, or fail to get, the same result can be a powerful deterrent to cheating. Here a pertinent distinction arises between biology and the other sciences, in that biological variability may provide apparent cover for a biologist who is tempted to cheat. Sufficient variability exists among organisms that the same procedure, performed on two test subjects as nearly identical as possible, is not expected to give exactly the same result. If two virtually identical rats are treated with the same carcinogen, they are not expected to develop the same tumor in the same place at the same time. This helps to explain why scientific fraud is found mainly in the biomedical area. (Two cases in physics offer an interesting test of this hypothesis.
They are addressed in more detail later in this volume.)

No human activity can stand up to the glare of relentless, absolute honesty. We build little hypocrisies and misrepresentations into what we do to make our lives a little easier, and science, a very human enterprise, is no exception. For example, every scientific paper is written as if the particular investigation it describes were a triumphant progression from one truth to the next. All scientists who perform research, however, know that every scientific experiment is chaotic, like war. You never know what's going on; you cannot usually understand what the data mean. But in the end you figure out what it was all about, and then, with hindsight, you write it up as one clear and certain step after another. This is a kind of hypocrisy, but one that is deeply embedded in the way we do science. We are so accustomed to it that we don't even regard it as a misrepresentation. Courses are not offered in the rules of misrepresentation in scientific papers, but the apprenticeship one goes through to become a scientist does involve learning them.

The same apprenticeship, however, also inculcates a deep respect for the inviolability of scientific data and instructs the neophyte scientist in the ironclad distinction between harmless fudging and real fraud. For example, it may be marginally acceptable, in writing up your experiment, to present your best data and casually refer to them as typical (because you mean typical of the phenomenon, not typical of your data), but it is not acceptable to move one data point just a little bit to make the data look better. All scientists would agree that to do so is fraud. That is because an experiment must report physical reality, and only an honest presentation of all the data can assure that it does.

In order to define as precisely as possible what constitutes scientific misconduct or fraud, we need first to have the clearest possible understanding of how science actually works. Otherwise, it is all too easy to formulate plausible-sounding ethical principles that would be unworkable or even damaging to the scientific enterprise if they were actually put into practice. Here, for example, is a plausible but unworkable set of such precepts.

1. A scientist should never be motivated to do science for personal gain, advancement, or other rewards.
2. Scientists should always be objective and impartial when gathering data.
3. Every observation or experiment must be designed to falsify a hypothesis.
4. When an experiment or an observation gives a result contrary to the prediction of a certain theory, all ethical scientists must abandon that theory.
5. Scientists must never believe dogmatically in an idea or use rhetorical exaggeration in promoting it.
6. Scientists must "bend over backwards" (in the words of the iconic physicist Richard Feynman) to point out evidence that is contrary to their own hypothesis or that might weaken acceptance of their experimental results.
7. Conduct that seriously departs from commonly accepted behavior in the scientific community is unethical.
8. Scientists must report what they have done so fully that any other scientist can reproduce the experiment or calculation. Science must be an open book, not an acquired skill.
9. Scientists should never permit their judgments to be affected by authority. For example, the reputation of the scientist making a given claim is irrelevant to the validity of the claim.
10. Each author of a multi-author paper is responsible for every part of the paper.
11. The choice and order of authors on a multi-author paper must strictly reflect the contributions of the authors to the work in question.
12. Financial support for doing science and access to scientific facilities should be shared democratically, not concentrated in the hands of a favored few.
13. There can never be too many scientists in the world.
14. No misleading or deceptive statement should ever appear in a scientific paper.
15. Decisions about the distribution of scientific resources and publication of experimental results must be guided by the judgment of scientific peers who are protected by anonymity.

Let's now look at each of our diktats in turn, beginning with principle 1. There is a parallel in economic life, where well-intentioned attempts to eliminate the role of greed or speculation can have disastrous consequences. In fact, seemingly bad behavior such as the aggressive pursuit of self-interest can, in a properly functioning system, produce results that are generally beneficial.

Principles 2 and 3 derive from the following arguments. According to Francis Bacon, who set down these ideas in the seventeenth century, science begins with the careful recording of observations. These should be, insofar as is humanly possible, uninfluenced by any prior prejudice or theoretical preconception. When a large enough body of observations has accumulated, one generalizes from it to a theory or hypothesis by a process of induction, that is, by working from the specific to the general.

Historians, philosophers, and those scientists willing to venture into such philosophic waters are virtually unanimous in rejecting Baconian inductivism as a general characterization of good scientific method (adieu, principle 2). You cannot record all that you observe; some principle of relevance is required. But decisions about what is relevant depend on background assumptions that are highly theoretical. This is sometimes expressed by saying that all observation in science is "theory-laden" and that a "theoretically neutral" language for recording observations is impossible.

The idea that science proceeds only and always by inductive generalization is also misguided. Theories in many parts of science have to do with things that cannot be directly observed at all: forces, fields, subatomic particles, proteins, and so on. For this and many other reasons, no one has been able to formulate a defensible theory of Baconian inductivist science. Although few scientists believe in inductivism, many have been influenced by the falsifiability ideas of the twentieth-century philosopher Karl Popper. According to these ideas, we assess the validity of a hypothesis by extracting from it a testable prediction. If the test proves the prediction false, the hypothesis is falsified and must be rejected. The key point to appreciate is that no matter how many observations agree with its predictions, they will never suffice to prove that the hypothesis is true, or verified, or even more probable than it was before. The most we are allowed to say is that the theory has been tested and not yet falsified. Thus an important asymmetry between verification and falsification informs the Popperian model: we can show conclusively that a hypothesis is false, but we can never demonstrate conclusively that it is true.
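The asymmetry can be spelled out in elementary propositional logic. The sketch below is one conventional way of writing it, with H standing for the hypothesis, O for an observable prediction, and A for the auxiliary assumptions taken up in the paragraphs that follow; the notation is illustrative, not Popper's own apparatus.

```latex
% Popperian asymmetry, in propositional form.
% H: hypothesis, O: observable prediction, A: auxiliary assumptions.
% (Displays assume amsmath/amssymb for \text and \nvdash.)

% Falsification is valid (modus tollens): a failed prediction refutes H.
\[ H \to O,\ \neg O \ \vdash\ \neg H \]

% Verification is invalid (affirming the consequent): a confirmed
% prediction does not establish H.
\[ H \to O,\ O \ \nvdash\ H \]

% With auxiliary assumptions, the testable premise is really
% (H \land A) \to O, so a failed prediction refutes only the conjunction:
\[ (H \land A) \to O,\ \neg O \ \vdash\ \neg(H \land A),
   \quad\text{i.e.}\quad \neg H \lor \neg A . \]
```

The third line anticipates the difficulty discussed next: when a prediction fails, logic alone cannot tell us whether the hypothesis or one of the auxiliary assumptions is at fault.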
In this view, science proceeds entirely by showing that seemingly sound ideas are wrong, so that they must be replaced by better ideas.

Inductivists place much emphasis on the avoidance of error. By contrast, advocates of falsifiability believe that no theory can ultimately be proved right, so our aim should be to detect errors and learn from them as efficiently as possible. Thus a laudable corollary of the Popperian view is that if science is to progress, scientists must be free to be wrong.

But falsifiability also has serious deficiencies. Testing a given hypothesis, H, involves deriving from it some observable consequence, O. In practice, however, O may depend on other assumptions, A (auxiliary assumptions, philosophers call them). So if O turns out to be false, it may be that H is false, but it may also be that H is true and A is false.

One immediate consequence of this simple logical fact is that the asymmetry between falsification and verification vanishes. We may not be able to verify a hypothesis conclusively, but we cannot conclusively falsify it either. Thus it may be a good strategy to hang onto a hypothesis even when an observation seems to imply that it is false. The history of science is full of examples of this sort of anti-Popperian strategy succeeding where a purely Popperian strategy would have failed. Perhaps the classic example is Albert Einstein's seemingly absurd conjecture that the speed of light must be the same for all observers, regardless of their own speed. Many observations had shown that the apparent speed of an object depends on the speed of the observer. That rule turned out not to hold for light, and the result was the special theory of relativity (so much for principle 3).

Both inductivism and falsifiability envision the scientist encountering nature all alone. But science is carried out by a community of investigators. Suppose a scientist who has devoted a great deal of time and energy to developing a theory is faced with a decision about whether to hold onto it in the face of some contrary evidence. Good Popperian behavior would be to give it up, but the communal nature of science suggests another possibility. Suppose our scientist has a rival who has invested time and energy in developing an alternative theory. Then we can expect the rival to act as a severe Popperian critic of the theory. As long as others are willing to do the job, our scientist need not take on the psychologically daunting task of playing his own devil's advocate. In fact, scientists, like other people, find it difficult to commit to an arduous long-term project if they spend too much time contemplating the various ways in which the project might be unsuccessful (principle 4).

(Continues...)