In an opinion piece entitled "Science and finance: same symptoms, same dangers?", published in EMBO Reports nearly a year ago, Laurent Ségalat, a principal investigator at the CNRS Center for Molecular and Cellular Genetics in Lyon, describes modern science as dangerously analogous to high finance. Once one comes to terms with the obvious shortcomings of the simile – scientists don't do business lunches at Michelin-starred restaurants or buy a new Maserati just because the color of the old one went out of fashion – the essay proves enjoyable and highly instructive. To summarize it in one phrase, the finance analogy is based on the observation that publications accumulate at ever increasing speed, much as if they were part of a gigantic Ponzi scheme of knowledge.
To critically assess the current situation, one has to keep in mind that the point of science is to gain knowledge about the system under study and then, and this is crucial, to disseminate this knowledge freely, thus contributing to understanding and progress. Experiments, study, discussions and hypotheses are the essence of science. Results form the basis of further experiments; every answer spawns ten new questions that dig deeper into the system. The publication of results is necessary to spread the knowledge, and as long as a publication is properly indexed and freely available, the results and discussions in it contribute to the increase in human knowledge. In the traditional view of science, any publication is judged by how much it pushes the boundaries of knowledge and facilitates continued work down the road. This is not how it is these days, though.
In scientific publishing there is a clearly established and viciously defended hierarchy, defined by the average number of citations that papers in a given journal receive. As science builds upon earlier findings – scientists really do stand on the shoulders of giants – these earlier findings are cited and acknowledged in a list of references at the end of each paper. The most influential papers are cited most frequently and rack up the highest citation scores. A journal's impact factor is, roughly, the average number of citations that the papers it published over the previous two years receive in the current year. The higher it is, the more prestigious and authoritative the journal is considered. The published papers inherit that prestige, and a positive feedback loop ensures that papers in the most prestigious journals are cited most often: by being published there, they carry authority. Thus, the higher the impact factor of a journal, the more desirable it is to publish in it, and the more manuscripts scientists submit in the hope of benefiting from the journal's prestige. And the higher the impact factor, the more ruthlessly the editor has to weed out submissions.
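To make the arithmetic concrete, here is a minimal sketch of a two-year impact factor calculation; the paper counts and citation counts are invented purely for illustration, not real data for any journal.

```python
# Minimal sketch of a two-year impact factor calculation.
# All numbers below are hypothetical, chosen only to illustrate the arithmetic.

def impact_factor(citations_received: int, citable_items: int) -> float:
    """Citations received this year to papers from the previous two years,
    divided by the number of citable items published in those two years."""
    return citations_received / citable_items

# Imaginary journal: 210 papers over the last two years, cited 3150 times this year.
print(impact_factor(citations_received=3150, citable_items=210))  # -> 15.0
```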
In the life sciences, three journals are considered to stand above the rest. Getting published in one or more of them is a prerequisite for a good job and an accelerating career. It's also key to being taken seriously by peers. Papers published in the top journals count for more than papers published in lower-key journals. This is of course profoundly irrational and contradicts science as it should be. As mentioned above, papers must ultimately be judged by the significance of the results they present, by the impact they make on science. The experiment that would test whether publication in a top journal predicts high impact is impossible to do. After all, one can't publish the same paper twice in journals of vastly differing repute without anyone noticing, or observe parallel realities in which the paper has been published in one journal or the other. But it's easy to argue, at least retrospectively, that the scientific content and the quality of the work are independent of where a paper is published.
One could make the reverse argument – and those in the publishing industry do make it – that the top three journals attract the strongest submissions, and that the best of these are chosen for publication. It's a bold claim, given the flood of manuscripts and the dense content of scientific publications: how could one possibly identify the best papers before their impact on their fields becomes clear? But much like commodities futures traders in the City, journal editors claim the ability to place predictive bets that will pay off. This is preposterous in both cases. In addition, the vagaries of fashion change what's hot and important from year to year, and editors always prefer articles that make a journal appear at the cutting edge of science as perceived by the scientific public.
Another problem is that the pressure to publish in top journals (for jobs, promotions and respect) and, more fundamentally, the pressure to publish first – because in the sciences, the second winner is really the first loser – can lead to hasty experiments and overblown conclusions. By some estimates, up to a quarter of all papers published in the big three eventually turn out to be wrong. By the time they are put right, these papers will have left a mark on scientific thinking that's hard to erase, because readers, by virtue of the place of publication, considered them more significant than they were.
Are scientists so stupid that they cannot see when something is wrong? Well, it's not the case anymore – if it ever was – that every experiment is immediately repeated and verified by other researchers working in the same field. It wouldn't be possible, given the complexity of methods and apparatus. Published results and, to a lesser extent, conclusions are important guides for the design of new experiments. They are always taken with a grain of salt, but a basis of trust in the quality of publications and in their veracity underlies all scientific work.
Ségalat calls flawed, incorrect and redundant papers the toxic assets of science and is rather concerned about their spread, and it is true that they have unhealthy effects on science and scientists. Publication of results that later turn out not to be true is a costly mistake, mostly in terms of time and effort wasted. It's costly to those who try in vain to repeat earlier experiments and to those whose experimental designs are based upon flawed premises derived from earlier conclusions. Until those flaws are exposed in follow-up papers, progress in the affected field will be held back for months, maybe years.
Between each of the steps on this path of warped scientific inquiry lie long delays. Experiments need doing, and doing again; results need interpretation and writing up. Papers need reviewing and publishing, then reading. Experiments and the resulting conclusions need independent verification, and repeating if doubts arise. Refuting an earlier claim is neither straightforward nor obvious, yet it is relatively easy compared to the struggle to get the refutation published. The whole process can take years, especially if the initial result was published in a prestigious journal or was received enthusiastically and quickly accepted into the scientific canon. In that case, editors may categorically block the submission, and even if that first hurdle is surmounted, reviewers will want to see extra-rigorous proof. Unfair as it is, it's harder to right a wrong than to get something wrong accepted.
The current approach is potentially lethal to young scientists on their way into the profession. Graduate students and fresh Ph.D.s alike have only a short window in which to complete their projects and build a career defined by a paper trail of high-impact publications. There is no time to squander opportunities. For someone working on a project that turns out to be based on misinterpreted evidence, years are easily wasted and a nascent career is potentially wrecked.
At the heart of this problem is the artificial and misleading authority of the impact factor. If science weren't so focused on it and instead concentrated on publishing results without discrimination and haste, but with care and thought, knowledge would probably accumulate faster and more smoothly. I already get email reminders of new papers on all the topics I'm interested in, as indexed in a public repository that covers the vast majority of scientific publications. I don't really need to know which journal they're in. A quick glance through the abstracts tells me whether I should invest the time to read them. Some community ranking, on a paper-by-paper basis, would let the important papers stand out, not the lucky ones. Flawed papers would be exposed and flagged quickly, with no need for a drawn-out and contentious retraction process. Science could progress on merit alone.
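For what it's worth, here is a minimal sketch of what such paper-level community ranking could look like; the fields, weights and example entries are assumptions of mine, not a description of any existing repository or service.

```python
# Hypothetical sketch: rank papers by community feedback rather than journal prestige.
# Field names, weights and example entries are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    endorsements: int = 0   # readers who found the paper useful or sound
    flags: int = 0          # reports of flaws or irreproducible results
    replications: int = 0   # independent confirmations of the findings

    def score(self) -> int:
        # Reward replications most, count endorsements, penalize flags heavily.
        return 3 * self.replications + self.endorsements - 5 * self.flags

papers = [
    Paper("Careful study, independently confirmed", endorsements=40, replications=2),
    Paper("Spectacular claim in a glossy journal", endorsements=120, flags=30),
]

# Important papers rise to the top; flagged ones sink, regardless of where they appeared.
for paper in sorted(papers, key=lambda p: p.score(), reverse=True):
    print(f"{paper.score():>4}  {paper.title}")
```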