Scientific Judges

Does the System Work?

How do we know what good science is? There are some good intuitive benchmarks. Good science helps us explain things that we wouldn’t otherwise understand. Good science can be applied to help us build new things or improve the way we live. Good science provides reliable facts, things that can be verified by many people from observations (even if these observations need some interpretation, as in the output of a brain scan).
These are all criteria that work well when we look back on science and pick out the greatest hits. But what about when we’re judging science in the moment, when we don’t know how much a scientific theory will be able to explain, whether it can be applied to solve a problem, or even whether other people will be able to confirm a new fact as reliable? Science is a highly competitive career, with many people struggling to make contributions and to find work. How do we decide what good science is for the purposes of handing out jobs or money for research?
Right now, the answer is “we ask scientists in the same general area to look at each other’s work and judge it.” New science needs to be judged by someone who understands its context: the facts, theories, and important questions that still need to be answered by research in this area. It also needs to be judged by someone trained to interpret and critique the methods and statistics used to arrive at conclusions. Who else already understands the context and has the specialized skills? Other scientists. So the current standard is to have two or three scientists working in an area read newly submitted descriptions of scientific findings, critique them, and then give a thumbs up or thumbs down judgment about whether they are worth publishing in a given journal.
But if our everyday judgments about what is good, bad, or mediocre scientific achievement are based on the opinions of other scientists, shouldn’t we worry that these opinions won’t live up to the aspirations of science as a rational system? It’s possible that subtle biases will influence scientists’ judgments without their even realizing it, or that the biases aren’t so subtle and the scientists aren’t so unaware. Put another way, if we were to ask a bunch of students to grade each other’s papers, and those grades were going to determine which of them got jobs and how good those jobs were, would you trust them if they just said “Scout’s Honor, we’re unbiased”?
Here’s a subtle bias that might influence judgments of science. Since at least 1980, when Robert Zajonc published a paper called “Feeling and Thinking: Preferences Need No Inferences,” psychology has documented instances in which mere familiarity with something increases our liking of it. Importantly, this doesn’t even need to enter our conscious awareness; as Zajonc put it, preferences (liking) need no inferences (thinking). In science, the theories people are already familiar with, the Big Name researchers who have already published a lot, and the familiar methods used to test questions all might get an unconscious liking boost over a new theory, a new researcher, or a new method.
At the day-to-day level, an experiment on medical reviewers found that when they were given the name of a prestigious author and university while reviewing, as opposed to getting an anonymous manuscript, they were much more likely to accept it for publication: 87 percent accepted when the prestigious name was attached, versus 68 percent when it was not. A similar paper in computer science built a model predicting acceptance of papers at a prestigious conference. Three factors predicted a greater likelihood of acceptance when reviewers knew who submitted (vs. reviewing an anonymous paper): whether the paper came from a famous researcher, a famous university, or a famous tech company. Scientists expected to make unbiased decisions when reviewing research seem to be routinely biased by fame.
Beyond just accepting a specific result, researchers also hate giving up on an old theory to accept a new one. As physicist Max Planck put it: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Or, as it is often paraphrased, “science advances one funeral at a time.” As long as the proponents of an old theory are reviewing work, it will be very difficult to get them to change their opinions on the validity of a new one.
Another common bias is more self-serving: a researcher who champions one important theoretical perspective might actively try to give bad reviews to competing theories, or get friends and colleagues to do so. This practice is perceived as so routine that many journals allow researchers to provide lists of people they do not want reviewing their work, because they don’t think those people will give them a fair hearing. I was introduced to this type of theoretical tribalism as an early graduate student, when a famous researcher from an Opposing Theoretical Perspective (which I hadn’t fully made up my mind about) corrected my *question* to a guest speaker so that it was more in line with their views.
Anecdotally, I have heard stories of psychologists choosing which journal (or which section of a journal) to submit to based on how friendly they think the editor is to their “perspective,” rather than on the content of the paper; of famous psychologists getting angry when a reviewer they knew personally didn’t give their student an extra boost in ratings; and of famous psychologists who get rejections writing angry letters to the editor to appeal the decision, an option that would be highly unlikely to work for a junior colleague. All of these strategies allow people with a good professional network to game the publication system, maintaining their position as top scientists through insider knowledge and relationships rather than by doing top science.
The pervasiveness of politics and the subtlety of cognitive biases mean that the academic peer review system needs reforming. Our system of simply trusting scientists to be unbiased doesn’t work. Like many other systems in American society, it’s biased so that the rich tend to get richer, and everyone else has to work extra hard to break into the elite. If I want to get a good job (or really any job) doing research at a university, I need to land a paper in an A-level research journal. The way I should be able to do that is by finding a scientific discovery that meets all the criteria we reviewed up top: it explains things we didn’t already understand, it can be applied to help people, and/or it presents a new reliable fact about the world. As sociologist Robert Merton wrote*, “The acceptance or rejection of claims entering the lists of science is not to depend on the personal or social attributes of their protagonist.” When scientists allow the familiar and the politically expedient to bias their judgments, they fail to meet the requirements of science. It’s time to do better.
* I actually got this quote from a paper by one of my scientific heroes, Simine Vazire, so I’ve linked to her paper (as opposed to Merton).

(Alexander Danvers, Ph.D., a Postdoctoral Fellow at the University of Oklahoma, researches emotions and social interactions.)
