Identifying Implicit Bias in Grant Reviews
Jude Mikal, Stuart Grande

In my first experience as a grant review panelist, I received a short training on implicit bias in grant reviews. We were told to “be aware of biases” and to “limit evaluation to the research,” leaving the onus of identifying implicit bias on us, the reviewers. But what happens when you aren’t aware of the biases you hold? How does a reviewer “focus on the research” when the application materials reveal so much telling information about the applicant?
Following are eight criticisms of grant proposals that could indicate implicit bias in grant reviews. We developed this list through conversations with colleagues and mentors who review grant proposals.
1. Basing decisions on a track record of extramural funding
It can be intimidating for a reviewer to disrupt the track record of prominent researchers. After all, they may think, other reviewers have found this person’s work to be worthwhile – am I missing something? Unfortunately, this approach relies on an investigator’s previous research as a proxy for the quality of the proposal under review. And while there are no doubt transferable skills in research, each proposal should stand on its own merits. Additionally, new investigators, junior faculty, and non-tenure-track researchers are unable to lean on a track record of funding as evidence of research quality.
2. Basing decisions on investigator rank or university prestige
Ambitious proposals presented by any investigator should give a reviewer pause, and reviewers are wise to ask whether the proposal includes the personnel necessary to conduct the project. However, evidence of feasibility should come from the methods and the team, not from tenure. The assumption that new investigators are not capable of spearheading innovative projects leads young investigators to write smaller, more cautious proposals that are less innovative or impactful – and therefore less likely to be funded.
3. Basing judgement on grammar or syntax
Allowing syntax or grammar errors to influence the evaluation of a scientific proposal’s quality can put an undue burden on international investigators and those for whom English is not their first language. These researchers already have to grapple with publishing academic articles in a foreign language alongside writers and speakers for whom English is their native language. Additionally, scholars at elite institutions are more likely to have access to writing review services.
4. Basing judgement on sensationalist introductions
Evaluating proposals based primarily on the gravity of the problem can lead to funding significant problems instead of rigorous solutions. I have seen it happen. Dollars are directed towards the most emotionally compelling problems, with less attention given to the contribution of the research to understanding and ultimately resolving those problems. Furthermore, certain experienced and well-funded researchers know how to play this card better than others, and this can lead to a concentration of research dollars.
5. Placing too high a value on methodological innovation
Not all research needs to reinvent the wheel. Compelling problems can have straightforward solutions. Rejecting proposals that take a more straightforward approach risks encouraging convoluted research designs and overly complicated methods.
6. Evaluating based on an investigator’s advisor or social networks
Social networks and social capital play an important role in access to resources in academia – oftentimes too important a role. Unfortunately, access to those networks and that capital is not distributed equally among individuals and can be affected by university rank, rank within the university, department, gender, sex, immigration status, or race. Allowing social networks to influence our evaluation of research tips the scales towards already well-represented investigators, institutions, departments, and demographic groups – and the tradition of putting much into the hands of few is passed down to another generation.
7. Requiring pilot data for small projects
Reviewers often expect to see pilot work, even for small research grants. The problem is that not every researcher on campus has access to pilot funding. Furthermore, some funding mechanisms – at the National Institutes of Health, for example – were designed for junior faculty or faculty pursuing “proof of concept” studies. Requiring pilot data for small projects favors researchers with an extended research track record and has the potential to bar new researchers from funding.
8. Not requiring diversity in investigators
While reviewers are comfortable requiring investment from senior investigators on junior colleagues’ projects, seldom do they review investigative teams for evidence of diversity. Additionally, there is no clear way to highlight diversity within a proposal: e.g., the number of – and critical role played by – women and people of color, partnerships with underfunded states or universities, or the contribution of non-tenure-track researchers and staff.
Hailed as an unbiased way to assess quality and allocate resources, peer review takes decision-making power out of the hands of the few and puts it into the hands of the many. Yet without active resistance against implicit bias, peer review is just as likely to favor the status quo.