“Our interests and our values are the engine of scientific discovery.”
“What science needs is not a return to ideological obliviousness but growing ideological awareness: a collective move to uncover the social and political values that influence our scientific research in order to critically evaluate those values and the roles they play.”
Like all human minds, those belonging to scientists use lived experience to create and store representations of the world around them. While these representations can be brought into consciousness, for the most part they operate automatically to organize what we focus on when we look out into the world, how we feel about what we see, and how we respond.
In previous posts, we defined values as our beliefs about what is important to us and the world, and we discussed the motivation for exploring how values and science are enmeshed. In this post, we consider how carrying around a (not fully conscious) set of values and assumptions can affect the work that scientists do. As the first quote above suggests, this is not a morality tale. Without values, science wouldn’t exist – we seek knowledge because it’s important to us, whether as a means to an end or as an end in itself. However, as we’ll discuss, the conduct of science allows plenty of room to let values influence how we go about our work, for better or for worse.
How Values Affect Our Questions, Theories, and Concepts
Values play a central role in influencing what we choose to study. Today’s medical science is the result of thousands of years of effort by people with a passion to improve health. Dissatisfaction with only individual-level or biological explanations led many of us to study population health. Scholars wanting to broaden the understanding of racial inequities pioneered the study of institutional racism. But values can also lead us to be narrow or misguided in what we study. For example, eugenics arose out of Darwinian theory applied through the lens of racial, class, and ableist prejudices. Similarly, male domination of medical research stifled research on women’s health.
This is not a morality tale.
Much research is problem-driven, and how we define and view a problem depends on our values. White middle-class values in the late 20th century promoted the definition of teen pregnancy as a social problem. Researchers and advocates saw becoming pregnant before age 20 as harmful to girls’ life chances because it curtailed education and the accumulation of human capital. Driven by this perspective, they asked questions rooted in a deficit model: What were the negative outcomes of teen pregnancy, and what risky attitudes and behaviors caused it? Eventually, qualitative work uncovered answers to questions the researchers weren’t asking – why teenagers in disadvantaged circumstances might welcome motherhood as a path to adulthood when other paths were blocked.
The theories we choose to guide our work can also be affected by values. “Culture of poverty” theory, widely criticized for attributing poverty to poor people’s values, was abandoned decades ago, but new theories are now emerging for studying culture in the context of poverty. In current population health research, many of us look first to theories about systemic or structural determinants of health, despite the fact that all levels of analysis are relevant in population health science. One reason for focusing less on individual behavior as a source of unhealthy outcomes may be that we are afraid of “blaming the victim.” We may shy away from studying these behaviors out of respect for those whose opportunities may be limited. However, even people with limited opportunities exercise agency, and the value of respect might also prompt scientists to focus on how individuals act to optimize health even in the presence of constraints.
. . . “fair and just” does not mean the same thing to all people.
Another area in which values affect our science is how we define and use concepts. The Robert Wood Johnson Foundation defines health equity as a “fair and just opportunity” for everyone to be as healthy as possible. Yet, as social scientists have pointed out, “fair and just” does not mean the same thing to all people. In these circumstances, what we choose to measure is consequential. Do we take health disparities as a proxy, implicitly equating equity with equality? Or do we study access to all the things we think improve health (e.g., education, income, healthy neighborhoods, nutritious food, care) and exposure to all the things that undermine it? How do we distinguish equality of opportunity from equal use of opportunities?
Into the Weeds: Values and the Process of Doing Science
So far, we’ve been talking about high-level issues in how we do science – the topics we study, the questions we ask, the theories and concepts we adopt. Beyond this, there are myriad decisions about how we proceed: the hypotheses we adopt; the design of our studies, including the populations to study and how (and how many) observations to obtain; the measures we use; the methods we use to analyze our data; the extent to which we experiment with alternative analytic approaches; how we report and interpret our results; how we respond to grant opportunities; what we seek to publish and where we submit it; and whether and how we disseminate results beyond academic audiences.
Norms of scientific behavior and practical considerations drive many of these decisions. Beyond these constraints, however, scientists have many degrees of freedom in how they conduct their research. Given this, some analysts have observed that values can – intentionally or not – steer research towards results that conform with preconceived beliefs. I suspect that all of us may have, at one time or another, fiddled with our models and methods when our research didn’t support our hypotheses; some of us may even have decided to pare down the results we report to those that best fit the desired narrative, or used language that implied causality when we knew we had only a correlation. Because scientists tend to discount the extent of their own biases compared to those they perceive in other scientists, it is even harder to monitor and prevent our own biases from creeping in.
What’s a Scientist to Do?
Given that our values are inextricably enmeshed in our scientific practice, what do we do? In the second quote above, Angela Potochnik asks that we be mindful of our values and how they affect our work. She calls this a collective move. It can be as simple as choosing collaborators who think and feel differently about the topics we study, or who represent the perspectives of the people and communities we study. It can also mean creating a community with a culture that welcomes diverse perspectives and supports constructive debate about how values and science are intertwined.
Can IAPHS create such a community?
Please comment below! Do you believe that scientists should take steps to minimize the effect of personal values on their work? If so, what helps you do this? If not, tell us how you address the intertwining of values and science.
You can access the comment box by signing into the IAPHS website here. Once you’re signed in, scroll down below this post to find the comment box. If you’re inspired, you can enrich the conversation by posting your thoughts as a separate post.
Additional reading: Stuart Ritchie, 2020. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth.