There are a number of different connections between values and science. These sometimes get lumped together in the values-and-science literature. Even when they are distinguished, it isn’t always noted that each connection involves somewhat different values and applies to somewhat different aspects or parts of science.
Values are involved in deciding to do science at all (rather than spending our time on something else) and which science we do (e.g., physics, biomedicine, or geology). Also, our ethical commitments influence which research methods we use.
Here, values play what Heather Douglas calls a direct role. The values can be anything that motivates us, but this only applies to our decisions launching enquiry. Data analysis and theory choice are insulated from these kinds of considerations.
At least since Kuhn’s “Objectivity, Value Judgement, and Theory Choice”, there’s been a tendency to count theoretical desiderata as values. We want theories that are accurate, consistent, simple, broad in scope, and fruitful.
Although one might revise Kuhn’s list, this connection presumes some list of values as the rationally relevant ones. My passions and moral commitments don’t get to add extra items to the list. This connection is compatible with a hedged value-free ideal which says that no other values are relevant to theory choice.
Some concepts or terms have ineliminable normative weight to them. When observations and theories are posed in such terms, they are inescapably value-laden. Although such terms could be replaced with operationalized substitutes, this might lose the phenomena of interest. Replacing well-being with mean income, for example, would involve a serious shift in what an economic theory could be about.
Any values could be implicated in this way, provided a thick concept can be found which involves them. The connection to values is most readily made in medical and social sciences. Some connections can be made to non-human biology. It is harder to see how more abstruse sciences like particle physics involve any thick concepts.
There is always a tension between wanting to believe true things and wanting not to believe false things. What balance should be struck is a question of values. (This is what Douglas calls the indirect role of values.)
The values here are not valuations of outcomes simpliciter, but instead valuations of making certain kinds of error. How bad would it be to believe something false? How bad would it be to disbelieve something true? And what would the cost be of suspending judgment?
As I blogged recently, I think this is ubiquitous. Of course, for a question of abstruse science, we might judge that there is really no special cost to any possible error. But the judgement that there are no extraordinary values to consider is itself a value judgment. So the connection applies.
Even granting that, the connection doesn’t really do any work if it is generally agreed that there are no extraordinary values to consider. And I guess it only does a little bit of work in cases where the values on both sides are obvious, as in product safety testing. It would help if I could spell out in a useful way where the connection does interesting work. (That’s effectively what Joyce asks in the discussion that prompted my recent post.)
In “Value Judgements and the Estimation of Uncertainty in Climate Modelling”, Justin Biddle and Eric Winsberg argue that both the predictions and the estimates of uncertainty that come out of climate models depend on how the models are constructed. For example, a model of surface temperature might have a module to account for surface ice and then have another module added for ocean currents. When the surface ice module is developed, various kluges are employed to make it work with the model.
These become generatively entrenched as the further module for ocean currents is developed, and that involves further kluges. A different project which started in a different way and added refinements in a different order would yield different results. This means that earlier choices influence the results of the ultimate model. Deciding that it was more pressing to reckon with surface ice than with ocean currents changed the outcome in ways that are not readily apparent.
In principle, one could rebuild the whole thing in a different order and compare the results. That is practically impossible. It takes a long time to build climate models. Although we can continue to construct more and different models, we will never be able to sum over all the possible orders in which various factors could be considered.
I think of the general connection as path dependence: Past choices about which phenomena to attend to affected the development of scientific accounts such that their content is different than it would have been if different choices had been made.
Importantly, it’s not just whether we know something or not; that’s just the that-and-which connection that everybody acknowledges. The past path of development affects not just which things we know about but also what we think we know about those things.
Interestingly, the values connected to science here are the values in the past which influenced decisions to add factors to the model in one order rather than another. Contrast this with epistemic risk, where what matters to our theory choice is how we now assess various costs. Path dependence suggests that past values (whether or not we endorse them now) are in a sense baked into the details of scientific claims.
This is not a connection between values and science which I’ve seen discussed outside the Biddle and Winsberg paper, and they just make the case for climate science. The details of their argument turn on the fact that climate is too complicated to model in one go, and that climate models are constructed piecemeal over time. I suspect that path dependence applies more broadly, but I’m not sure.