Woodward, J. (2008). “Mental Causation and Neural Mechanisms,” in J. Hohwy and J. Kallestrup, eds., Being Reduced: New Essays on Reduction, Explanation, and Causation (Oxford: Oxford University Press), 218–262.
In Woodward and Hitchcock (2003), we proposed a model of causal explanation in which an explanation of the falling picture might take the following form:
Lombrozo, T., and Carey, S. (2006). “Functional Explanation and the Function of Explanation,” Cognition 99: 167–204.
Hoffmann-Kolss, V. (2014). “Interventionism and Higher-level Causation,” International Studies in the Philosophy of Science 28: 49–64.
Yablo, S. (1992a). “Mental Causation,” Philosophical Review 101: 245–280.
(P) …Suppose we are considering several different causal claims/explanations formulated in terms of different candidate cause variables V1…Vn… involving some target effect or explanandum E… Then a choice of variable Vi (and of the dependency claims regarding E in which Vi figures) satisfies proportionality better than an alternative choice…to the extent that those dependency claims avoid falsity…and omission. (Woodward forthcoming, 369)
Lombrozo, T. (2010). “Causal-Explanatory Pluralism: How Intentions, Functions, and Mechanisms Influence Causal Ascriptions,” Cognitive Psychology 61: 303–332.
Woodward offers a modified definition of proportionality labeled (P). Here is an excerpt:
Yablo (1992a, 1992b) proposed a proportionality constraint on causal relationships. We may illustrate it with an example: A picture hook can support up to 20 pounds. I use it to hang a heavy framed picture weighing 32 pounds, and the picture falls off the wall. (Woodward (2008) gives a similar example.) Consider the following two claims:
I wish to highlight the importance of the parenthetical clause.
I think that a tacit assumption of Yablo’s proportionality condition, and of almost all variants that have been proposed in the literature, is that there can be no division of labor along the lines of (3a) and (3b). One and the same thing, the CAUSE, must perform both tasks: specifying what actually happened, and conveying information about how the effect depends upon changes in the cause. And I think the source of this assumption is the failure to appreciate Woodward’s methodological point (II).
1. The picture’s weighing 32 pounds caused it to fall.
2. The picture’s weighing more than 20 pounds caused it to fall.
An empirical question is whether humans employ causal representations in the form of (3). Do we employ separate representations of the cause variable, and of the way the effect depends upon the cause? Or do we instead try to capture both with a single representation? I do not think that existing literature, such as Lien and Cheng (2000), addresses this question. Indeed, I think it would be difficult to test. In part, this is because prompts employing the word “cause” may be unsuitable for eliciting representations of distinct causal relationships. And in part, this is because it is difficult to express the difference between (2) and (3) in colloquial English.
Yablo, S. (1992b). “Cause and Essence,” Synthese 93: 403–449.
Lien, Y., and Cheng, P. (2000). “Distinguishing Genuine from Spurious Causes: A Coherence Hypothesis,” Cognitive Psychology 40: 87–137.
Causal explanation (3) perfectly satisfies Woodward’s condition (P): the information it provides about how falling depends on weight is both accurate and complete. Thus there is a successful causal explanation that identifies the specific weight—32 pounds—as a cause of the picture falling. We can also give a causal explanation satisfying (P) that uses a different variable, W*, which takes the value one if the weight is greater than twenty pounds, zero otherwise. But Woodward’s condition gives us no reason to prefer this explanation over (3).
Woodward, J., and Hitchcock, C. (2003). “Explanatory Generalizations, Part I: A Counterfactual Account,” Noûs 37: 1–24.
According to the proportionality requirement, (2) is true and (1) is false. The problem with (1) is that it identifies a cause at the wrong “grain”—more specific than appropriate. Yablo uses proportionality to argue that sometimes high-level mental causes exclude low-level physical causes.
3a. W = 32
3b. F = g(W)
3c. F = 1
James Woodward’s Causation with a Human Face defends three methodological proposals: (I) The empirical study of causal reasoning can fruitfully inform the philosophical analysis of causation, and vice versa. (II) Philosophers should attend to distinctions among different kinds of causal relationship, and not just the distinction between causal and non-causal relationships. (III) Our understanding of causation is illuminated by consideration of the practical function of causal concepts and reasoning. I am in broad agreement with Woodward on all of these proposals. My discussion will focus on Woodward’s final chapter, on proportionality, and will particularly highlight (II).
Woodward, J. (Forthcoming). Causation with a Human Face. Oxford: Oxford University Press.
W is a variable representing the weight of the picture in pounds, F is a variable that takes the value one if the picture falls and zero otherwise, and g is a function that takes the value one for arguments greater than 20, and zero otherwise. (3b) has counterfactual import: it says that if W were set to w by an intervention, then F would be equal to g(w). There is a division of labor in (3): (3a) tells us what actually happened—which value of the cause variable was realized; (3b) tells us how the effect variable causally depends on the cause variable. Thus (3) acknowledges that there are different possible ways in which the effect might depend on the cause; for example, a different hook might be able to support more weight without the picture falling.
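The division of labor in (3) can be rendered as a small structural model. The following sketch is my own illustration, not from the text; the names `g`, `W`, and `F` follow the definitions above, and the hook’s 20-pound capacity is taken from the example:

```python
# Sketch of causal explanation (3): the hook supports up to 20 pounds.

def g(w):
    """(3b): how falling depends on weight -- F = 1 (falls) iff w > 20."""
    return 1 if w > 20 else 0

# (3a): what actually happened -- the realized value of the cause variable.
W = 32

# (3c) then follows: the picture falls.
F = g(W)

# Counterfactual import of (3b): if an intervention set W to w,
# F would equal g(w).
assert g(15) == 0   # a 15-pound picture would not have fallen
assert g(32) == 1   # the actual 32-pound picture falls
```

The point of separating `W = 32` from `F = g(W)` is exactly the division of labor in the text: the first records which value was realized, the second carries the dependency information, and a different hook would be modeled by swapping in a different function `g` while leaving `W` untouched.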
I think there are reasons to prefer W over W* as a causal variable. Woodward stresses the importance of causal relationships that are highly invariant. One reason we value such relationships is that they are highly portable: we are likely to find them in a variety of contexts. My fellow commentator Tania Lombrozo has also emphasized the importance of portable causal relationships (Lombrozo 2010, Lombrozo and Carey 2006). I think a similar consideration applies to causal variables. W* is ad hoc. It is constructed specifically to be a proportional cause of the falling picture. By contrast, we may expect W to figure in many different causal relationships. For example, suppose I want to know how much it will cost to ship the framed picture, whether I will be able to lift it, and whether I should buy a more expensive picture hook that supports 40 pounds instead of 20. It makes more sense to formulate a causal model with the single variable W as a cause of all these other variables, rather than a model with several different variables for weight, each one proportional to a different effect. (Hoffmann-Kolss (2014) makes a similar point.)
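The portability point can be illustrated with a sketch in which the single variable W figures in several structural equations at once. The function names and the numerical parameters (shipping rate, lifting limit) below are my own illustrative assumptions; only the 20- and 40-pound hook capacities come from the example:

```python
# Illustrative sketch: one weight variable W is a cause of several effects,
# so no effect-specific proxy (like W* = 1 iff W > 20) is needed.

def falls(w, capacity=20):
    """Whether the hook gives way: depends on weight and hook capacity."""
    return 1 if w > capacity else 0

def shipping_cost(w, rate=1.5):
    """Hypothetical: shipping cost scales with weight."""
    return rate * w

def liftable(w, max_lift=50):
    """Hypothetical: whether I can lift the picture."""
    return w <= max_lift

W = 32
assert falls(W) == 1                  # the 20-pound hook gives way
assert falls(W, capacity=40) == 0     # the stronger hook would hold it
assert shipping_cost(W) == 48.0
assert liftable(W)
```

A binary variable tailored to the 20-pound hook, by contrast, would answer none of the other questions: each new effect would demand its own ad hoc proxy, whereas W serves them all.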