Empirical Results concerning Causal Reasoning and Their Philosophical Significance

As explained in the previous post, one very rich area of inquiry concerns the extent to which normative theories of causal reasoning, from philosophy and elsewhere, successfully characterize how, as an empirical matter, various sorts of subjects (adults, children, non-human animals) reason. This has been explored by psychologists and other researchers, but many of the results are less well-known to philosophers than they should be. These results matter not only for their intrinsic interest but for at least two other reasons. First, as noted previously, many philosophical theories of causation make (in addition to whatever other claims they make) what look like empirical claims about how people will in fact judge and reason, and experimental results are relevant to the assessment of these. Second, although, as acknowledged in Causation With a Human Face (CHF), there is no direct entailment from (i) how people in fact reason to (ii) how they ought to reason, results about (i) can sometimes be brought to bear on (ii) if certain additional assumptions hold. To take just one possibility, suppose we find, as an empirical matter, that people's causal reasoning sometimes (but not always) embodies feature X, that when it does their reasoning is successful when evaluated in terms of goals associated with causal reasoning, and that when it does not embody X, it is less successful. Particularly if we can see, via mathematical or empirical analysis, how X contributes to success, this provides some support for the normative appropriateness of causal reasoning that embodies X and puts pressure on normative theories according to which X is irrelevant to how we should think about causation, or according to which incorporating X into causal reasoning is a mistake.
For example, although a number of philosophers reject a role for proportionality considerations (in the sense of Yablo, as discussed in a later post) in causal judgment, people's judgments are, as a matter of empirical fact, influenced by such considerations, and it is fairly straightforward to show that judgments incorporating such considerations do better at serving goals associated with causal reasoning.

A second issue is why, if projectivist views are correct, we humans engage in all that projecting, making the rather gross mistake of confusing the contents of our minds with the contents of the world. As noted above, the evidence seems strongly to support the claim that the additional cognitive structure that goes beyond the representation of patterns of association is functional, in that it helps to enable activities like planning, the extrapolation of observed patterns to not-yet-realized conditions, and so on. Does it make sense to suppose that there is nothing "out there" (besides associations) that this additional cognitive structure tracks or latches on to, no worldly correlate that helps to explain its functionality? I don't put this forward as a conclusive argument against projectivism, but it is an argument that (it seems to me) projectivists need to address.
Many but not all philosophical theories conceive of causal relations in terms of difference-making: causes make a difference to their effects. Difference-making in turn is often spelled out in terms of counterfactuals and, if one is an interventionist like me, in terms of interventionist counterfactuals, that is, counterfactuals that describe what would happen if an intervention were to be performed. According to other theorists, however, causation does not have anything intrinsically to do with difference-making but should instead be understood in terms of a connecting process or a "mechanical" relation of some kind between cause and effect. These philosophers (and others as well) often claim that counterfactuals are unclear, context-dependent, and generally unsuitable for the elucidation of causal relationships. It is thus relevant that experimental studies show that ordinary subjects do readily associate counterfactuals with causal claims and that these counterfactuals are interventionist counterfactuals, which subjects distinguish from other sorts of counterfactuals (e.g., backtracking counterfactuals). Indeed, even young children are capable of reasoning correctly about mechanical devices by making use of counterfactuals with complex antecedents. (See the discussion of the conditional intervention principle in CHF, Chapter 4.) CHF describes a number of additional results along similar lines, including experiments suggesting that children pass through a stage in which they are able to learn about dependency relations from their own interventions and by observing the interventions of others but are not able to successfully combine this with information about dependency relations that does not involve interventions by agents. This is then followed by a stage in which agent-based and non-agent-based information is successfully combined, as it is in adult causal cognition.
CHF argues that this has implications for how we should think about “agency” theories of causation of the sort developed by Price, Menzies and others. 
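The interventionist/backtracking distinction that subjects draw can be illustrated with a toy simulation. The common-cause setup (rain causes both wet ground and umbrella-carrying) and all the probability numbers here are my own, purely for illustration; the point is just that observing an effect of a common cause licenses a backtracking inference to the cause's other effects, while intervening on that same variable does not:

```python
import random
random.seed(0)

def sample(do_wet=None):
    # Common-cause structure: Rain -> WetGround and Rain -> Umbrella
    # (all probabilities are illustrative, chosen by hand)
    rain = random.random() < 0.3
    umbrella = random.random() < (0.9 if rain else 0.1)
    if do_wet is None:
        wet = random.random() < (0.95 if rain else 0.05)
    else:
        wet = do_wet  # intervening severs the Rain -> WetGround arrow
    return wet, umbrella

N = 100_000

# Backtracking/observational: seeing wet ground is evidence of rain,
# and hence (indirectly) of an umbrella.
obs = [sample() for _ in range(N)]
p_umbrella_given_wet = (sum(u for w, u in obs if w)
                        / sum(1 for w, _ in obs if w))

# Interventionist: hosing down the ground ourselves tells us nothing
# about rain, so the probability of an umbrella stays at its base rate.
ints = [sample(do_wet=True) for _ in range(N)]
p_umbrella_do_wet = sum(u for _, u in ints) / N
```

With these numbers, conditioning on wet ground pushes the umbrella probability up toward roughly 0.8, while intervening leaves it near the unconditional value of about 0.34, which is the kind of contrast subjects appear sensitive to when they separate backtracking from interventionist counterfactuals.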
With this as background, CHF explores a number of empirical results concerning causal cognition. Some of these (particularly results concerning the role that invariance and proportionality play in people's causal reasoning) will be the subjects of later posts. Here I survey some additional results not discussed in those posts.
This is interesting for several reasons. First, it provides another context, besides familiar ones like language learning (e.g., Fodor and Pylyshyn vs. connectionism), in which the contrast arises between associationist approaches and models that claim that additional structure is required for the characterization of learning and cognition. In other words, we have sophisticated computational models of causal cognition whose predictions we can compare with those of more associationist models. A closely related issue is this: supposing that this additional structure is present in human causal cognition, what should we make of this fact? One possible response amounts to a kind of "error" theory: human causal cognition may have this richer structure, but all that is "out there" in the world is "Humean" patterns of association. For some reason, our causal cognition misrepresents what is really out there, endowing it with more structure than it really has. Of course there is precedent for such a view in Hume himself and in more recent "projectivist" accounts of causation, according to which the modal commitments apparently carried by causal thinking represent a kind of projection onto the world of features that have to do in some way with our psychology (e.g., the fact that we are deliberating agents located in time, as argued by Huw Price). One point worth noting is that if such views are broadly correct, what is projected must be understood as having a very rich and sophisticated structure, since this is what is present in empirically supported "rational" models of human causal cognition developed by Gopnik, Cheng, Tenenbaum, and others. Humean generalities about the mind "spreading itself over the world" will not capture this. (Of course most of these psychologists are not projectivists in the philosophical sense just described; I'm describing what would need to be projected if their views were to take a projectivist form.)
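The sense in which causal structure goes beyond "patterns of association" can be made concrete with a standard textbook observation: two causal graphs can induce exactly the same joint distribution (so no amount of associationist information distinguishes them) while disagreeing about what happens under intervention. A minimal sketch, with parameter values I have chosen purely for illustration:

```python
import math

# Two Markov-equivalent graphs over binary variables X, Y:
#   Graph A: X -> Y        Graph B: Y -> X
# with parameters chosen so both induce the identical joint distribution.
p_x, p_y_given_x = 0.5, {1: 0.8, 0: 0.2}   # Graph A's parameters
p_y, p_x_given_y = 0.5, {1: 0.8, 0: 0.2}   # Graph B's parameters

def bern(p, v):
    # Pr(V = v) for a Bernoulli(p) variable
    return p if v == 1 else 1 - p

joint_a = {(x, y): bern(p_x, x) * bern(p_y_given_x[x], y)
           for x in (0, 1) for y in (0, 1)}
joint_b = {(x, y): bern(p_y, y) * bern(p_x_given_y[y], x)
           for x in (0, 1) for y in (0, 1)}

# Purely associationist information cannot tell the graphs apart:
same_joint = all(math.isclose(joint_a[k], joint_b[k]) for k in joint_a)

# But they disagree about the interventionist counterfactual
# "what would Y be if we were to set X to 1?":
p_y_do_x1_a = p_y_given_x[1]  # A: the X -> Y arrow stays intact (0.8)
p_y_do_x1_b = p_y             # B: do(X) severs the arrow into X (0.5)
```

If human causal representations carry this interventionist content, as the models discussed here claim, then what would have to be "projected" on a projectivist reading is precisely this extra, distribution-transcending structure.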
Another important issue in the empirical psychology of causal cognition that has philosophical significance concerns the extent to which human causal cognition can be understood in terms of models of conditioning like the Rescorla-Wagner (R-W) model and its variants. (Some prominent researchers in animal learning have claimed that it can.) The issues here are complex, but a standard assumption, made by many on both sides of this debate, is that, at least in some contexts, models like R-W predict that human causal judgment involving a single candidate cause and effect, represented by dichotomous variables, closely tracks a quantity called ΔP, where ΔP = Pr(E|C) - Pr(E|not-C). (It is assumed that no confounding is present.) Several experiments described in CHF (including Cheng's experiments on causal power, described in a later post) are inconsistent with this prediction but consistent with models that ascribe a richer structure to human causal cognition, going beyond what is captured by models like R-W. Roughly speaking, these richer models claim that human causal representations and reasoning patterns have a structure that cannot be fully characterized in terms of "associationist" statistics like ΔP. This is the case, for example, for models that characterize causal cognition in terms of causal Bayes nets. This additional structure, on my interpretation, has modal or counterfactual content, including content having to do with what would happen if various interventions or other changes were to occur. In other words, people's causal representations include the representation of information about interventionist counterfactuals. This additional content is, I claim, essential for activities like planning and is thus highly functional.
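To make the contrast concrete, here is a minimal sketch. The ΔP contrast is as defined above; the causal-power formula is Cheng's standard one for generative causes, ΔP / (1 - Pr(E|not-C)). The specific contingency numbers are my own illustrative choices, not drawn from the experiments: two conditions can have identical ΔP yet differ in causal power, so a learner tracking only ΔP cannot distinguish cases that a learner tracking power (as Cheng's subjects apparently do) treats differently.

```python
def delta_p(p_e_given_c, p_e_given_not_c):
    # Associationist contrast: deltaP = Pr(E|C) - Pr(E|not-C)
    return p_e_given_c - p_e_given_not_c

def causal_power(p_e_given_c, p_e_given_not_c):
    # Cheng's generative causal power: deltaP / (1 - Pr(E|not-C))
    return delta_p(p_e_given_c, p_e_given_not_c) / (1 - p_e_given_not_c)

# Two contingencies with identical deltaP but different base rates of E
# (pairs are (Pr(E|C), Pr(E|not-C)); numbers are illustrative):
high_base = (0.9, 0.6)
low_base = (0.4, 0.1)

dp_high = delta_p(*high_base)       # 0.3
dp_low = delta_p(*low_base)         # 0.3: deltaP cannot separate the cases
pw_high = causal_power(*high_base)  # 0.75
pw_low = causal_power(*low_base)    # ~0.33: causal power can
```

The intuition behind the denominator is that when the effect already occurs often without the cause, there are fewer opportunities for the cause to visibly produce it, so the same observed contrast indicates a stronger underlying cause.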