Interventionist approaches to causation and explanation seek to show how a given event outcome depends on other variables or factors. They are a species of counterfactual and contrastive accounts of causation that many, including us, think provide a useful and intuitive way to understand causation. Here, we briefly introduce this approach to causation and then offer some speculations about how to apply the causal modeling used in interventionist theories to some debates about free will.
When considering whether something is a cause, we ask, “What if things had been different?” and by answering this question we identify factors whose manipulation would produce changes in the outcome being explained. If this (cause) variable were altered in these ways, this (effect) variable would be altered in these ways.
James Woodward has developed an attractive interventionist view of this sort. Woodward’s account relies on the notion of a causal model: a representation that encodes hypothetical relationships between variables. These variables represent causal relata, and each variable represents a particular event in such a way that it can be set to different values by interventions. An intervention is an exogenous change to the value of a variable—we consider what happens in the model by tweaking just this variable’s value. (By contrast, an endogenous change to the value of a variable occurs because of the values taken by other variables within the model.) In this way, interventions are “surgical,” in the sense that the usual causes of a variable, or of a variable’s taking a given value, are ignored or erased.
On this approach, “X causes Y” (a type-level claim about direct causation) means that, for at least some values that a variable X can take in a model, there is a possible intervention on X that, when all other variables in the model are held fixed at some value by interventions, would result in a change in the value of Y. The hypotheticals specifying the relations that hold between the variables in a model are stated as structural equations, which are asymmetrical: the values of the variables on the left-hand side of the “=” are determined by what is on the right-hand side. These equations comprise the model, which can also take a graphical form that indicates the dependency relations between the variables by means of “directed edges,” or arrows, connecting the variables.
Token causation is defined in terms of a directed path, or “sequence” of direct (type-level) causes. Relative to a given causal model, “X=x is an actual (token) cause of Y=y” iff:
(1) The actual value of the variable X is x, and the actual value of Y is y; and
(2) There is at least one directed path from X to Y for which an intervention on X will change the value of Y, given that other direct causes of Y that are not on this path have been held fixed at their actual values.
Importantly, this approach is perfectly capable of modeling causal relations in both deterministic and probabilistic systems. Also, when deciding whether a given model is appropriate, it is important that the equations not contain variables whose values correspond to possibilities that should be considered too remote—as Woodward puts it, “our causal judgments should be influenced just by those counterfactuals that concern only serious possibilities.”
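To make the intervention test concrete, here is a minimal sketch in Python (our own illustration, not Woodward's; the toy variables X and Y and the function name `run_model` are hypothetical):

```python
# Toy structural-equation model: X is exogenous; Y is determined by X.
# An intervention "surgically" sets a variable's value, erasing its
# usual endogenous causes.

def run_model(interventions=None):
    iv = interventions or {}
    v = {}
    v["X"] = iv.get("X", 1)  # exogenous; actual value is 1
    # Endogenous equation for Y, unless Y itself is intervened on:
    v["Y"] = iv.get("Y", 1 if v["X"] == 1 else 2)
    return v

# Actual values: X=1, Y=1.
assert run_model() == {"X": 1, "Y": 1}

# Some intervention on X changes the value of Y (there are no other
# variables here to hold fixed), so X counts as a cause of Y.
assert run_model({"X": 2})["Y"] == 2
```

The same pattern, scaled up, is all the models below amount to: each variable's equation is evaluated from its parents unless an intervention overrides it.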
How does all this apply to free will? Applying Woodward’s approach to cases that are prominent in free-will debates yields illuminating results. (We are not alone in judging Woodward-style interventionist approaches as relevant to free will—Jenann Ismael, for instance, has recently defended a compatibilist view about the ability to do otherwise along these lines.) In particular, we think that an interventionist approach can shed light on Frankfurt cases, manipulation cases, and the counterfactuals that people seem to have in mind when they think about deliberation and choice.
In a Frankfurt case, the counterfactual intervener (Black) is like a late preemptive cause (Mele and Robb cases could be handled slightly differently). If Black has access to information, e.g., a prior sign (PS=2), that suggests, for instance, that Franny is about to decide to return the money in a wallet she finds (FD=2), then Black will intervene (BL=1) to ensure that Franny decides to keep it (FD=1), such that she steals by keeping the money (KM=1). Otherwise, PS=1, Franny decides to keep the money (FD=1), Black sits idly by (BL=2), and the money is stolen (KM=1). So, the entire model looks like this (variables, equations, graph):
PS=1 if prior sign occurs that Franny is about to decide to keep the money; PS=2 if prior sign occurs that Franny is about to decide to return the money
BL=1 if Black intervenes to ensure that Franny decides to keep the money; BL=2 if Black sits idly by without intervening
FD=1 if Franny decides to keep the money; FD=2 if Franny decides to return the money
KM=1 if Franny keeps the money; KM=2 if Franny returns the money
PS=1 or 2
BL=1 if PS=2; else 2
FD=1 if (PS=1 or BL=1); else 2
KM=1 if FD=1; else 2
PS=1; BL=2; FD=1; KM=1
Graph: PS → BL, PS → FD, BL → FD, FD → KM
But in the causal model, we want to see what happens when we intervene on the value of Franny’s decision (FD) while holding fixed the other direct causes of KM, since that is how we test the causal (or difference-making) powers of variables. In this case, there are no other direct causes of KM, so when we intervene on FD we erase not only PS but also BL as direct causes of FD. Erasing or ignoring these variables as inputs into FD is necessary in order to assess whether FD=1 is an actual cause of KM=1, that is, whether Franny’s decision, considered on its own (ignoring Black), is the relevant causal variable in the situation. And note: FD is here allowed to range freely over two values, representing Franny’s deciding either (a) to keep or (b) to return the money, so causal modeling gives us a way of saying that Franny’s doing what she does, rather than something else, is the difference-maker in what happens in a Frankfurt case. This is because when we focus on Franny’s decision, we screen off, or ignore, the variable representing Black’s intervention. The modeling illuminates the intuition that Franny is the difference-maker in the actual case, even though there is another potential cause that would ensure the same outcome.
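The Frankfurt model and the intervention on FD can be rendered as a short Python sketch (our own code; the function name `frankfurt` is hypothetical, while the variables and equations are those of the model above):

```python
# The Frankfurt model: PS (prior sign), BL (Black), FD (Franny's
# decision), KM (keeping the money). An intervention overrides a
# variable's equation, erasing its usual causes.

def frankfurt(interventions=None):
    iv = interventions or {}
    v = {}
    v["PS"] = iv.get("PS", 1)                                # actual: sign says "keep"
    v["BL"] = iv.get("BL", 1 if v["PS"] == 2 else 2)         # Black acts only if PS=2
    v["FD"] = iv.get("FD", 1 if (v["PS"] == 1 or v["BL"] == 1) else 2)
    v["KM"] = iv.get("KM", 1 if v["FD"] == 1 else 2)
    return v

# Actual values match the model: PS=1, BL=2, FD=1, KM=1.
assert frankfurt() == {"PS": 1, "BL": 2, "FD": 1, "KM": 1}

# Intervening on FD erases PS and BL as its causes; flipping FD flips KM,
# so FD=1 is an actual cause of KM=1: Franny's decision is the difference-maker.
assert frankfurt({"FD": 2})["KM"] == 2

# The late-preemption structure: had the sign said "return" (PS=2),
# Black would have intervened and the money would still be kept.
assert frankfurt({"PS": 2}) == {"PS": 2, "BL": 1, "FD": 1, "KM": 1}
```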
Indeed, we think that this model captures what Frankfurt was trying to bring to light with his cases—that questions about metaphysical alternatives distract us from focusing on the causal powers of agents.
Now, let’s apply causal modeling to the sort of cases used in Manipulation Arguments (and Zygote Arguments). Here, it gets tricky. Just as we ignored BL in the previous model, one might argue that we should ignore DN, the variable representing the causal role of Diana (i.e., the powerful goddess or neuroscientist who manipulates or creates an agent Manny so that he will decide to perform a specific act in a specific way, such as keeping money in a found wallet), and then consider what would happen if we intervene directly on the variable representing Manny’s decision (MD). If we did that, then because Manny is supposed to act from his compatibilist capacities, if we intervene in certain ways, for instance on his reasons, then he will decide differently (i.e., MD=2 rather than MD=1), such that he does not keep the money (KM=2 rather than KM=1). On this model, it looks like Manny’s decision-making capacities count as a difference-maker in whether the money gets stolen. Here’s the model, which is extremely simple:
DN=1 if Diana creates Manny so that he will decide to keep the money; DN=2 if Diana creates Manny so that he will decide to return the money
MD=1 if Manny decides to keep the money; MD=2 if Manny decides to return the money
KM=1 if Manny keeps the money; KM=2 if Manny returns the money
DN=1 or 2
MD=1 if DN=1; else 2
KM=1 if MD=1; else 2
DN=1; MD=1; KM=1
In the actual case, DN=1 (rather than DN=2) is an actual cause of KM=1 (rather than KM=2). But when we want to find out whether MD=1 is an actual cause of KM=1, we have to intervene on MD, which requires ignoring or erasing the variable DN. And when we intervene on MD and change its value to 2, we find that the value of KM subsequently changes to 2. Thus, MD=1 is an actual cause of KM=1.
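The Manny model is even shorter in code (again our own sketch; the function name `manny` is hypothetical, the variables and equations are the model's):

```python
# The manipulation model: DN (Diana), MD (Manny's decision),
# KM (keeping the money).

def manny(interventions=None):
    iv = interventions or {}
    v = {}
    v["DN"] = iv.get("DN", 1)                         # actual: Diana creates "keep" Manny
    v["MD"] = iv.get("MD", 1 if v["DN"] == 1 else 2)  # MD depends on DN
    v["KM"] = iv.get("KM", 1 if v["MD"] == 1 else 2)  # KM depends on MD
    return v

# Actual values: DN=1, MD=1, KM=1.
assert manny() == {"DN": 1, "MD": 1, "KM": 1}

# Intervening on MD erases DN; flipping MD flips KM, so MD=1 is an
# actual cause of KM=1: Manny's decision counts as a difference-maker.
assert manny({"MD": 2})["KM"] == 2

# DN=1 is also an actual cause of KM=1, via the path DN -> MD -> KM.
assert manny({"DN": 2})["KM"] == 2
```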
Indeed, a “hard-line” compatibilist could use this modeling to defend the claim that there is no relevant difference between Manny and his deterministic twin, Danny. They both have the same compatibilist capacities (and here we see how that means they share the same causal powers), so they are both free and responsible (or at least we have no new reason to think Danny is unfree or determinism is incompatible with free will or moral responsibility). It is entirely appropriate to model Danny’s decision-making in a deterministic universe by considering interventions on his reasons, and one can also model Manny’s decision-making in this way.
Of course, this way of modeling the case explicitly cuts out the manipulator, Diana, her causal influence on Manny, and her goals for what he does. Doing so should indeed drain the intuitive appeal of the crucial premise of Manipulation Arguments that says Manny is not free or responsible for his action of stealing the money. But we think this intuition about Manny is very hard to shake, and we think the best explanation for why this intuition is hard to shake is that the counterfactuals in which Diana is irrelevant are difficult to “take seriously” in the context of the cases.
Whereas it is easy enough to consider what Franny would do with Black out of the picture (that’s the way the case is set up!), it’s much less clear how to consider what Manny would do with Diana out of the picture (unless one succeeds at considering him just like Danny, as suggested above). The way the case is presented suggests, instead, that as one considers interventions on Manny’s reasons, one also considers that Diana, given her power and goals, would offset any such intervention, for instance by bypassing his reasoning or cutting off his access to alternative reasons that might lead him to decide not to steal, or perhaps by foreseeing all such cases and preventing the creation of any non-stealing Manny. This, of course, is the way we think about real-world manipulation, in which the manipulator has counterfactual control, adjusting the dupe’s situation (or brain) as necessary to ensure the desired outcome. (That’s the basic idea I suggested in this old post.) In fact, it is easy to read these cases as implying: “Manny exists only if he does exactly what Diana intends.”
(We invite those who are doing experimental work on manipulation cases to describe their results in the comments. We interpret the existing results as supporting these intuitive models of the cases, in that they suggest that, as cases present the manipulator or creator as having less power over, or lacking intentions for, the agent’s action, people’s judgments of the agent’s free will and responsibility increase.)
Finally, relating these ideas to the prior post, we think the sort of counterfactual reasoning involved in causal modeling and interventionist theories also suggests a way of thinking about being free to do otherwise in deliberative contexts.
In Oisin’s recently defended dissertation, he provides a deflationary explanation of a major motivation for libertarianism—namely, the experience of choice—by using causal modeling together with an account of prospection—i.e., the mental simulation of future possibilities for the purpose of guiding action (this recent paper by Seligman, Railton, Baumeister, and Sripada gives a nice overview of how prospection works and its connection to free will). Crucially, prospection can be experienced, and because of the way in which the hypotheticals generated in prospection should arguably be modeled in an interventionist framework, it’s easy for deliberating agents both to experience their choice as indeterministic, and to believe that their choice is indeterministic (and therefore libertarian), even though the modeling itself is consistent with determinism.
Prospection treats the event of one’s making a choice as an exogenous variable in a model of prospected outcomes, and (as we’ve seen) this requires erasing its causal antecedents. This way of modeling our making choices is important for two reasons.
First, it makes it easy for an agent to experience (and to think of) her prospected alternatives as ones she can “get to” in the sense libertarians want to pick out, and which they claim might be provided for by indeterminism. In other words, it might easily seem that the future is open in a way that would require indeterminism for the experience to be accurate. Even so, all that’s going on is that the agent is considering what outcomes she can cause, depending on which choices she makes, which requires letting the event of her choice range across more than one value in order to prospect (or imagine) the downstream effects of her choosing in different ways. Modeling her own choice situation in this way is no different in kind from modeling how various contingent events, such as different paths of an approaching hurricane, would cause various outcomes, such as damage to different cities.
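The structure of this kind of prospection can be sketched in the same style as the earlier models (our own toy illustration; the function name `prospect` and the choice labels are hypothetical):

```python
# Prospection sketch: the agent's choice is treated as exogenous (its
# antecedent causes are erased) and allowed to range over its values,
# so she can simulate the downstream effects of choosing differently.

def prospect(choice):
    """Structural equation for the prospected outcome, given a choice value."""
    return "money returned" if choice == "return" else "money kept"

# Deliberation: let the choice vary and compare the outcomes it would cause.
prospects = {c: prospect(c) for c in ("keep", "return")}
assert prospects == {"keep": "money kept", "return": "money returned"}
```

Nothing in this modeling says whether the choice variable's erased antecedents are deterministic; the erasure is a feature of the model, not of the world.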
Second, prospection models the event of one’s choice in a given episode of deliberation as exogenous, and thus as carved off from its antecedent causes and allowed to vary freely across a range of values. If an agent experiences deliberation in this way, then she likely experiences her choice as not having antecedent sufficient causes. And a choice that is experienced as not having antecedent sufficient causes is, a fortiori, experienced as not having antecedent deterministic causes.
Even so, belief in libertarian freedom is not justified by such experiences. For one thing, the hypotheticals generated in prospection are subjective and relative, and so they can’t support the view that the future is, in fact, indeterministically open. This is partly due to the epistemic nature of the possibilities that such hypotheticals capture, and epistemic possibility is obviously compatible with determinism. Moreover, even if one introspects that one’s choice seems to lack deterministic causes, it doesn’t follow from this that it actually lacks such causes, since many of the causes might not be introspectible. Finally, modeling our deliberation in this way illuminates how the process of deliberation itself is a causal difference-maker—prospection, perhaps experienced as indeterministic, is an essential causal contributor to what happens in the world, even if the world turns out to be deterministic.