My name is Brian Parks; I’ve been posting sporadically here on GFP for a few months now.
Anyway, I’ve written a paper on moral responsibility (taking the pessimist, anti-moral-responsibility stance) with some ideas that I’d like to introduce to the group. I was hoping you all might be so kind as to read the paper (or just briefly skim parts of it) and share reactions/comments/criticisms. Maybe we can get a discussion going. I’m in a different field, so I haven’t yet had the opportunity to bounce any of these ideas off of philosophers.
Here is the abstract to the paper (the PDF link is at the end of this post):
In this paper, I defend The Basic Argument, Galen Strawson’s argument that ultimate moral responsibility is impossible. I begin by recasting the argument into a rigorous form that resolves a minor problem associated with the original version. I then support the argument with a thought experiment—called the ‘self-switching’ scenario—in which I ask the reader to imagine switching places with another human being and making decisions from inside that human being’s psychological perspective. Appealing to Rawlsian principles, I argue that we cannot reasonably assert that another human being genuinely deserves a punishment that we ourselves do not deserve unless our assertion passes the ‘self-switching’ test: that is, unless there is some relevant sense in which the decision would have turned out differently if we were to have made it under identical internal and external circumstances. I conclude that because the ‘self-switching’ test cannot be satisfied on any account of human agency, we cannot reasonably assert that any individual genuinely deserves any punishment. I end the paper by addressing the criticisms of Fischer and Ravizza, Clarke, and Bernstein.
Here are some quick highlights not mentioned in the abstract that you all might find interesting/thought-provoking:
Section 3 (p.10)
I attack compatibilist accounts of moral responsibility with the following argument. If determinism is true, then it is always true that “if you had been in exactly the same condition as person X, and he had been in exactly your condition, then you would have done exactly what person X did, and he would have done exactly what you did.” I argue that if this statement is true, then it just doesn’t make sense to specifically blame person X for what he does (or to specifically blame you for what you do), or to say that he genuinely deserves a punishment that you don’t deserve (or to say that you genuinely deserve a punishment that he doesn’t deserve). Such an approach, I argue, basically amounts to the pot calling the kettle black (an approach that we would not take from a state of perfect impartiality, i.e., Rawls’ original position).
Section 4 (p.16)
I address John Martin Fischer and Mark Ravizza’s attack on the principle of transfer-of-non-responsibility (TNR). I formulate a preferred version of TNR,
(III) Let S1, S2, S3, … ,Sn refer to all of the states of affairs that have caused S. To be morally responsible for S, one must be morally responsible for at least one of S1, S2, S3, … , Sn, OR for the fact that they have caused S in this case.
I show how (III) can withstand their attacks. I then argue in favor of TNR using the aforementioned concept of ‘self-switching.’
Section 5 (p.17)
I address Randolph Clarke’s criticism of Strawson’s argument. I attack his integrated agent-causal account of free will, and then I introduce a manipulation scenario to clarify the attack.
The manipulation scenario goes like this. Suppose that God puts a radio-controlled neurochemical device in your brain that allows another person, Eric, to control your desires and beliefs. I ask, would you be ultimately responsible for your behavior in such circumstances? I argue that the intuitive answer is no (for all intents and purposes, Eric could get you to do anything he wants), but that Clarke’s account forces him to answer yes.
I then take the manipulation case further to attack all libertarian accounts of moral responsibility, or more specifically, to show that they are just as vulnerable to manipulation arguments as compatibilist accounts.
Here’s how I make the argument:
Suppose, in the scenario, that Eric wants to get you to kill person A. So he manipulates your desires and beliefs in a certain way. If determinism is true, then (in theory) he can guarantee, through proper manipulation, that you will kill person A. But if indeterminism is true, then he can’t make that guarantee. There will always be a non-zero probability that, when the time comes to make the decision, you will decide not to kill person A. But this doesn’t matter, I argue, because if you don’t choose to kill person A the first time through, all Eric has to do is steer you into a situation where you have a chance to kill person A again (or person B, or person C, it doesn’t really matter). In other words, all he has to do is recreate the scenario (or a similar scenario) over and over again. Eventually, with enough tries, you are going to kill someone. And when you do kill someone, it will have been a free choice (just as free as any other choice), and therefore you are going to be ultimately responsible for it (on a libertarian account).
The larger point, then, is this:
Any manipulation argument that can successfully invalidate deterministic accounts of moral responsibility can do the same for indeterministic accounts. The only difference between deterministic manipulation and indeterministic manipulation is how many attempts it takes to get the desired result. In deterministic manipulation, the desired result is guaranteed on the first attempt; in indeterministic manipulation, it is guaranteed only in the limit, as the number of attempts goes to infinity.
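The probabilistic step here can be made explicit with a standard calculation. This is a minimal sketch, assuming (as the scenario seems to require) that each staged attempt succeeds with at least some fixed probability p > 0, independently of the others:

```latex
% Probability that Eric succeeds within n staged attempts, assuming each
% attempt independently succeeds with probability at least p > 0:
\Pr(\text{success within } n \text{ attempts}) \;\ge\; 1 - (1 - p)^n

% Since 0 < 1 - p < 1, the failure term vanishes in the limit:
\lim_{n \to \infty} \bigl[\, 1 - (1 - p)^n \,\bigr] = 1
```

On these assumptions, indeterminism buys only delay, not escape: success is not guaranteed on any particular attempt, but it becomes certain in the limit as the number of attempts grows.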
Section 6 (p.22)
I address the position of Mark Bernstein, who attacks Strawson’s argument by suggesting that it might in fact be possible to be causa sui. I argue that Bernstein misunderstands what a causa sui is, and so I take the opportunity to clarify what that term means.
This is what it means: a causa sui is an agent who engages in an action that creates the very motivations (or reasons, or personality traits, or mental nature, etc.) that led to that same action.
I perform action X. Action X has the effect of generating reason R in my mind. But wait: somehow, reason R was the reason I performed action X! (a ‘Strange Loop,’ in Hofstadter’s sense)
I go on to argue that, contra Strawson, even a causa sui of this sort would not be capable of moral responsibility.
I explain all these things more clearly and precisely in the essay.
Anyway, please take a minute and skim whatever parts of the essay interest you, and let me know what you think. Feedback would be greatly appreciated. Here is the link: