My friend MS has a brother in the Peace Corps who recently commented on the tu quoque fallacy:
I remember that some years ago the NYTimes magazine ran a Peter Singer article that talked about the moral obligation to give to charity ("The Singer Solution to World Poverty"). What was most striking to me was not the article so much as people's reactions to it, a large number of which were along the lines of, "Well, how much does he give to charity?"
Now, to me, this response illustrates a certain presupposition about normative statements that I think is ridiculous, namely that I can't say that something is good unless I do it myself.
Why is that? I mean, isn't whether or not something is good independent of my actions? I'm not God; I certainly do not bestow the moral property of goodness upon actions by performing them, so what is the relevance of my behavior to the 'objective' judgment of whether an action is good or bad?
Whatever the reasons, though, none of them need relate to the moral worth of the action in question, and I think we are petty, defensive, and ultimately miss the point when we interpret normative statements this way.
In practice, Vishnu is absolutely right that people usually invoke tu quoque (and ad hominem more generally) as a cheap way of dismissing an argument without dealing with its substance.
However, sometimes such dismissal is entirely rational; this becomes apparent when you stop thinking like a philosopher and start thinking like a computer scientist (or, I suppose, a behavioral economist, though I'm not one of those so I'm just guessing).
When somebody comes to you with a proposition intended to alter your behavior, they are asking you to expend effort not only in changing your behavior, but also in figuring out whether their reasoning is sound and consistent with your principles. But thoroughly investigating any nontrivial proposition in life can require that you re-examine the entire edifice of your belief structure, and all the empirical evidence available to you, in order to test the validity of that proposition. Therefore, figuring out the "right answer", or even an approximately-correct provisional "right answer", requires a lot of thinking. Computation costs. Only saints and lunatics have the time and energy to do this whenever people confront them with a proposition.
So, before investing this labor, people pre-screen propositions with some cheap, fallible heuristics that work well in practice. One of those heuristics is to examine the past behavior, reputation, credentials, etc. of the person who's bringing you the proposition. If Alice tells Bob that he's morally obligated to behave a certain way, at least one of the following must be true:
- Alice behaves in the way she advocates. In this case, there is no tu quoque problem, though there may be other reasons for Bob to reject Alice's argument.
- Alice is too weak, too evil, or both, to behave in the way she advocates. In this case, Alice now has a hard philosophical problem. Morality does not require one to do the impossible; if Alice wishes Bob to change his behavior, she must demonstrate that the weakness/evil in herself is not a reflection of universal human limitations. If she cannot explain why Bob can be stronger and/or less evil than she is, she cannot convince Bob to alter his behavior.
- Alice does not believe that Bob is morally obligated to behave the way she advocates, but she is trying to deceive Bob into believing he is. In this case, Alice's argument may be sound, but if so then it is so only by accident (presumably Alice has a reason for disbelieving her own argument). Now, intuitively, "accidentally true" statements are much rarer than statements that are "true by construction" (one can easily formalize this intuition in most logical systems; exercise left to the reader). Therefore, it is unlikely to be worth Bob's time to bother pursuing Alice's line of reasoning.
I'm simplifying a bit in the above, but basically Alice has a long row to hoe unless she modifies her behavior (or suffers from a special disability that exempts her from this obligation, which raises further problems that I won't get into here). Pile on the fact that, empirically, the deception case is pretty common in the Hobbesian world we inhabit, and tu quoque starts sounding like a great strategy for pre-screening ideas. Think of the car salesman who's never bought the brand of car he sells; he'd better have a pretty good story about why you should buy what he's selling even though he doesn't.
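Since I invoked the computer-scientist's view, here's what the pre-screening heuristic might look like if you actually wrote it down. This is a hypothetical sketch of my own devising, not anything formal; the function and case names are invented, and it just mirrors the three Alice/Bob cases above.

```python
from enum import Enum, auto

class Verdict(Enum):
    ENGAGE = auto()        # worth the computation to evaluate the argument on its merits
    DEMAND_STORY = auto()  # advocate must first explain the gap between word and deed
    DISMISS = auto()       # likely deception; at best "accidentally true", not worth pursuing

def pre_screen(practices_what_she_preaches: bool,
               believes_her_own_argument: bool,
               explains_why_you_can_do_better: bool = False) -> Verdict:
    """Cheap tu quoque pre-screen for a proposition from an advocate ("Alice").

    Case 1: Alice lives by her own advice -- engage with the argument itself.
    Case 2: Alice believes her advice but fails to live by it -- she owes an
            account of why Bob can succeed where she fails; engage only if
            she supplies one.
    Case 3: Alice doesn't believe her own argument -- dismiss cheaply.
    """
    if not believes_her_own_argument:
        return Verdict.DISMISS
    if practices_what_she_preaches:
        return Verdict.ENGAGE
    return Verdict.ENGAGE if explains_why_you_can_do_better else Verdict.DEMAND_STORY
```

The point of writing it this way is that the whole screen runs in constant time, whereas actually evaluating the proposition (the `ENGAGE` branch) is the expensive, open-ended computation.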
This "computational cost" effect in belief adoption partly explains why it's so much harder to convince people of truths that demand change than truths that support their status quo. Beliefs that fit in with our existing beliefs cost little to adopt, so we just accept them. Beliefs that demand change require lots of thinking, so we pre-screen them more aggressively using incomplete heuristics like tu quoque before even giving them a fair shot. It's not fair, but if you want to change the world, you'd better hold yourself to a much higher standard than if you just want to go along to get along.
This is from Vishnu, whose connection in El Salvador was being flaky:
"Morality does not require one to do the impossible;"
Says who? I mean, many people conceive of the good as something unattainable that we must strive toward even though we will always fail in important ways. In fact, in light of articles such as Singer's, it is all the clearer to me that our human inclinations limit the extent to which we are able to do even the most basic of goods. For instance, most people would choose to live a life of relative luxury before donating funds to charities that save others' lives…
Sorry, I thought it was self-evident that morality does not require that one perform the impossible. If you see a building burning down, and the only way to save somebody trapped on the twentieth floor is to fly there, and no helicopter is available, then you are not obligated to flap your arms really hard in an attempt to fly, because doing so would be futile.
Similarly, certain extreme forms of Christian morality hold that feeling anger towards another person is a mortal sin. But it's impossible for human beings, as they are presently constituted, to be entirely free from anger, so it's absurd for God to condemn people to hell simply because they sometimes feel anger. No reasonable moral system can demand that people eliminate anger completely from their hearts.
One could try to make the case that one is morally obligated to try to accomplish things even when they are impossible to achieve. I value outcomes over intentions, so I don't really believe this.
A closely related argument, which is probably closer to what you mean in your comment, would be to claim that striving towards an unattainable ideal is a necessary component of accomplishing some attainable goods, which we are morally obligated to achieve. I don't believe this statement is necessarily true, but even if we grant its truth arguendo, a burden still falls upon Alice to demonstrate that such striving is possible and actually produces a good outcome. Again, if Alice is evil or weak, she must demonstrate why Bob can reasonably be expected to be less evil or less weak than she is.
p.s. I should also say that none of the above implies that I think that invoking tu quoque is, in any way, a just strategy for evaluating moral propositions. In fact, I specifically said that it was unfair and fallible. However, it is a rational strategy: a decision-making agent who wishes to behave morally in a complex world will get lots of value from employing it, because it costs little and eliminates a lot of unrealistic and deceptive propositions.
I also think that Singer's higher-order points, and Vishnu's, are 100% on-target, and that most people do not live up to their moral obligations w.r.t. charity, or any number of other areas. However, this raises some interesting questions about the relationship between morality, emotion, and action, which I am saving for another post.