I remember that some years ago the NYTimes Magazine ran a Peter Singer article about the moral obligation to give to charity ("The Singer Solution to World Poverty," which reprises the argument of his classic "Famine, Affluence, and Morality"). What was most striking to me was not the article so much as people's reactions to it, a large number of which were along the lines of, "Well, how much does he give to charity?"
Now, to me, this response illustrates a presupposition about normative statements that I find ridiculous: namely, that I can't say that something is good unless I do it myself.
Why is that? I mean, isn't whether or not something is good independent of my actions? I'm not God; I certainly don't bestow the moral property of goodness upon actions by performing them. So what relevance does my behavior have to the 'objective' judgment of whether an action is good or bad?
There are many reasons why someone might fail to do something they believe is good: weakness of will, competing obligations, simple selfishness. Whatever the reasons, though, none of them need relate to the moral worth of that action, and I think we are petty and defensive, and ultimately miss the point, when we interpret normative statements this way.
In practice, Vishnu is absolutely right that people usually invoke tu quoque (and ad hominem more generally) as a cheap way of dismissing an argument without dealing with its substance.
However, sometimes such dismissal is entirely rational; this becomes apparent when you stop thinking like a philosopher and start thinking like a computer scientist (or, I suppose, a behavioral economist, though I'm not one of those so I'm just guessing).
When somebody comes to you with a proposition intended to alter your behavior, they are asking you to expend effort not only in changing your behavior, but also in figuring out whether their reasoning is sound and consistent with your principles. Thoroughly investigating any nontrivial proposition can require re-examining the entire edifice of your belief structure, and all the empirical evidence available to you, to test its validity. Figuring out the "right answer," or even an approximately correct provisional one, therefore requires a lot of thinking. Computation costs. Only saints and lunatics have the time and energy to do this every time someone confronts them with a proposition.
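To make the tradeoff concrete, here's a toy model in Python. Every number and name in it is invented for illustration; it's a sketch of the cost structure, not a claim about real cognition.

```python
# Toy model: when full verification is expensive and most incoming
# propositions are unsound, a cheap, fallible pre-screen can beat
# auditing everything. All numbers are made up for illustration.

VERIFY_COST = 100.0       # effort to fully audit one proposition
SCREEN_COST = 1.0         # effort to run a cheap heuristic (e.g. tu quoque)
VALUE_IF_SOUND = 600.0    # payoff of adopting a sound proposition
P_SOUND = 0.05            # prior: most propositions pushed at you are unsound

# Heuristic quality: probability the screen passes a proposition
# through to a full audit, given that it is sound / unsound.
P_PASS_GIVEN_SOUND = 0.9
P_PASS_GIVEN_UNSOUND = 0.2

def audit_everything() -> float:
    """Expected net payoff per proposition if you fully verify each one."""
    return P_SOUND * VALUE_IF_SOUND - VERIFY_COST

def screen_then_audit() -> float:
    """Expected net payoff per proposition if you screen first and audit
    only the survivors (sound ideas the screen rejects are simply lost)."""
    p_pass = (P_SOUND * P_PASS_GIVEN_SOUND
              + (1 - P_SOUND) * P_PASS_GIVEN_UNSOUND)
    expected_gain = P_SOUND * P_PASS_GIVEN_SOUND * VALUE_IF_SOUND
    return expected_gain - SCREEN_COST - p_pass * VERIFY_COST

if __name__ == "__main__":
    print(f"audit everything:  {audit_everything():+.2f}")   # -70.00
    print(f"screen then audit: {screen_then_audit():+.2f}")  #  +2.50
```

With these (made-up) numbers, auditing every proposition is a net loss while screening first comes out slightly ahead. The heuristic earns its keep not by being accurate, but by discarding the common bad case cheaply.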
So, before investing this labor, people pre-screen propositions with some cheap, fallible heuristics that work well in practice. One of those heuristics is to examine the past behavior, reputation, credentials, etc. of the person who's bringing you the proposition. If Alice tells Bob that he's morally obligated to behave a certain way, at least one of the following must be true:
- Alice behaves in the way she advocates. In this case, there is no tu quoque problem, though there may be other reasons for Bob to reject Alice's argument.
- Alice is too weak, too evil, or both, to behave in the way she advocates. In this case, Alice now has a hard philosophical problem. Morality does not require one to do the impossible ("ought implies can"); if Alice wishes Bob to change his behavior, she must demonstrate that the weakness or evil in herself is not a reflection of universal human limitations. If she cannot explain why Bob can be stronger or less evil than she is, she cannot convince Bob to alter his behavior.
- Alice does not believe that Bob is morally obligated to behave the way she advocates, but she is trying to deceive Bob into believing he is. In this case, Alice's argument may be sound, but if so, it is sound only by accident (presumably Alice has a reason for disbelieving her own argument). Now, intuitively, "accidentally true" statements are much rarer than statements that are "true by construction" (one can formalize this intuition in most logical systems; a probabilistic sketch follows this list). Therefore, it is unlikely to be worth Bob's time to pursue Alice's line of reasoning.
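Here is one way to cash out that intuition, offered as my own gloss rather than the unique formalization. Let T be the event that Alice's claim is true, and C the event that Alice has honestly checked her argument and believes it. If honest checking is even weakly truth-tracking, then belief-after-checking is positively correlated with truth, so

    P(T | C)  >  P(T)  >  P(T | not C),

where the second inequality follows from the first by the law of total probability. In the deception case, Bob is conditioning on "not C," so his estimate that the claim is true drops below the base rate, and with it the expected payoff of auditing the argument.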
I'm simplifying a bit in the above, but basically Alice has a long row to hoe unless she modifies her behavior (or suffers from a special disability that exempts her from this obligation, which raises further problems that I won't get into here). Pile on the fact that, empirically, the deception case is pretty common in the Hobbesian world we inhabit, and tu quoque starts sounding like a great strategy for pre-screening ideas. Think of the car salesman who's never bought the brand of car he sells; he'd better have a pretty good story about why you should buy what he's selling even though he doesn't.
This "computational cost" effect in belief adoption partly explains why it's so much harder to convince people of truths that demand change than truths that support their status quo. Beliefs that fit in with our existing beliefs cost little to adopt, so we just accept them. Beliefs that demand change require lots of thinking, so we pre-screen them more aggressively using incomplete heuristics like tu quoque before even giving them a fair shot. It's not fair, but if you want to change the world, you'd better hold yourself to a much higher standard than if you just want to go along to get along.