Saturday, December 29, 2007

When do people prefer unauthorized copies? (a few hypotheses)

T. Cowen points out that people downloaded more unauthorized copies* of Resident Evil: Extinction than any other movie, making "most illicitly copied" a dubious proxy for actual popularity.

Here are several hypotheses on what media people are likely to copy without authorization (I'm sure that others have suggested these same hypotheses before):

Guilty pleasures
If you would feel ashamed to admit paying for something, you're more likely to download a free copy of it. Also, most legitimately purchased media have a visible footprint in either the physical or virtual world: anyone who looks can see the DVD box on your shelf or the movie download in your iTunes collection. But for unauthorized copies, the only footprint is a movie file tucked away in some corner of your hard drive. Normally, the signaling aspect of a media purchase is a feature, but for guilty pleasures, it's the opposite.
Low-quality media
Media companies sell basically all media of a certain type for a similar price (e.g., about $17 for a new film on DVD). Possibly this is due to fundamental fixed costs of production (a DVD costs $X to digitally master and $Y per unit to manufacture, transport, and warehouse); possibly it's because of social processes (the studio exec managing film X projected $Y margin per unit at retail, and cannot release it at a lower margin without losing status within the company). Regardless, when a product is of exceptionally low quality, more people will see the dollar price as unwarranted, and they will be more willing to spend time seeking out unauthorized copies (see the next point).
Youth culture
Young people have more time than money, and a large appetite for media (more time to watch movies when you don't have a full-time job, children, or other responsibilities). Obtaining an unauthorized copy of something is a trade of time for money: you spend the former to save the latter.
Low-availability media
If a media product is difficult to obtain via authorized channels --- for example, if a product is not available for sale in your country, but is available overseas --- then you are more likely to seek out an unauthorized copy.
Improperly bundled products
If a media product is an aggregation of many separable parts, some of which are much more desirable than the others, then people are more likely to seek out an unauthorized copy of the parts they like.

I've never seen Resident Evil: Extinction, but I'm pretty sure that it falls squarely at the intersection of the first three of the above categories. Anecdotally, I think that many people know someone who would never pay for a Britney Spears album but has some kind of lame excuse for having a few tracks on their iPod.

The above hypotheses have several corollaries.

First, if unauthorized copying truly reduces returns to creators so much that it discourages creative output, then overpriced crap aimed at adolescents and young adults will be the first to go.

Second, media companies can reduce the amount of unauthorized copying by a variety of straightforward means, including:

  • Reduce the price of crappy products. (Duh.)
  • Offer very low-cost versions of media that require a time investment. For example, offer a low-cost subscription service where you must play a game for a certain amount of time (think World of Moviecraft) in order to obtain a download of a media product. Consumers who value time over money will still buy the DVD. Consumers who value money over time will play the game.
  • Sell "brown sleeve" versions of guilty pleasures. Sell junky movies and music in a disposable cardboard sleeve instead of a DVD keep case or CD jewel case with album art. Or even sell them in deceptive packaging: reverse the cover insert for your Resident Evil DVD, and it can look like some depressing and obscure Swedish existentialist art film that nobody will ever want to pull off the shelf.

* I refuse to use the term "pirated", as it trivializes actual piracy (the sailing-ships kind), and blurs the important distinctions between different forms of intellectual property infringement.

Sunday, November 18, 2007

Pandora: awesome

I've been using the web since Netscape 1.0, and I've grown pretty jaded about new Internet technology. I got sick of beta testing the latest timewasting Internet fad sometime around 2001. As much as I write and think about technology (real and imagined), I've grown pretty reluctant even to try new technologies myself. And unless an application's reasonably polished and somehow delivers concrete value into my life, I won't give it a second glance.

I say all this as a preface to remarking that I tried the Pandora Internet music service today, and it delivers the goods. Its music recommendation technology is very impressive: type in a band name that you like, and it dynamically constructs a playlist of songs resembling the songs by that band. I was in the mood for some sunny pop music, so I typed in Saint Etienne, and instantly got a playlist of tracks I'd never heard (from artists both familiar and not), and that really sound like music that a Saint Etienne fan would like: Club 8, Stars, Magnetic Fields, Kanda, Brazilian Girls, etc.

I'm really curious about the guts of the so-called "music genome project" technology behind Pandora. Whatever it is, it works. If you're puttering around at home and just want to put on some background music for a particular mood, there's no need to construct playlists manually anymore.

Of course, as a hacker, I find the arbitrary restrictions --- you can only skip forward, and only skip a fixed number of times per hour --- mildly annoying. But if you view Pandora as a replacement for music radio stations, then it's a quantum leap forward.

My friend MS tried out Pandora some time ago. Being the music geek he is, he mentioned the potential for fiddling around with playlists to optimize your experience. But I didn't want another technology that I spend my time fiddling around with; I want technologies that give me maximum value for minimum time investment. Tragically, therefore, I didn't try it out until today. Rest assured that Pandora will repay the most minimal effort with a large payoff.

Saturday, November 17, 2007

Semi-synchronous telephony with call invitations

Consider the relative advantages of text messages and phone calls:

  • For activities requiring actual dialogue rather than notification, text messaging is incredibly inefficient: the interface is terrible and an exchange requiring N round trips requires 2N text messages, each of which requires that a human being fiddle with a phone and compose a new message. On the other hand, text messages do not require that the person on the other end be instantly available in order for the message to be received.
  • Conversely, phone calls are excellent for negotiation: an exchange requiring N round trips can be conducted within the space of 1 phone call. However, if the person on the other end isn't instantly available, then you are stuck either leaving voice mail (leading to the possibility of phone tag) or calling repeatedly, which is also incredibly annoying.

What's needed is a technology that merges the best of both worlds: the ability to do rapid synchronous dialogue, and the ability to initiate that dialogue without the human having to play phone tag or to call repeatedly.

As it happens, voice mail and calling repeatedly correspond exactly to the two usual mechanisms that computer communication protocols use when one process wants to receive a communication from another process but doesn't know when that communication will be ready. For the nerdy people who care, voice mail is registering a callback, and calling repeatedly is polling. But never mind the jargon: the important observation here is that computers already know how to do this sort of thing, and do it all the time, so it's stupid to make human beings do it.
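
To make the analogy concrete, here's a minimal sketch in Python of the two mechanisms. (The names are mine, invented purely for illustration; no real telephony API is implied.)

    import time

    def poll(is_ready, connect, interval=60):
        # "Calling repeatedly": the caller burns time checking back
        # until the other side happens to be free.
        while not is_ready():
            time.sleep(interval)
        connect()

    class VoiceMailBox:
        # "Registering a callback": the caller leaves a note once and
        # walks away; the callee triggers the connection when ready.
        def __init__(self):
            self.waiting = []

        def leave_message(self, connect):
            self.waiting.append(connect)

        def owner_becomes_free(self):
            for connect in self.waiting:
                connect()
            self.waiting.clear()

Both patterns are ancient, well-understood machinery in software; only in telephony do we still make the humans execute them by hand.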

Therefore I propose the semi-synchronous phone call, or "call invitation" for short. Here is how it works:

  1. Alice wants to talk to Bob. She sends him a call invitation.
  2. The invitation goes into both Alice and Bob's invitation inboxes. An invitation has two possible states for each user: "available" and "busy". By default, all invitations are initially available for the caller and busy for the callee.
  3. Bob now has three choices:
    • Bob can do nothing, which leaves it in his invitation inbox.
    • Bob can refuse the invitation, which deletes it from the system.
    • Bob can mark himself "available" for that invitation.
  4. Alice has the same three choices, except that she's initially available for the invitation. She will have to mark herself "busy" if she starts doing something that would stop her from talking on the phone.
  5. At any time, either Alice or Bob can toggle their status (available or busy) for the invitation. Talking on the phone implicitly toggles you busy for the duration of the call.
  6. If, at any time, both Alice and Bob have marked the invitation available, they receive a phone call connecting them.
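
For the nerds again: the whole protocol is a tiny state machine. Here's a minimal sketch in Python, with names of my own invention (the numbered comments refer to the steps above):

    class CallInvitation:
        AVAILABLE, BUSY = "available", "busy"

        def __init__(self, place_call):
            self.place_call = place_call              # connects both phones (step 6)
            self.state = {"caller": self.AVAILABLE,   # step 2: caller starts available
                          "callee": self.BUSY}        # step 2: callee starts busy
            self.refused = False

        def refuse(self):
            # Step 3: either party deletes the invitation from the system.
            self.refused = True

        def set_status(self, who, status):
            # Steps 3-5: either party toggles available/busy at any time.
            if self.refused:
                return
            self.state[who] = status
            if all(s == self.AVAILABLE for s in self.state.values()):
                self.place_call()                     # step 6: both available

    # Bob marks himself available while Alice still is; the call fires.
    inv = CallInvitation(place_call=lambda: print("ringing Alice and Bob"))
    inv.set_status("callee", CallInvitation.AVAILABLE)

The implicit rule in step 5 (talking on the phone marks you busy) would just be another set_status call, issued by the phone itself.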

Now, no doubt many similar proposals have been made in the past. I'm specifically aware of proposals in the ubiquitous computing community for communication devices that act differently when you're available than when you're busy. Some of the fancier proposals involve the device or the environment sensing (via sound, motion, or whatever) when you're doing an "interruptible" activity, and automatically marking you available or busy.

However, I am not familiar with any proposal that works exactly the way I propose, and I claim that even small deviations from exactly the above design would result in a system that people would hate.

To begin with, in my proposal, people explicitly mark their availability information. I believe that availability must be volitional. Imagine if your phone decided on its own whether to ring, and whether to vibrate or ring audibly. Or imagine if your front door decided when to open in response to a knock based on your past behavior towards that person. On a deep, primate level, humans do not feel emotionally secure when their social approachability is outside their control. Implicit signals like body language work for controlling approachability in face-to-face interaction only because humans can volitionally and unambiguously broadcast these signals, and because other humans react instinctively with extremely high fidelity.

Furthermore, in my proposal, availability is relative to each message. It is not a universal property of the user: there's no such thing as "your" availability, only your availability in response to a given message. This is important for several reasons. First, you never have to think about your availability when no invitations are pending in your inbox, which means you're not constantly toggling your phone into "available" or "busy" mode. Second, you're never broadcasting any information about your availability, which preserves your privacy.

Some proposals make a user's availability relative to a priority: you can say you're available for "high priority" calls, but not "regular priority" or "low priority" calls. Other proposals make availability relative to a user and a caller: you can put someone on your whitelist, which means that you always accept calls from them.

Such priority schemes sound like a good idea, but they would fail to solve the problem for two reasons. First, no code-based priority scheme could deal with the complexities of actual human interaction. Second, such priority schemes actually force the recipient to expose more information, with the putative aim of increasing privacy; this gets the problem fundamentally backwards. Consider the problem of setting your availability for calls from your in-laws, or someone you just started dating.

Every successful interpersonal communication technology leaves room for ambiguity in social interactions. Why didn't you answer the phone the other day? Maybe you were busy, or maybe you didn't want to talk to that person, or maybe you wanted to take the call but you're playing hard to get. Why didn't you respond to my text message/email/Facebook note? Maybe you haven't logged on lately, or maybe you can't stand me. Like it or not, human beings prefer the ability to deceive each other socially. My proposal preserves social deception in a way that priority schemes do not.

In fact, if there's any flaw in my proposal, it's this: I seriously think that many people choose media like text messaging, instant messaging, email, and social networking sites precisely because they do not want to talk, even when it would be more efficient. Talking on the phone forces you to interact even more instantly than instant messaging. It also exposes the vast amount of sub-verbal information that carries through your human voice whether you want it to or not, including, first and foremost, your emotional state. In the end, voice communication may be growing rarer precisely because it's not good enough at helping us conceal our true selves from each other.

Thursday, November 01, 2007

What's your encore? Do you, like, anally rape my mother while pouring sugar in my gas tank?

...the illuminating aspect of the source of this expostulation being that Dante, after suffering nonstop abuse from Randal up to the point of the outburst, remains his steadfast buddy for the remainder of the film.

The relationship to the conversation related in this post of Ezra Klein's is left as an exercise for the reader.

Sunday, October 21, 2007

Sullivan, Malkin, and justified belief

Henry at CT dares Andrew Sullivan to engage in substantive debate with Cosma Shalizi about statistics, which literally caused me to laugh out loud. Of course, the idea is ridiculous. Andrew Sullivan's a journalist with a Ph.D. in political philosophy, and I doubt he's done any math beyond basic arithmetic in decades. Like most elite journalists, he probably considers mathematics --- which is merely the language of the universe, after all --- unworthy of the hard work it would take to relearn it. Much easier to weave a bunch of tangentially related sophistry and hope your listeners get too distracted to notice you've simply evaded the main point.

On a similar note, Ezra Klein recently dared Michelle Malkin to debate him about health care. She ran away, of course. Ezra Klein is a huge policy nerd who spends his days reading white papers about health care. Malkin is a shrieking demagogue who spends her days copying and pasting Republican talking points and minimally digested links into her offal-trough of a blog. Nobody with half a brain earnestly believes that Malkin knows anything about health care policy. And Malkin has enough of a vestigial sense of dignity (even after doing this) to fear being completely humiliated by Klein. Her refusal was a foregone conclusion.

A common thread unites these two stories. Shalizi and Klein have made conscious life choices that narrow their respective subjects of expertise (with sometimes steep costs), and as a consequence they can speak about those subjects with authority. People who make such sacrifices do so because they care about the subject, which is to say that they care about the truth of the matter. They want to get things right, and to behave in a way that causes their audiences to become strictly better-informed than before.

Sullivan and Malkin are playing a fundamentally different game. Their mission is to attract attention and notoriety. They do not care whether their actions further the cause of knowledge, by which I mean justified and true belief. In fact, in their own ways, Sullivan and Malkin behave professionally in ways that directly inhibit the propagation of knowledge. Not coincidentally, this behavior has salutary effects on their careers.

Sullivan does not care whether a belief is justified and true; or, at least, he does not care enough to approach any given subject with the humility and patience to learn about it. He earnestly believes that there's value to stirring up the waters, even if the primary effect is to splash mud into people's eyes. This belief is convenient for him in two ways. First, it gives him permission to write about anything without understanding its substance; and being prolific is good for a pundit's career. Second, it places him at the center of controversies; and being part of a controversy --- regardless of the correctness of one's positions --- is also good for a pundit's career.

Malkin, for her part, understands that (like Jonah Goldberg) her job is not to investigate the truth of anything, but to spew a great volume of noise that echoes the prevailing Republican talking points. I don't doubt that, on some level, Malkin believes whatever she's saying at any given moment. However, the distinction between knowledge and faith is that knowledge is justified --- that there exists some combination of evidence and inference that logically leads one to the conclusion. Malkin has chosen to base her career instead on faith --- and not even faith in a stable set of fundamental principles, which might be admirable, but faith in the official Party truth of the moment. (Eastasia? We have always been at war with Eastasia.) As a propagandist, Malkin's mission is faith through noise --- produce enough noise, and you can reinforce people's beliefs in the Party without ever justifying those beliefs. Obviously, this behavior is good for her career. The more noise you produce, and the more closely you adhere to the Party line, the more likely that people in the market for such noise will choose to pay attention to you.

Sadly, Sullivan and Malkin are well-rewarded for doing what they do. In politics, the market for knowledge is much weaker than the market for noise. Knowledge is expensive and serves only itself, but noise is cheap and can be turned to the purposes of any buyer. We can heap contempt on Sullivan and Malkin, but I don't know how to change the underlying dynamics that elevate them to prominence.

Tuesday, September 25, 2007

The 6th Anniversary of $9.11

So, when I saw this a couple of weeks ago, it struck me as a rather weak C&G --- being a relatively by-the-numbers exercise in cynicism compared to the more inventive stuff the strip's creator otherwise produces.

However, today I see this and realize that Dorothy had it right all along.

Monday, September 17, 2007

C. Shalizi on Iraq, econophysics

Further evidence that C. Shalizi's blog, however infrequently updated, is essential reading:

After reading a few of Cosma's essays, any person of merely ordinary erudition must wonder how he finds the time to know as much as he does. As it happens, he has a section of his FAQ devoted to this subject, and the price is steep. On the margin, however, I suspect many people would benefit from becoming slightly more like Cosma.

Sunday, August 26, 2007

Further evidence that your opinion of the conservative movement is too generous

Sometimes I wonder if I'm too uncharitable to the conservative movement in this country. Sometimes I wonder if it's really fair to characterize the right as an unholy alliance of plutocrats, theocrats, totalitarians, bigots, crooks, and thugs. However, I have to admit, this surprised even me. If anything, my opinion of conservatives is clearly too high.

One lone nut case waving around machine guns on stage is kind of disturbing, but relatively inconsequential. What's far more disturbing is the room full of cheering fans, and the fact that the Wall Street Journal publishes his writing.

This is also why any false equivalence between nut case leftists and nut case right wingers is ridiculous. It is not the nut cases who matter; it is the relationships between nut cases and mainstream movements that matter. Nut case leftist loons who prance around on stage with weapons, yelling their fantasies about assassinating Presidential candidates and Senators, get excluded from the mainstream discourse pretty quickly. Yet nut case right wing loons like Nugent, Ann Coulter, and Rush Limbaugh remain tightly coupled to the mainstream conservative movement --- not marginalized, not shamed, but given voice in influential outlets where they can reach millions of people.

Saturday, August 25, 2007

A feature, not a bug

Yglesias writes, in the context of the growing movement in Washington to depose al-Maliki and re-install Allawi:

I find it hard to find words to describe what a disaster it may be if the US ends up engineering the return to power of a grossly unpopular ex-Baathist ex-Prime Minister. It's as if people are trying their hardest to come up with policies designed to end with Muqtada al-Sadr marching at the head of a crowd shouting "Death to America" into the rapidly abandoned Green Zone sometime in 2010.

In all likelihood, the President and Congress in 2010 will both be Democratic. Worsening the objective situation in Iraq in 2010 would be a feature, not a bug.

Of course, the Iraq war was largely architected and executed by Republicans, in a time when the Republicans controlled both the White House and Congress. It would be logical to blame Republicans for the consequences. However, Republicans are working hard to crystallize the "stab in the back" narrative --- i.e., that the war could have succeeded, if only it weren't for those meddling critics --- in the public consciousness. If this propaganda effort succeeds, then worsening the consequences of our inevitable withdrawal would pay political dividends for Republicans.

So far, I don't think they're succeeding, except among the "28 percenters" who believe basically anything that the right-wing noise machine spews out. But there's still a long way to go until withdrawal. And using foreign policy for political ends is hardly out of character for this administration.


UPDATE 2007-08-26: OK, in this post, my cynicism got the better of my sense. What can I say; I was in a bleak mood. Sometimes you can be too cynical. Truthfully, I don't think that this administration consciously wants to worsen long-run outcomes in Iraq.

However, I do think they're determined to extend the occupation until Bush leaves office. If they accomplish this, then one of two things will happen. Either the next administration will initiate the withdrawal, causing the aftermath to play out under their watch; or the next administration will remain in Iraq, extending Bush's running long-shot gamble that something good will happen someday. In either case, the outcome can be spun into reduced blame for the Bush administration. Probably, they even believe in earnest that the next administration would deserve the blame for a disastrous withdrawal.

Given that the desired outcome is to remain in Iraq until 2009 at all costs, the question becomes how to maintain some illusion of progress, some hope for imminent improvement, however flimsy, in the short term. Hence ongoing noise about ousting al-Maliki; hence the noise about Petraeus's September report; and hence, and hence, and hence.

When you're grasping at straws like this, it becomes easy to disregard the long-run consequences of your actions. It is not active malice that drives our astonishingly bad Iraq policy. It is selfish shortsightedness. But that's hardly better in the end, is it.

Friday, August 24, 2007

+1 for the experts

After my last post, you will find me profoundly unsurprised by PZ Myers pointing to a science journalist's mangled account of some paleontology research.

What's interesting is that the headline here was especially egregious, and it's generally editors who write headlines, not the reporters. This parallels the Michael Skube case, where the editor suggested some of the factually incorrect insertions to Skube's article.

One of the structural advantages claimed by proponents of old-style journalism is that editors impose quality control. And editors may be extremely valuable when they are subject matter experts --- for example, at scientific journals, where the editors have roughly the same credentials as the researchers. But in journalism, an editor's typically a generalist who knows even less than the reporter about the subject at hand. At least the reporter talked to the primary sources. If reporters work from secondhand knowledge, then editors do their work based on thirdhand knowledge. Why should we believe that editorial review by non-expert editors improves news coverage more often than it degrades it? Editors may improve the quality of the prose, but do they really, on average, improve the accuracy of the articles?

Wednesday, August 22, 2007

Expertise and journalism

Friends know that I've long held that the biggest problem with reporters is that most of them are not experts in anything in particular, which makes them gullible, easily manipulated, and prone to mangling the facts. Recently, there have been several worthwhile posts in this vein from Brad DeLong, Matt Yglesias, and (less directly) Ezra Klein.

Here's an excerpt from an actual email that I wrote two years ago, to an acquaintance who works at a major national magazine, discussing Michiko Kakutani's review of Bill Clinton's autobiography, which Brad DeLong discussed some time ago:

Moving along, if the Times prints book reviews in the daily paper, but never solicits such reviews from people who actually know something about the material in question, then that is a structural problem with the Times. Likewise if the Times rushes articles into print too soon. Perhaps I'm naive and don't know why the newspaper business is set up as it is. Perhaps there are good reasons, and not just ingrained shibboleths, causing the Times to use staff writers exclusively for the daily paper (outside of Op-Ed).

More broadly, I think the fact that journalists are mostly smart generalists is a huge weakness of the press. Save for a few geniuses, generalists are basically obsolete in every other field. Journalism should join modernity and take specialization of labor seriously, with all the attendant educational requirements. I can't count the number of times I've winced at some clueless description of computing technology in the general press. DeLong's frequent outrage at the economic ignorance (and general innumeracy) of the press shows that economics coverage is, if anything, much worse. Kakutani should not be reviewing history; she should be confined to literary fiction, where all the training one needs is to have read a great many books.

Newspapers hire full-time reporters who are not expert in any field because that has historically been an efficient way to produce news articles on deadline. At a high level, a major newspaper hires a hundred people who each write a few articles a week, instead of hiring ten thousand people who each write a few articles every hundred weeks. People assume this is simply the natural order of things, but it's not obviously the better choice: ten thousand experts can give you more substantive coverage of far more subject areas than one hundred generalists. It's true that one percent of a journalist's annual salary is a relatively small sum to pay a freelance expert for a couple of articles a year, but many experts care enough about communicating their subject matter to the public that they would probably work at cut rates (or, as blogs demonstrate, for no pay at all).

So why should newspapers be organized as small bands of generalists, instead of large federations of experts? I think the answer is coordination costs: you would need to maintain the world's biggest rolodex to keep up with all these experts, and aggregating the irregular work output of ten thousand people requires much more sophisticated planning than aggregating the regular output of one hundred people.

But coordination costs are an artifact of available technology. Mechanisms exist today that coordinate the output of many more than ten thousand voices. It's true that no existing system can serve as a drop-in replacement for the modern news reporting organization. However, the field of software-assisted content aggregation is young, and I believe that someday someone will figure out how to exploit the world's existing network of experts directly for reporting on complex issues, rather than relying on secondhand summaries of expert knowledge.


On a slightly related note, it's always laughable to me when I read about junk like this (and more! via Atrios; and more!), where some journalist sniffs contemptuously at the unwashed blogging rabble, and then gets promptly pwned (sorry, that truly is the mot juste).

What's especially rich to me is that, unless I misread Michael Skube's biography, Skube hasn't done hard investigative or beat reporting (at least, not full-time) for about two decades. His biography describes his freelance and city paper reporting from 1975 to 1982, but from 1982 onward his roles are described variously as an editorialist, book editor, columnist, and critic. In other words, it appears that he's spent the better part of his adult life doing exactly what he accuses bloggers of doing: sitting on his ass in a chair, reading what other people write, and commenting on it.

Tuesday, August 21, 2007

In which I toss comparative advantage to the wind

Attention conservation notice: Nothing but a link to an alternate blog that bears at best a distant topical relationship to this one.

I am a reasonably competent writer and programmer. Nevertheless, for some reason I have not been writing much lately (and not finishing the things I start); and as for programming, I code so much at work that, at the end of the day, that particular mental muscle prefers to sack out on the couch.

So, I have lately taken up drawing a comic strip instead, which I offer with neither apology, nor remorse, nor particular endorsement of its quality. It's kind of fun to be doing something that I am not particularly good at. Expect updates every Tuesday.

Be warned that in my writing, I care about getting things right, whereas with the strip I deliberately plan to privilege regularity and volume over quality. One interesting thing about drawing is that even if you don't have any ideas, you can kind of bullshit around with your instrument and learn something about the craft by observing your mistakes: how not to cross hatch a circular area, how not to draw a foreshortened hand, how not to balance light and dark areas on the page. By contrast, bullshitting around in prose fills me with a moral, existential horror.

Anyway, enough expectations management. By this point you know whether you care or not. I now return you to the usual regimen of irregularly updated spleen.

Tuesday, July 31, 2007

Why Intentionalism Is So Popular, Part 1

Yesterday, I explained why intentionalism is obviously a fallacy. But if intentionalism is so obviously a fallacy, why do so many otherwise intelligent people believe in it?

Why, for example, do so many people want so fervently to know what was in an author's head when (s)he wrote a novel, or poem, or other literary work? There are roughly two reasons that I can think of. This post is about the first.

In short, most people do not care about the text at all. They care about the human emotional connection they feel to the author. The novel, or poem, or whatever, is not really an independent object, with its own life, but rather a vehicle for the author's magical transcendent essence, in the same way that a french fry is less a potato than a starchy matrix convenient for delivering grease and salt.

Why this indifference? Because in most other contexts where people use language, they don't care about the text either.

If somebody says "I love you", then you do not care about the words, whose denotation is trivially obvious (i.e., that the speaker feels a powerful and deep-seated sense of caring/desire/etc. towards you). Rather, you care about the exact contours of the feeling causing that person to say that. If, at the moment those words are uttered, the only thought in the speaker's head is "I really want to get into your pants" or "I just feel obligated to say this because you said 'I love you' ten seconds ago" then the words themselves make no difference to your life.

And as this example illuminates, in most utterances, people use language so carelessly or deceptively that one would have to be a fool to care about the meanings of words. Instead, one must treat words as a scrap of evidence, among many other scraps, to apply towards the divination of the speaker's thoughts.

This deliberate disregard for the meanings of words is instrumental: in order to survive in a world full of other human beings, one must develop and practice this skill constantly. Naturally, having grown so used to piercing through the vaporous veil of language to the bloody guts of intention, people instinctively apply the same heuristic to literary works. In doing so, they confuse literary interpretation --- the task of evaluating the meanings of texts --- with participation in a human relationship --- which often relies on ignoring the meanings of texts where the meanings do not correspond to an authentic intention.

In other words, people who fret about intention are mistakenly treating a literary text as a kind of speech act other than it is. This is an error akin to confusing an imperative ("Take that hill, soldier!") with an interrogative ("Take that hill, soldier?").

Next time: How formal semantics helps answer certain initially puzzling questions about non-intentionalist theories of meaning.


Post script: One initially seductive rejoinder to this line of reasoning is that if someone says, "I love you", but means "I just want to get in your pants", then in fact the meaning of the words "I love you" in this context is "I just want to get in your pants". That is, in such cases, the speaker is not ignoring the meaning of words so much as redefining the words, at least for the duration of this speech act.

This argument is bogus, because such a framework for understanding meaning makes notions like lie, malapropism, sarcasm, etc., impossible (or vastly more convoluted).

We all share a clear intuition that there exists a distinction between the meaning of an utterance as it would be understood by a competent speaker, and the authentic mind-state of the person making that utterance. If I write, "George W. Bush is the Pepsodent of the United States", then it's pretty clear that the meaning of that sentence is that George W. Bush is a dental product, even though I may have meant that he is the chief executive of the federal government. And that is why a competent editor or English teacher might inform me that I should replace the word "Pepsodent" with "President". If we consider intention and meaning to be identical, then the grounds for this correction disappear.

Monday, July 30, 2007

Intentionalism is obviously a fallacy

Oh, Mr. Werewolf, how you disappoint me.

Imagine that you get a job as a semaphore operator: one of those people who runs around on the runway signaling airplanes with batons. One day, you send the signal for "turn right" when you mean "turn left". The airplane turns right, and as a result, crashes into the terminal. Dozens of people are killed and hundreds injured.

The port authorities conduct an inquiry. Multiple witnesses, from the pilot to the airport control tower to the semaphore operator on the next runway over, testify that you signaled the semaphore for "turn right". There is even video evidence that you signaled "turn right."

"But --- but --- I'm the author of the signal! And what I meant is to turn left!"

"Holy shit," says the interrogator, "how stupid are you? It doesn't matter what you meant. What matters is what you signaled. Your arms signaled turn right; the sticks signaled turn right; the visual image telegraphed by your action was to turn right."

You are summarily fired.

Language is a communicative medium --- a system of signals. The meaning of an utterance is not determined by the intent of the author, but by the meaning that the interpretive community applies to the relevant system of signals. Ink on paper has an objective existence outside of the author's head, just as a semaphore signal does, and the patterns of ink on paper acquire meaning in the context of an interpretive community (viz., English speakers, or whatever) independently of any vaporous and transient firing of neurons in their originator's head.

This is not a postmodernist idea. It is not even a modernist idea. It is trivial common sense. To claim otherwise is to support Humpty Dumpty's contention, in Through the Looking-Glass, that "When I use a word, it means just what I choose it to mean --- neither more nor less." If you believe in the intentionalist fallacy, you deserve to be one of the people who dies when the airplane crashes into the terminal.

Tune in next time for the explanation of why so many otherwise intelligent and reasonable people think that, for example, a novelist's intention towards his or her work has any special authority.

Sunday, July 29, 2007

Rudy Giuliani is a liar

So, my friends know of my longstanding animosity towards John McCain. But McCain's career is thankfully fading, so it's time to start dumping on the other Republican candidates, who deserve it just as much.

Not that anything at TPM needs my linkage, but I hereby reaffirm the truth that Rudy Giuliani speaks in a nonstop stream of lies.

Even more egregious than the lie about tax cuts, which takes up the bulk of the TPM post, is the recent lie that Democrats "refuse to admit the existence of Islamic terrorism". Right, Rudy, the millions of Democrats in New York City don't believe in the existence of Islamic terrorism. It's pretty easy to say such things from a diner in Texas. I dare you to stand on a soapbox in Times Square and repeat that sentence.

Thursday, July 19, 2007

I told him I knew _____ when I was young at summer camp

Attention conservation notice: I post this primarily to elevate the PageRank of the blog post linked herein.

So, when I was an awkward young man (ha! was?!... I suppose I'm less young than I used to be) I went to NJ Governor's School of the Arts (i.e., Art Nerd Camp) in creative writing.

This week DK, a friend whom I met there, sends me her pictures of Kal Penn, a.k.a. Kumar, who was there that same year.

I must have met him once or twice, but my memories of that time are dim; I barely remember all the writers, let alone the theater kids. Still, I can safely say that if my guidance counselor had not urged me to apply to Art Nerd Camp that year, my life would have been very different. Let nobody say that guidance counselors do no good in the world.

On a slightly related note, when I was at POPL 2006 in Charleston, SC, I took a break one night from revising my workshop talk slides to grab a burrito at a local chain. The girl behind the cash register told me I look just like that guy from Harold and Kumar. Granted, I was not wearing my stylin' headgear, but even without it I think the resemblance is, shall we say, distant at best. The conclusion I drew was that they don't get too many of them Orientals in these parts.

Tuesday, June 12, 2007

.sft: A proposal for software patent reform

As follows:

  1. Software companies that wish to protect their intellectual property register with a new ICANN gTLD, .sft.
  2. A .sft receives "IP points" every time it produces a "significant" software innovation. For example, every time a .sft publishes a peer-reviewed paper in a major computer science conference, that .sft gets 100 IP points.
  3. Any .sft may "sue" another .sft at any time, for any reason, for any quantity of money.
  4. Lawsuits are settled by best-of-7 tournaments of StarCraft. A .sft's designated StarCraft player ("IP lawyer") starts each match with a bonus quantity of minerals, Vespene gas, and peons determined by a time-weighted function of the .sft's IP points. The victor wins a fraction of their client's requested damages determined by the ratio of their buildings razed, units constructed, etc. vs. their opponents'.
  5. IP lawyers may play Protoss, Terran, Zerg, or random race, at their discretion.
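
The proposal deliberately leaves the damages formula to the arbiters, but for concreteness, here's one plausible reading of step 4 in Python (the scoring inputs are hypothetical stand-ins for Blizzard's end-of-game tallies, and the exact fraction is my own guess):

    def settle_lawsuit(requested_damages, victor_score, loser_score):
        # Step 4: the victor collects a fraction of the requested damages
        # proportional to the margin of victory, as measured by buildings
        # razed, units constructed, etc. The precise formula is not pinned
        # down by the proposal; this is one reasonable instantiation.
        return requested_damages * victor_score / (victor_score + loser_score)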

The merits of this reform are obvious. Much like patent law, StarCraft is governed by a system of arcane rules that are mostly irrelevant to the actual process of writing innovative software. Much like patent law, StarCraft's rules can only be mastered by a caste of professionals whose expertise is honed over years of practice. Unlike the legal system, however, StarCraft is swift, decisive, objective, and exquisitely balanced for fairness. Any minor loss in the quality of judgment on the margin would be overwhelmed by the reduced transaction costs of the system as a whole.

I can already hear the objections of the closed-minded, and I will respond to them in turn.

  • Q: You're introducing perverse incentives. Companies with better StarCraft players will beat companies who write better software!
    A: Paying for quality StarCraft players would simply be a cost of doing business, no different from hiring secretaries or accountants or paying your electric bill. Even though the market value of StarCraft skills would temporarily skyrocket, StarCraft does not have a government-sanctioned professional guild that artificially limits the labor supply, so in the long run the cost would still be lower than for ordinary lawyers. Anyway, if some company hires bad StarCraft players, the management's incompetent and the company deserves to lose in the marketplace, regardless of how innovative their software developers are.
  • Q: What's to stop the emergence of "IP trolls" who never innovate, but hire a lot of expert StarCraft players to sue everyone?
    A: First, only companies with substantial independent revenue streams would be able to compensate StarCraft players well enough to attract really good ones. Second, beyond a certain point, the bonus resources granted by accumulated IP points would produce an overwhelming advantage for companies that produce innovation.
  • Q: Wouldn't this give an unstoppable advantage to software firms in South Korea, which has an impressive national lead in competitive StarCraft talent?
    A: So what? The international distribution of firms holding technology-related patents in the past two decades has not been even remotely equal. Established United States firms like IBM and Microsoft hold vast patent portfolios, dwarfing those of firms in developing nations, and WIPO has been aggressively working to bring developing nations under First World IP regimes. Yet the software industry remains steadfastly innovative around the globe. Over time, other nations would earn IP points and develop local StarCraft talent, evening the score. In any case, freezing the rules over an unequal or even larcenous initial distribution is the very essence of property: if a Native American robber breaks into your home and takes your television, will you refrain from pressing charges when he's arrested?
  • Q: Your proposal permits any .sft to sue any other .sft, without any basis for infringement. That's absurd!
    A: Under the present system, lawsuits claiming intellectual property infringement can drag on for years and cost millions of dollars even if the plaintiff never specifies exactly what was infringed upon. By permitting any .sft to challenge any other .sft, we remove the fig leaf of "cause of action", which (let's face it) is mostly just embarrassing. Also, as noted above, .sft lawsuits would be swift and decisive, so the transaction costs would be low even for frivolous lawsuits. That said, I am not completely opposed to a ladder system, to save highly skilled lawyers from being constantly challenged by lowbies.
  • Q: IP points would give an unstoppable advantage to entrenched incumbents with deep IP portfolios. What's to stop them from suing everyone?
    A: Under the present patent system, IBM, Microsoft, or any number of other entrenched players with deep patent portfolios could hypothetically destroy the entire software industry in a convulsive paroxysm of lawsuits. Yet they choose not to, because suing everyone in the industry would result in "mutually assured destruction": everyone would countersue everyone else and everyone would lose. I see no reason that .sft lawsuits would be any different.
  • Q: What's to stop incumbents from suing little startups that have lower-tier StarCraft players and few IP points?
    A: Startups today usually have small to nonexistent patent portfolios, but big companies don't find it worth their time to sue them because they also have very few assets worth confiscating. As a startup grows more successful and its pockets get deeper, it should be able to afford to hire StarCraft talent and accumulate IP points, just as companies today accumulate patents and other IP as they grow bigger.
  • Q: The software industry's incredibly innovative to date, and your reform would bring it under a radically new legal regime. How can you be sure it won't stifle innovation?
    A: How can you be sure it will stifle innovation? There's ZERO evidence that it would. Imagine if we'd listened to such naysayers back when software patents were proposed. Don't you think the burden of proof is on the opponents of regulation, rather than on the proponents?
  • Q: StarCraft 2 is coming out soon. How does that affect your proposal?
    A: StarCraft 2 is currently an unknown quantity, whereas StarCraft is a classic that has withstood the test of time. That said, we should keep an open mind about the gameplay innovations, and if they prove to be successful I am not opposed to adopting the sequel someday. For one thing, the improved graphical sophistication would make lawsuits more entertaining for bystanders to watch.
  • Q: Halo would be more exciting than StarCraft.
    A: Fuck off, frat boy.

Wednesday, January 24, 2007

Notes on [Hahn Litan 06]: Network Neutrality Part 1: Requests For Comments

[Full disclosure: I work for a large technology company that presently lobbies for network neutrality legislation. My personal views on network neutrality predate my employment, and are independent of it. Everything posted on this site reflects strictly my own thinking, and is not endorsed, sponsored, or approved by my employer in any way. Finally, this should go without saying, but nothing I write here reflects any confidential or proprietary information from my employer.]

The generally excellent Tim Lee @ TLF today writes two posts reacting to a recent white paper on network neutrality by R. W. Hahn and R. E. Litan of AEI/Brookings[0] (also to be published in the Milken Institute Review).

As I read it, Hahn and Litan's paper makes the following major claims. First, they claim that the Internet's not neutral, and never has been --- hence the title, "The Myth of Network Neutrality...". Second, they claim that existing proposed legislation to codify network neutrality into law would do more harm than good. Third, they claim that there are economic benefits to tiered pricing for network-layer "quality of service" (QoS).

The major weakness of the paper is that the authors do not understand Internet technology, and they seem to have consulted zero experts who do. As a result, they make many elementary errors of fact, rendering their argument unsound and their conclusions unsupportable. I am rather too tired tonight to go through all the errors at once, so I will delineate only a few in this post. Expect at least one follow-up post sometime in the next N days.

On Requests For Comment

Hahn and Litan cite several historical RFCs in support of the following conclusion:

. . . early writings on the Internet indicate that prioritization has always been considered an important design characteristic for TCP/IP --- sharp contrast to the romantic ideal of the end-to-end principle.

This post will examine how the authors attempt to support this claim, and how they fail.

As an aside, before I dive in, it is important to recognize that RFCs ("Requests For Comments") are not necessarily authoritative design documents for the Internet. RFCs have no binding force except insofar as many engineers independently decide to follow them --- a sort of community-based moral suasion, given economic force by network effects. Furthermore, RFCs vary widely in purpose: they may be arcane memos warning about one-time events, ideas of untested merit from the dustbin of history, cutting-edge research that may or may not ever be adopted, or even jokes.

Only a few RFCs describe protocols that have been widely implemented and deployed on the Internet, and even those are almost always provisional.

On to the meat. Hahn and Litan cite four RFCs. Tim seems to have read only the short version of Hahn and Litan's paper, which lacks RFC citation numbers, but they're in the footnotes of the long version:

  • RFC 675: Specification of Internet Transmission Control Program
  • RFC 791: Internet Protocol
  • RFC 1633: Integrated Services in the Internet Architecture: an Overview
  • RFC 794: Pre-emption

The first two are (ancestors of) bona fide, widely-adopted standards. The third is a position paper by a group of highly respected networking researchers. So, those three RFCs are not jokes, although the first was superseded by RFC 793 before ARPANET even became the Internet, and the third has never, to date, been deployed on the Internet at all. Then there's the fourth, which does not even describe the Internet, but another network entirely; so it is not exactly a joke, but it's pretty funny to see it cited as evidence of the Internet's principles.

So, here are the mistakes the authors make w.r.t. each of these RFCs in turn. Note that I share Tim's frustration that the authors have not, in most cases, provided either page numbers or quotes, so in some cases I have had to interpolate the exact citation.

RFC 675

This RFC describes an early version of TCP, one of the two fundamental protocols of the Internet. The authors state that Vint Cerf "explained that outgoing packets should be given priority over other packets to prevent congestion on the ingoing and outgoing pipe" [HahnLitan06, p. 4]. I believe the authors are referring to section 4.4.1, as the word "incoming" only appears in a handful of places in this RFC, and only once in any context related to priority:

From the standpoint of controlling buffer congestion, it appears better to TREAT INCOMING PACKETS WITH HIGHER PRIORITY THAN OUTGOING PACKETS.

The all-caps are in the original. Hahn and Litan appear to have the capitalized part exactly backwards, which doesn't speak well of their conscientiousness, or that of the editors at the Milken Institute Review. However, that's not the deep problem. The deep problem is that Hahn and Litan do not understand what TCP is, and what is being described here.

First of all, TCP is an end-to-end protocol. Period. Every single normative sentence[1] in RFC 675 describes an operation that occurs on an end-host, not on a router internal to the network. The above sentence describes how an end-host should prioritize processing of packets in buffers inside its networking stack. It is, in other words, a hint to operating system implementors who want to write TCP/IP stacks. It has nothing whatsoever to do with "the network" prioritizing packets.

If this sounds like an abstruse distinction, imagine "the network" as the US Postal Service, and an end host as your home. The operating system's network buffer is your mailbox. What the above sentence is saying is that before you stuff outgoing mail into your mailbox, you should take your incoming mail out of your mailbox. It is saying nothing about whether the US Postal Service should pick up your mail in one order or another.
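
In code, the distinction looks something like the following sketch of a hypothetical end-host networking stack in Python (invented for illustration, not taken from any actual TCP implementation):

    from collections import deque

    def service_buffers(incoming: deque, outgoing: deque, deliver, transmit):
        # RFC 675's advice, executed entirely on the end-host: drain
        # packets you have received before queuing more of your own.
        # Nothing here runs on a router. This is taking the mail out of
        # your mailbox before stuffing new mail in, not the Postal
        # Service deciding whose mail to carry first.
        while incoming:                  # higher priority: incoming packets
            deliver(incoming.popleft())  # hand them up to the application
        while outgoing:                  # lower priority: our own sends
            transmit(outgoing.popleft())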

Does RFC 675 present a "sharp contrast to the romantic ideal of the end-to-end principle"?

It does not.

RFC 791

This RFC describes IP, the other fundamental protocol of the Internet. Again, the authors do not give exact quotes or specific citations, but they state:

A 1981 Request for Comments explained that precedence—a measure of importance of the data stream—could be used as a means of differentiating high priority traffic from low priority traffic.

Now, RFC 791 does contain some discussion of precedence. A "packet" is a little bundle of bits that a network shuffles around. Among other things, a network protocol must specify the form of its packets, just as the US Postal Service demands that envelopes be addressed and stamped in a particular manner. IP specifies a packet format with 8 bits reserved for the "Type of Service" field, which can technically be used to indicate the priority of a packet.

The motivation for this is as follows. Back in 1981, before the Internet emerged as the winner in the ecology of network designs, networking researchers were experimenting with different kinds of networks to run IP on. Some of those networks prioritized packets based on how the packets described themselves. It was believed that IP packets should reserve some space so that these networks could stash priority information in them. This reserved space is the "Type of Service" field.

RFC 791 does not describe how networks would use the "Type of Service" (TOS) field. That is specified in RFC 795, which describes how TOS is used by the AUTODIN II, ARPANET, PRNET, and SATNET networks.

None of those networks was the Internet. They were networks for military communications in the 1960s and '70s. None of them exists today. Now, as every geek knows, ARPANET was the ancestor of the Internet; but not all the features of ARPANET were carried over to the modern Internet. In particular, modern Internet routers do not use the TOS field, at least not as described in RFCs 791/795. Eliding many gory details, DiffServ (a.k.a. DSCP) supersedes TOS, and it is used for traffic shaping within individual subnets, not on the Internet as a whole.
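
For the curious: the field in question is a single byte in the IPv4 header, right after the version/IHL byte. Here's a sketch in Python of how one would pull it apart, showing both the RFC 791 precedence bits and the DSCP bits that later redefined the same byte (the function name is mine):

    import struct

    def parse_tos_byte(ipv4_header: bytes) -> dict:
        # RFC 791 puts the 8-bit Type of Service field second in the
        # header, with the top 3 bits as "precedence". DiffServ
        # (RFC 2474) later redefined the top 6 bits of the same byte
        # as the DSCP codepoint.
        _version_ihl, tos = struct.unpack_from("!BB", ipv4_header, 0)
        return {
            "precedence": tos >> 5,  # RFC 791 precedence (top 3 bits)
            "dscp": tos >> 2,        # DiffServ codepoint (top 6 bits)
        }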

In short, the section on precedence in RFC 791 describes a mechanism that is not, and has never been, used to prioritize packets on the Internet.

Does RFC 791 show that "prioritization has always been considered an important design characteristic for TCP/IP"?

It does not.

RFC 1633

RFC 1633 is, as noted above, a position paper by a group of distinguished networking researchers: R. Braden, D. Clark, and S. Shenker. In this RFC, Braden et al. argued (in 1994) that at some point in the future, a QoS mechanism should be adopted into the Internet's fabric.

Considered as a technical question, this is a controversial argument, but not a ludicrous one. I could discuss it at some length (and, if I ever get my act together, perhaps someday I will do so in this space), but for the moment I must focus on this RFC's relevance to Hahn and Litan's white paper. Hahn and Litan cite this as an "early writing on the Internet" that indicates that "prioritization has always been considered an important design characteristic of TCP/IP". There are at least two problems with this reading.

First, a document dated 1994 cannot be an early writing on the Internet. In June 1994, the Internet had not become a commercial mass phenomenon --- that had to wait for the spread of Netscape --- but it had existed for almost a decade. And, indeed, RFC 1633 sketches a speculative protocol extension to the existing Internet that has not, to date, been adopted by anybody.

Second, and more importantly, here are a few direct quotes from RFC 1633, Section 2:

The fundamental service model of the Internet, as embodied in the best-effort delivery service of IP, has been unchanged since the beginning of the Internet research project 20 years ago [CerfKahn74]. We are now proposing to alter that model . . .

. . . Internet architecture was [sic.] been founded on the concept that all flow-related state should be in the end systems [Clark88].

Designing the TCP/IP protocol suite on this concept led to a robustness that is one of the keys to its success.

In short, the authors state exactly the opposite of what Hahn and Litan would have us adduce. End-to-end flow control was part of the "fundamental service model of the Internet" and "one of the keys to its success".

Does RFC 1633 show that the Internet presents "a sharp contrast to the romantic ideal of the end-to-end principle"?

It does not.

RFC 794

Best for last. This one's particularly hilarious. Hahn and Litan quote this RFC at length --- one of the few times they do so:

In packet switching systems, there is little or no storage in the transport system so that precedence has little impact on delay for processing a packet. However, when a packet switching system reaches saturation, it rejects offered traffic. Precedence can be used in saturated packet switched systems to sort traffic queued for entry into the system. In general, precedence is a tool for deciding how to allocate resources when systems are saturated. In circuit switched systems, the resource is circuits; in message switched systems the resource is the message switch processor; and in packet switching the resource is the packet switching system itself.

That's a fine excerpt from RFC 794. The problem is that RFC 794 describes AUTODIN, not the Internet. Do you use AUTODIN? Me neither.
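In case the quoted passage is opaque: "precedence" just means that when a saturated switch has to choose, it serves the highest-precedence traffic first rather than first-come-first-served. A toy sketch in Python (mine, obviously, not AUTODIN's):

import heapq

queue = []
arrivals = [(1, "routine-a"), (3, "flash"), (1, "routine-b"), (2, "priority")]
for precedence, packet in arrivals:
    # negate the precedence: heapq always pops the smallest tuple first
    heapq.heappush(queue, (-precedence, packet))

while queue:
    _, packet = heapq.heappop(queue)
    print(packet)   # flash, priority, routine-a, routine-b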

Vint Cerf was a networking researcher. He and Bob Kahn tried lots of things. That some of those projects used packet prioritization has almost no bearing on the fact that the one project that succeeded wildly was a neutral network with end-to-end flow control.

Does RFC 794 give us an "early writing on the Internet"? Does it show that "prioritization has always been considered an important design characteristic for TCP/IP"? Does it demonstrate the Internet's "sharp contrast to the romantic ideal of the end-to-end principle"?

It. Does. Not.

Conclusion

The above points become apparent to anybody of moderate technical knowledge who attempts to read the RFCs carefully and understand them. RFCs were frequently written by Ph.D.'s, but they were not written for Ph.D.'s; they were written for hackers.

It is, perhaps, understandable that Hahn and Litan --- two economists --- could not understand these RFCs in detail. However, they have misread the RFCs so completely that it is almost inconceivable to me that they could have consulted someone with the necessary background.

They construe RFC 675 --- a description of an end-to-end transport protocol --- as a blow against the end-to-end principle. They construe (portions of) RFCs 791, 1633, and 794 --- documents which do not describe the Internet --- as documents describing the foundational principles of the Internet. In some cases, as with 1633, they cite these documents in support of a claim that is specifically refuted by plain text in the document.

How could this happen?

I would guess that Hahn and Litan's "research" process went something like this. First and foremost, they knew that they wanted to produce a paper arguing against network neutrality regulation. They had heard somewhere about these "RFC" things, and they knew that Vint Cerf, one of the current big pro-neutrality voices, had written a bunch of them. So, they decided to go search for the words "precedence", "priority", and "quality of service" in the old RFCs. To their great delight, these words appeared in some RFCs by Cerf himself, and by other prominent networking researchers. Alas, these technical documents turned out to be pretty tough to interpret if you've never written a line of networking code in your life. However, never mind meaning or context: knowing their "research" community --- economists predisposed to disliking regulation --- they figured they could get away with fudging the citations anyway, because none of their peers would understand the RFCs either. Most of them wouldn't even bother to try. So they went ahead and wrote their paper, and got it accepted to a little economics review.

Now, I understand that this is a pretty nasty thing to say. Given Hahn and Litan's long and distinguished careers in academia and public service, I would like to believe something else, but I'm having trouble doing it. I mean, look at the evidence above. They have clearly leaned upon the facts, as the proverb goes, as a drunkard leans upon a lamppost: for support, not illumination.

At best, I can understand this behavior as a combination of ignorance and arrogance: maybe the authors believed their vast experience in parsing documents in economics and law made it unnecessary to consult experts in computer science ("Not even a real science --- it has 'science' in the title!"). At worst, though, one could argue that it's a mixture of intellectual dishonesty and irresponsibility.


In Part 2 (if I ever manage to write it): Hahn and Litan's errors regarding VPNs and World of Warcraft.


[0] I normally ignore anything that comes out of AEI, as it tends to be 99% worthless on technology issues, and it's more work than it's worth to sort the wheat from the chaff. Based on Tim's decision to post about this paper, I waived my normal skepticism, and was pretty badly disappointed. Sigh. I have adjusted my priors, as the Bayesians say.

[1] By "normative sentence", I mean one stating a property that a TCP implementation must have in order to be rightfully called a TCP implementation. Now, like most RFCs, 675 is not an ultra-terse mathematical specification, but a document intended to be a useful and readable guide to practical implementors. So, it gives some background about routers and such to provide the reader with context. But as Cerf and Kahn state, TCP makes almost no requirements of the underlying network beyond its ability to carry bits, which is one reason why it works over substrates ranging from circuit-switched telephony (dial-up Internet) to the postal service.

Wednesday, January 17, 2007

War's most significant bit

Read this. Then read this.

One of the reasons I post less frequently than I used to is that lately, my despair at the stupidity of humanity exceeds my fury, whereas the opposite once obtained. Shall I bother to point out the obvious? All right, once more into the breach. This post will be more indirect than it needs to be, but I can only overcome my sense of the inherent futility of it all by creeping up on the subject sideways.

Computers represent everything, including numbers, using bits. Each bit is either a one or a zero. One and zero are not the only numbers we would like to represent: two and three and five hundred million are all nice too. To represent other numbers, computers use several ones and zeros at a time. This is called binary notation:

11010110

represents the number 214. Binary notation works more or less like the decimal notation that everyone's familiar with. In decimal, you interpret a number by multiplying each successive digit from right to left by an increasing power of ten, so 214 = (2 * 10^2) + (1 * 10^1) + (4 * 10^0). In binary, you interpret the bits by multiplying by successive powers of two, so that the rightmost bit is multiplied by 1 (2^0), the next-to-rightmost bit is multiplied by 2 (2^1), and so on, up to the leftmost bit, which is multiplied by 2^(n-1), where n is the number of bits in your string. In the above case, because there are eight bits, the leftmost bit represents 2^7, or 128. Summing up, 128 + 64 + 16 + 4 + 2 = 214.
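If you'd like to check the arithmetic, a couple of lines of Python will do it:

bits = "11010110"
# sum each bit times its power of two, leftmost bit weighted highest
print(sum(int(b) << (len(bits) - 1 - i) for i, b in enumerate(bits)))   # 214
print(int(bits, 2))   # 214 -- Python's built-in base-2 parser agrees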

Note that the leftmost bit is vastly more important than the rightmost bit. If you twiddle the rightmost bit of 11010110, you get the string

11010111

which corresponds to 215. That's pretty close to 214. If you twiddle the leftmost bit, you get the string

01010110

which corresponds to 86. That's pretty far from 214. Programmers call the leftmost bit the most significant bit; we call the rightmost bit the least significant bit.
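You can watch this happen directly (Python again; the 0b prefix just writes a number in binary):

x = 0b11010110            # 214
print(x ^ 0b00000001)     # 215 -- flipping the least significant bit
print(x ^ 0b10000000)     # 86  -- flipping the most significant bit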

As the size of your string grows linearly, the difference in significance between the most significant bit and the least significant bit grows exponentially. With a 16-bit string, the difference is about thirty-three thousand to 1. With a 32-bit string, the difference is about 2 billion to 1.
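Concretely:

# weight of the most significant bit relative to the least significant
for n in (8, 16, 32):
    print(n, 2 ** (n - 1))
# 8 128
# 16 32768
# 32 2147483648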

The concept is suggestive, and programmers readily adapt it metaphorically to other subjects. In most decisions in life, it's important to get the most significant bits exactly right, and much less important to get the least significant bits exactly right. Err on the most significant bit in the quantity of your jet's fuel, and you're making a "water landing" in the middle of the Atlantic. Err on the least significant bit and nobody will notice.

The ability to concentrate on the most significant bits is also called "having a sense of proportion".

Here, in what I consider roughly descending order of significance, are several "bits" of truth from the years 2002-2007:

  • America should not have invaded Iraq.
  • Congress should not have given Bush authority to invade Iraq at his sole discretion.
  • Invading Iraq without a UN mandate hinders America's diplomatic efforts, which are crucial to antiterrorism and nuclear non-proliferation policy.
  • Given its failure to establish a stable state in Afghanistan, the Bush administration could not be trusted to handle the aftermath of the invasion of Iraq.
  • Invasion of Iraq and its subsequent destabilization fuel anti-American sentiment and provide a propaganda bonanza for violently radical Islamist movements all over the world.
  • Saddam Hussein's WMD program had not made significant progress towards either nuclear weapons or mass casualty biological weapons.
  • The doctrine of preventative war does not suffice to justify the Iraq War.
  • ...
  • ...
  • ...(several thousand more bits)...
  • ...
  • As a companion to coffee, key lime pie is superior to blueberry pie.
  • Absolute isolationism and absolute pacifism are philosophically unsound.
  • The Beyoncé Knowles song "Déjà Vu" was produced by chart-topping super-producer Rodney Jerkins.
  • Theoretically speaking, the doctrine of preventative war might someday suffice to justify some hypothetical war.
  • ...

One can summarize Megan McArdle's point (and Kevin Drum's point here) as follows: "Some leftists were wrong about the least significant bits on the Iraq War; therefore, the arguments of anti-war advocates remain no more credible than those of people who were wrong about the most significant bits."

Of course, stated this way, nobody would dare make such an argument. Obviously, a decision procedure that leads to correct answers in the most significant bits is strictly preferable to one that gets the most significant bits wrong but the least significant ones right. Arguing otherwise is transparently stupid. So former hawks take a circuitous route that's no less stupid, but slightly less transparent. By filling the conversation with angels-dancing-on-the-heads-of-pins arguments about absolute isolationism and absolute pacifism and preventative war, McArdle and Drum hope to divert readers' attention to the least significant bits and induce the illusion that the most significant bits are not, in fact, far more significant.

And I suspect that McArdle and Drum even believe that they're talking about important subjects, despite the fact that virtually nobody actually believes in absolute isolationism or absolute pacifism, or that preventative war can never be justified. If you're arguing that "sometimes war can be justified", you're arguing with Quakers and half-mad hermits who live alone in the woods. To pretend otherwise requires massive cognitive dissonance, or a completely unprincipled and remorseless willingness to erect straw men, or both.

Of course, pundits have ample reason for cognitive dissonance. Pundits blow hot air around for a living. Their sense of self-worth depends on the belief that such vigorous thermoconvection makes them better qualified to judge matters of import than the hoi polloi, who rely on simple rules of thumb, like "War Is Bad". It literally does not compute in their minds that some shaggy dumbass off the street with a picket sign could have better judgment than, say, a professional writer for the Economist or the Washington Monthly.

But "War Is Bad" is a pretty good rule, because in the vast majority of practical cases, nonviolent action leads to better outcomes than war. "War Is Bad" gets the most significant bit right far more often than the punditological prestidigitation of which McArdle et al. are so fond. If all liberal (and "libertarian") hawks had shouted from the rooftops in 2002 that "War Is Bad" instead of the pseudo-nuanced bullshit they actually said, it would have strictly improved objective outcomes for the nation.

But rather than learn from this experience, McArdle's looking around for excuses to ignore the lesson. As an individual, of course, McArdle barely matters at all, but she's representative of a whole equivalence class of formerly hawkish intellectuals who want to emerge from this debacle without troubling themselves to rethink a single assumption.

Hence my despair and fury. The stupidity, the arrogance, the willful blindness; and these people will not be called to account. If anything, they'll be rewarded.