Sunday, October 30, 2005

Sensible countermeasures against mass casualty terrorism

I've pointed several times, in this forum, to a multi-institutional course about cybersecurity and homeland security that's being offered this term at UW, UC Berkeley, and UCSD. One of the things that comes up in the lectures (video and slides available online) is that there are, in fact, many simple measures that would mitigate the casualties from chemical, biological, radiological, and nuclear attacks.*

It's commonplace, especially among conservatives, to say stuff like "you won't win the war on terrorism with defense --- you have to go on offense!" This is true, as far as it goes: in an open society, it's not feasible to defend every target against terrorist attack, which means that you can only prevent mass casualty terrorist events by using a mixture of diplomacy, intelligence, law enforcement, and military action. But this can't be an excuse for neglecting defensive measures, some of which would do considerable good.

For example, taking refuge in fallout shelters, even very crude ones, considerably mitigates the damage of a nuclear attack. Nuclear explosions kill people within a certain radius through raw blast force and heat, and there's not a whole lot you can do inside that radius. However, outside that radius, you want to be insulated from radioactive fallout for about 48 hours (by then, the radiation dose rate has dropped dramatically --- the most dangerously radioactive elements of fallout are also those that decay most rapidly). You can shield yourself pretty effectively by putting mass and distance between yourself and the radioactive dust outside: either take refuge in a shelter with thick walls, or move deep into the central spaces of a large building (far from the exterior surface).
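
To put a rough number on that 48-hour figure: civil defense planners have long used the approximation that fallout dose rates decay as t^-1.2, the so-called "7:10 rule" (every sevenfold increase in time since detonation cuts the dose rate by about a factor of ten). Here's a minimal sketch of that curve; note that the one-hour reference rate is an arbitrary illustrative number, not a prediction about any real weapon.

    # Rough fallout decay per the classic t^-1.2 approximation (the
    # "7:10 rule" of civil defense planning). The one-hour reference
    # dose rate is an arbitrary illustrative number.
    REFERENCE_RATE = 100.0  # dose rate at t = 1 hour (arbitrary units)

    def dose_rate(hours):
        """Approximate fallout dose rate `hours` after detonation."""
        return REFERENCE_RATE * hours ** -1.2

    for t in (1, 7, 48, 343):  # each 7x jump in time gives ~10x drop
        print(f"t = {t:3d} h: dose rate {dose_rate(t):7.2f}")
    # By ~48 hours the rate is under 1% of its one-hour value, which is
    # why sheltering through the first two days buys most of the benefit.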

So why isn't the Dept. of Homeland Security telling people in major metropolitan areas to prepare fallout shelters, and take other defensive measures? UC Berkeley public policy prof. Steve Maurer makes a good point on the course wiki discussion for a recent lecture:

It turns out that if you do something minimal like dig a hole in the backyard and put a door on top of it, that cuts down the radioactivity pretty dramatically. That's just physics, there's not much doubt it's true. But the last time the government tried to point this out in a big way was during the Reagan Administration. The problem with this pitch -- they called it "with enough shovels" -- was two-fold. First "everyone knows" that we will all die in a nuclear war, so everyone translated "with enough shovels" as evidence that Reagan "must be senile." Second, people like to avoid thinking about nuclear war. Saying "the government should do something" does that, saying "we'll all die" does that, but saying "you are the first line of defense" asks people to think which makes them anxious. So what you end up with is physically sensible advice that people would desperately want to know in a nuclear war but will create a huge political backlash if you bring it up in advance.

Now, during the Cold War, the most credible nuclear threat was global thermonuclear war, i.e. massive numbers of fusion bombs dropped on every major population center in America. In that context, I think that "we will all die in a nuclear war" was actually not much of an exaggeration. However, in the post-Cold War, post-9/11 era, people generally believe that the most credible nuclear threat is a fission bomb (considerably less potent than a fusion bomb), stolen or manufactured by terrorist groups, and detonated in one city. In this context, American society would largely survive intact, but there would be many people in the attacked metropolitan area whose survival would depend largely on how they act. Some of those people will be downwind, in the path of the fallout plume; it's unlikely that we'll have enough road and transit capacity to quickly evacuate them all; so having plans to place people in fallout shelters would be a really good idea.

And fallout shelters are just one defensive measure among many. Biological attacks are even more subject to mitigation than nuclear attacks --- the behavior of carriers and potential victims can make all the difference in the propagation of an infection.
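
To make the biological point concrete: in the textbook SIR epidemic model, the course of an outbreak is governed by the rate at which infectious and susceptible people come into contact, and that contact rate is precisely what individual behavior (staying home, avoiding crowds, complying with quarantine) controls. A toy simulation, with parameters invented purely for illustration:

    # Toy SIR epidemic model (discrete daily steps). All parameters are
    # invented; the point is only that halving the contact rate, which is
    # what behavior change accomplishes, collapses the outbreak.
    def sir_final_size(beta, gamma=0.25, days=300, n=1_000_000, i0=10):
        s, i, r = n - i0, i0, 0
        for _ in range(days):
            new_inf = beta * s * i / n  # new infections, driven by contacts
            new_rec = gamma * i         # recoveries/removals
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        return r

    for beta in (0.5, 0.25):  # normal vs. halved contact rate
        print(f"contact rate {beta}: ~{sir_final_size(beta):,.0f} infected")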

Why don't you and I already know about these countermeasures? Why don't we already know what to do in the event of a mass-casualty terrorist attack? Clearly, if America's political leadership were serious about defending America against terrorism, then undertaking a public education campaign would be part of their strategy, regardless of the political difficulty. Just as clearly, our leaders aren't doing this. Instead, we've gotten inscrutable color-coded "terror alerts" and vague exhortations to spy on our neighbors.

Of course, one can just add this to the long list of counterterrorism opportunities missed in the past four years. When you get down to it, our political leaders either haven't thought very seriously about counterterrorism policy, or else their thinking on the subject has been profoundly misguided, or both.

At this point, it would be incredibly easy, and appropriate, to segue into a broader rant against the ruling political party. But, I don't really have time to get into all that tonight, so for now I leave this as an exercise for the reader.


* Incidentally, of these four weapon categories ("CBRN"), the experts who've lectured in this course think that only biological and nuclear attacks ("B" and "N") have serious potential to be "mass casualty" events. The reasons are somewhat complicated, but in a nutshell:

  • It's hard for a terrorist to deliver chemical weapons in a way that causes hundreds or thousands of fatalities, as opposed to a few dozen.
  • The radiological attacks that terrorists are likely to be capable of executing don't generate piles of dead bodies. Rather, they increase the victims' lifetime probability of getting cancer by a few percent. This might have considerable psychological effect, but it's not a mass casualty event (the rough calculation below gives a sense of scale).
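
To see why a few percent of added cancer risk, however awful for the individuals involved, isn't a mass casualty event, here's a back-of-the-envelope calculation under the standard linear no-threshold assumption. The risk coefficient is the ICRP's commonly cited rough figure; the dose numbers are invented for illustration.

    # Back-of-the-envelope radiological risk, assuming the standard
    # linear no-threshold (LNT) model. The risk coefficient is the
    # commonly cited ICRP rough figure; the doses are invented examples.
    RISK_PER_SIEVERT = 0.05   # ~5% lifetime fatal-cancer risk per Sv
    BASELINE_RISK = 0.20      # rough U.S. lifetime fatal-cancer risk

    for dose_sv in (0.01, 0.1, 0.5):  # hypothetical bystander doses
        added = dose_sv * RISK_PER_SIEVERT
        print(f"{dose_sv * 1000:5.0f} mSv: +{added:.2%} lifetime risk "
              f"(on a ~{BASELINE_RISK:.0%} baseline)")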

Saturday, October 29, 2005

This is your blog on Pharyngula + Kottke

(Warning: Navel-gazing metablogging ahead. Skip if you value your time.)

This is your blog on Pharyngula + Kottke:

[stats chart, 29 Oct 2005]

For comparison, as previously noted, this is your blog on Pharyngula + Atrios:

[stats chart, 24 April 2004]

Examining the content of the two posts in question, I conclude that being colorfully obnoxious is the easiest way to get burst traffic.

Truthfully, I'm rather ambivalent about both the attention, and the reason for it. As I've written many times here, this blog's a way for me to vent my thoughts and to let a few friends know what I'm thinking about, and it works pretty well. More attention would effect a Heisenbergian perturbation that I'm not sure I want. Plus, I resolved a while back to strive for less petulance and more generosity of spirit here. I guess I have a little too much spleen to live up to that resolution all the time; I don't beat myself up about this, but it's a bit chastening to have spotlights (even the modest, fleeting spotlights of blogs) shining on the exact moments I fall off the wagon.

Anyway, I'm not sure what the point of all this was. Metablog mode off.

p.s. As for why I write things even though I know they may be obnoxious, consult Oliver Wendell Holmes's satirical Autocrat of the Breakfast Table:

All uttered thought, my friend, the Professor, says, is of the nature of an excretion. Its materials have been taken in, and have acted upon the system, and been reacted on by it; it has circulated and done its office in one mind before it is given out for the benefit of others. It may be milk or venom to other minds; but, in either case, it is something which the producer has had the use of and can part with. A man instinctively tries to get rid of his thought in conversation or in print so soon as it is matured; but it is hard to get at it as it lies imbedded, a mere potentiality, the germ of a germ, in his intellect.

Metablog mode off. (For real this time.)

Sunday, October 23, 2005

Violence, religion, and double standards

Some of the replies to my previous post have predictably remarked on the casual violence therein. Truthfully, I am a little uncomfortable with it --- but only a little, since I see it as an illustrative thought experiment, not an incitement to actual violent action. I don't actually advocate using a baseball bat on Intelligent Design advocates, any more than Schrödinger advocated poisoning actual cats.

Nevertheless, I do take the point that the seductiveness of violent rhetoric is a dangerous thing to fall into. Therefore, I will apologize for my previous post if Intelligent Design advocates agree to disavow the text containing the following passages:

Thus says the LORD: About midnight I will go out through Egypt. Every firstborn in the land of Egypt shall die, from the firstborn of Pharaoh who sits on his throne to the firstborn of the female slave who is behind the handmill, and all the firstborn of the livestock. Then there will be a loud cry throughout the whole land of Egypt, such as has never been or will ever be again.
Moses became angry with the officers of the army, the commanders of thousands and the commanders of hundreds, who had come from service in the war. Moses said to them: "Have you allowed all the women to live? These women here, on Balaam's advice, made the Israelites act treacherously against the LORD in the affair of Peor, so that the plague came among the congregation of the LORD. Now therefore, kill every male among the little ones, and kill every woman who has known a man by sleeping with him. But all the young girls who have not known a man by sleeping with him, keep alive for yourselves."
Then from the smoke came locusts on the earth, and they were given authority like the authority of scorpions of the earth. They were told not to damage the grass of the earth or any green growth or any tree, but only those people who do not have the seal of God on their foreheads. They were allowed to torture them for five months, but not to kill them, and their torture was like the torture of a scorpion when it stings someone. And in those days people will seek death but will not find it; they will long to die, but death will flee from them.
Then another angel, a third, followed them, crying with a loud voice, "Those who worship the beast and its image, and receive a mark on their foreheads or on their hands, they will also drink the wine of God's wrath, poured unmixed into the cup of his anger, and they will be tormented with fire and sulfur in the presence of the holy angels and in the presence of the Lamb. And the smoke of their torment goes up forever and ever."
The fourth angel poured his bowl on the sun, and it was allowed to scorch people with fire; they were scorched by the fierce heat, but they cursed the name of God, who had authority over these plagues, and they did not repent and give him glory.
The fifth angel poured his bowl on the throne of the beast, and its kingdom was plunged into darkness; people gnawed their tongues in agony, and cursed the God of heaven because of their pains and sores, and they did not repent of their deeds.

Any Christian who doesn't disavow the above has no grounds for criticizing the violence in my previous post. These passages aren't from some random blog; they're from the central text of a religion. The crazy thing is that, unlike my completely hypothetical thought experiment, Christians seriously believe that God did (or will do) these things, and that he's righteous in doing so (everything God does is axiomatically righteous).

By my estimate, I have a huge amount of headroom here. Unless I start wishing that Intelligent Design advocates choke for a thousand years on the putrid rot of their own entrails while watching their children being raped by goats, I'm still way undershooting the cruelty that's glorified by the Bible.

But this is just the same old story: there's a double standard for secular and religious folk. When a secularist, even in jest, even in a moment of frustration, invokes hypothetical slapstick violence to illustrate a point, it's evidence that we're evil. Yet when the central holy text of a religion advocates deadly serious, brutal violence on a massive scale, that's somehow OK. Similarly, evolution has mountains of evidence behind it, but should be dismissed because there are still some open questions. Yet Intelligent Design, which completely lacks evidence --- or even anything resembling a testable hypothesis for which evidence could be adduced --- should be taken seriously. What can one call this, but intellectual dishonesty on a massive scale?

Tuesday, October 18, 2005

The only debate on Intelligent Design that is worthy of its subject

Moderator: We're here today to debate the hot new topic, evolution versus Intelligent Des---

(Scientist pulls out baseball bat.)

Moderator: Hey, what are you doing?

(Scientist breaks Intelligent Design advocate's kneecap.)

Intelligent Design advocate: YEAAARRRRGGGHHHH! YOU BROKE MY KNEECAP!

Scientist: Perhaps it only appears that I broke your kneecap. Certainly, all the evidence points to the hypothesis that I broke your kneecap. For example, your kneecap is broken; it appears to be a fresh wound; and I am holding a baseball bat, which is spattered with your blood. However, a mere preponderance of evidence doesn't mean anything. Perhaps your kneecap was designed that way. Certainly, there are some features of the current situation that are inexplicable according to the "naturalistic" explanation you have just advanced, such as the exact contours of the excruciating pain that you are experiencing right now.

Intelligent Design advocate: AAAAH! THE PAIN!

Scientist: Frankly, I personally find it completely implausible that the random actions of a scientist such as myself could cause pain of this particular kind. I have no precise explanation for why I find this hypothesis implausible --- it just is. Your knee must have been designed that way!

Intelligent Design advocate: YOU BASTARD! YOU KNOW YOU DID IT!

Scientist: I surely do not. How can we know anything for certain? Frankly, I think we should expose people to all points of view. Furthermore, you should really re-examine whether your hypothesis is scientific at all: the breaking of your kneecap happened in the past, so we can't rewind and run it over again, like a laboratory experiment. Even if we could, it wouldn't prove that I broke your kneecap the previous time. Plus, let's not even get into the fact that the entire universe might have just popped into existence right before I said this sentence, with all the evidence of my alleged kneecap-breaking already pre-formed.

Intelligent Design advocate: That's a load of bullshit sophistry! Get me a doctor and a lawyer, not necessarily in that order, and we'll see how that plays in court!

Scientist (turning to audience): And so we see, ladies and gentlemen, when push comes to shove, advocates of Intelligent Design do not actually believe any of the arguments that they profess to believe. When it comes to matters that hit home, they prefer evidence, the scientific method, testable hypotheses, and naturalistic explanations. In fact, they strongly privilege naturalistic explanations over supernatural hocus-pocus or metaphysical wankery. It is only within the reality-distortion field of their ideological crusade that they give credence to the flimsy, ridiculous arguments which we so commonly see on display. I must confess, it kind of felt good, for once, to be the one spouting free-form bullshit; it's so terribly easy and relaxing, compared to marshaling rigorous arguments backed up by empirical evidence. But I fear that if I were to continue, then it would be habit-forming, and bad for my soul. Therefore, I bid you adieu.


UPDATE (22 Oct.): If you're a creationist or IDiot [0], and you're suddenly possessed by the urge to comment on this post, please don't bother. I know what you're going to say. When I was an undergrad, I read talk.origins for a while, and I have seen every single creationist argument under the sun. I spent many an hour watching people knowledgeable about evolution debating creationists: patiently debunking the same tired arguments over and over and over and over and over and over and over and over and over and over and over again, responding in good faith to arguments that were clearly disingenuous, dumbing down their writing style to a second-grade level so that creationists could understand (and even then creationists wouldn't understand), and even copying and pasting from FAQs because creationists were too lazy to open up URLs in their web browser. All to no avail.

So, you may think you're going to blow me away with your amazing show of rhetoric, but believe me, I have seen it before, and you're wrong. The thing that you're about to write is not only wrong, but transparently, stupidly, embarrassingly wrong, so wrong that it makes me wince inwardly with shame at the fact that you're a member of the same human race that I am. What you're about to write is evidence that you haven't bothered to read the FAQs, or comprehended a single book on evolutionary biology that's not written by one of your crackpot creationist pseudo-intellectuals. So don't bother writing what you're going to write. Just go away.

[0] Really, creationists and Intelligent Design advocates are the same thing, just like a clown and a clown carrying an umbrella are really the same thing.


UPDATE' (22 Oct.): Two further clarifying points, since this page unexpectedly got linked rather widely (more...).

First, I stand by the position that the above post is the only debate on Intelligent Design that's worthy of its subject. Now, in a democratic society one must, in the public sphere, sometimes engage in good-faith debate with people or ideas that do not deserve it --- with ignorant or dishonest people, with bad ideas --- and indeed, there are legions of people with backgrounds in evolution who are doing exactly that. Call me an asshole if you want, but don't you dare claim that my post is somehow representative of evolution advocates. For literally decades, evolution advocates have responded to the abuse and astonishing mendacity of creationists/IDiots with patience, careful explanations, and copious fact-checking.

Nevertheless, I'm not one of those people. I'm never going to debate Intelligent Design seriously in this forum. This is a personal weblog, the Internet equivalent of my front yard, and under normal circumstances it's only read by myself and a handful of my friends. If I'm having a barbecue in my front yard with some friends, and we make derisive noises about Intelligent Design, then Intelligent Design advocates who overhear and venture into my yard can expect to be viciously mocked. They should not expect to be taken seriously, any more than anti-Semitic conspiracy theorists or flat-Earthers can expect to be taken seriously, in my yard. If someone believes in Intelligent Design, I believe (s)he's either a nutball, or simultaneously ignorant and too lazy to take elementary steps to remedy their ignorance. Were I writing for an Op-Ed page or teaching in a classroom, I could muster all kinds of reasoned argument against ID, but I'm not, and I won't.

Second, there seems to be a distressingly common misperception among non-ID advocates that ID's somehow valid in its own (non-scientific) sphere of debate. But that, too, is a load of crap. ID is not a generic theological or philosophical argument for the possibility of a designer. ID is a specific intellectual/political movement that explicitly seeks to establish scientific grounds for rejecting the possibility of evolution without a designer. If ID were simply a theological or philosophical argument, there would be no way to introduce it to school science curricula. But that's one of the ID movement's stated primary objectives. People get confused by this, because ID's methods are so fundamentally unscientific, but always remember that ID calls itself science.

Let me repeat that: ID calls itself science. ID calls itself science. ID calls itself science. And therefore, ID must be judged by the criteria of science, not philosophy or theology.

And as science, ID is absolutely the pits. It is a fundamentally non-scientific argument that calls itself scientific (note my use of the restrictive subordinating conjunction, "that", instead of "which"). Therefore, it's a contradiction in terms to say that ID is "valid" when considered nonscientifically.


UPDATE'' (23 Oct.): Perhaps I should have forgone all the above and simply linked to Samuel Johnson's refutation of Bishop Berkeley (via MonkeyFilter).


UPDATE''' (23 Oct.): IDiots, unsurprisingly, seem to lack basic reading comprehension skills. What part of "Just go away" do you not understand? If I read one more comment claiming there's no evidence for evolution, then I'm just going to delete it, period. No, I'm not going to point you to evidence. If you're too lazy to type the words "evidence evolution" into Google and hit Enter before you post such an outrageous claim, then I don't believe I have any obligation to respect your desire to defecate into my comment box.


UPDATE'''' (24 Oct.): Well, it had to happen. Godwin's Law strikes again. Unlike Usenet, however, blog technology permits threads to be closed for comment, and I've done that here. Go post on your own blog, kiddies.

Monday, October 17, 2005

The Google Print shakedown

Donna Wentworth at Copyfight points to a pretty good article by Tim Wu at Slate on the "exposure culture" versus the "control culture", and how it bears on Google Print.

I think Wu's reasonably astute about the principles. However, I also think that it's a mistake to think that opponents of Google Print are really motivated by principles of any kind. The Authors Guild and others do not really want Google Print to omit their books. If they did, then they could just take advantage of Google's offer to opt out, but they're not doing that --- they're trying to get Google to leave everyone's books alone. Why is that?

Well, what goes through an author or publisher's head when confronted with Google Print? Unless (s)he's a complete idiot, it must be something like:

  1. Holy cow, you didn't ask my permission! Get your grubby mitts off my work!
  2. Oh, you'll let me opt out? Fine: "Get your grubby mitts---"
  3. ...wait a second. If I opt out, it just means that other books will get exposure, and mine won't! I'll be at a competitive disadvantage!

This is why the publishers don't want to opt out of Google Print. Each individual publisher knows that unless everyone opts out, the ones who do will be at a disadvantage. And publishers can see at a glance that something like Google Print would increase the total market for books, so they don't even really want everyone to opt out.
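
The logic is a textbook collective action problem. A toy payoff table (the numbers are invented; only their ordering matters) shows why no publisher opts out unilaterally:

    # Toy payoff table for a single publisher deciding whether to stay
    # in Google Print. Numbers are invented; only their ordering matters.
    payoff = {
        # (my choice, everyone else's choice): my payoff
        ("stay in", "stay in"): 3,   # everyone shares the exposure
        ("stay in", "opt out"): 5,   # I get all the exposure
        ("opt out", "stay in"): 0,   # rivals get exposure, I don't
        ("opt out", "opt out"): 2,   # the pre-Google status quo
    }

    for others in ("stay in", "opt out"):
        best = max(("stay in", "opt out"),
                   key=lambda me: payoff[(me, others)])
        print(f"If everyone else chooses to {others}, "
              f"my best move is to {best}")
    # Staying in is a dominant strategy, even though publishers might
    # collectively prefer the leverage of a coordinated opt-out.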

Their real goals are twofold:

  • They want a cut of Google's profits. They want to get paid not only on the "back end" (where customers buy their books) but on the "front end" (where customers search for their books).
  • They want control of the sales channel. They especially want to control whether (and how) competing products get presented alongside theirs in search results. As a bonus, they'd love to completely eliminate competition from all those pesky, obscure, out-of-print books, which are only available used and hence net publishers no profits anyway. Coincidentally, these books are the hardest ones to get copyright clearances for, and therefore the ones most likely to be unavailable if Google's required to get explicit permission for each book.

That's all it's about: greed, plain and simple. The principles of copyright are only a fig leaf over publishers' tumescent desire for a piece of the search business.

In the end, this isn't about whether Google Print will continue to exist, because it (or something like it) will. It's only a question of which Google Print our society chooses for itself. If the Authors Guild and their ilk prevail, then the result will be an impoverished Google Print, one with far fewer books (and far fewer older books), and one where publishers hold veto power over the functionality and design of the service. If Google prevails, then the result will be an organic, ever-growing wealth of services, offered by Google and by others, competing with one another to help our society achieve the second and third of S. R. Ranganathan's famous laws of library science: every reader their book, and every book its reader.

Monday, October 10, 2005

Academia, meritocracy, and some semi-lame analogies

(I should really be working, but it's late at night, I'm tired, and I've got a mental itch. So I'm going to scratch it, and then try to load my brain with some final tasks to work on when I finally fall asleep.)

Political scientist and blogger extraordinaire Daniel Drezner was just denied tenure at the University of Chicago, in spite of a C.V. that, from my viewpoint as a layperson elsewhere in academia, seems pretty impressive for a standard six-year tenure clock.

Of course, at truly top-tier institutions, a strong C.V. is no guarantor of tenure. (By top-tier, I mean, roughly, a top-ten ranked department in a dynamic field.) To an outsider, this may seem like evidence that academia is somehow unfair or corrupt. Indeed, several comments on Drezner's post construe his tenure denial as evidence for a sweeping indictment of academic hiring practices.

In my opinion, this reveals a basic misunderstanding of the top tier of U.S. academia. This isn't an ordinary career track, where "everybody who's adequate" has a good chance at a job. Rather, it's a high-powered, competitive, elite profession, with a surplus of talented people competing for a small, relatively fixed pool of highly desirable spots.

In fact, the profession it most closely resembles is major league sports. To be a major-league pitcher, you have to be more than a merely excellent baseball player. You have to be one of the best players in America. This means you possess a combination of talent, focus, drive, and personal resilience which enables you to consistently perform, day in and day out, through personal and professional crises, at a level that outclasses all but a couple hundred of your peers nationwide.

How hard is this? Well, everybody's known some run-of-the-mill smart people in their lives: that kid you knew in high school who got A's without trying, that co-worker who's always got something clever to say, that wonderfully articulate blogger whom you enjoy reading, etc. Drawing on this commonplace experience, people outside academia conclude that top-tier professors must be roughly like the smart, bookish people they've known, only more so. This is like thinking that Barry Bonds is like the star of your high school baseball team, only more so. The truth is that there's a quantum difference.

In all humility, I submit that unless one has spent a few years inside an academic research community --- not merely taking courses at a university, but actually observing professors conduct their long-term research programs --- one doesn't truly know what it takes to become a tenured professor at a top-tier department. You cannot know, any more than you can truly understand what it takes to hit a major league fastball until you've seen it up close, from the batter's box, as it whizzes by faster than you can blink, so fast that you can barely believe it was there at all. This sounds incredibly elitist, but it's true.

Of course, in addition to possessing talent and drive, the candidate must be lucky, in a number of ways. For one thing, there's the lottery of life itself. The filtering mechanisms of American society kick in early, and operate mercilessly through the first few decades of your life. If you don't demonstrate some promise in high school, it's unlikely that you'll go to a halfway-decent college. If you don't show exceptional promise at a halfway-decent-or-better college, you probably won't go to a good graduate school. If you don't go to a good grad school (and/or postdoc, depending on the field), and do impressive research there, you definitely won't get a top-tier professorship. These filters are brutally selective, although you will notice from my phrasing that the earlier stages are more forgiving than the later ones.

However, this is hardly an indictment of academia in particular. Somewhere in America, there's a teenager who's throwing a baseball at a cardboard rectangle, and who could be the next Roger Clemens, but who never will. He'll never have the opportunity: he won't get the right coaching, he won't get scouted, or he'll simply be passed over for somebody who develops earlier and hence looks more promising when the scouts visit. This doesn't mean that baseball is especially non-meritocratic. It just reflects the general truth that it's inherently hard and expensive for society to establish a perfect meritocracy, which would entail maximizing the personal development of every single individual. So, we make do with a "good enough" meritocracy. We give most people (at least, middle-class people) a decent but imperfect chance, and develop "enough" talented people to fill the available jobs.

In fact, I personally believe that academia's much closer to a meritocracy than most other institutions in America. Your parents' connections and your personal wealth make exactly zero difference in your grad school app, at your dissertation defense, or during tenure review. Can you say that of success in business, politics, art, or literature? In our society, only professional sports and the military strike me as contexts where objective achievement drives long-term professional advancement to a similar degree.

Now, this isn't to say that random politics and backbiting can't cause some significant fraction of hiring decisions to go awry. However, in aggregate and in the long run, those with truly impressive research ability --- those who produce influential refereed publications --- will find secure positions somewhere. Academic departments, like professional sports teams, have strong incentives to hire people who can produce.

So, returning to Drezner's case: the question's not whether he's a brilliant thinker or a solid researcher. The questions, for a place like U. of Chicago poli-sci, would be much starker --- I'd guess something like the following:

  • If you listed, in order, the world's five most important active researchers in his academic subspecialty, would Drezner be on the list? What number?
  • Is that academic subspecialty one of the most important in our field today, or at least important enough that we want to spend a twenty-year tenure slot on it?

Hard to say, and certainly not something laypeople can judge. U. of Chicago's department didn't think the answers justified tenure. However, if Drezner's work is substantial, then I'd bet he still has a good shot at a tenure-track offer somewhere respectable. If not a top-ten department, then perhaps a top-thirty department; or, at an absolute minimum, someplace he'd have the resources to do solid work.

The situation's roughly comparable to the Yankees' choosing not to renew the contract of some outfielder who's promising and productive, but not (yet) All-Star quality. Why would they do that? There are many possible reasons --- some having to do with the player, some having to do with the particular needs of the team --- but that player will probably get signed by someone else. And if he doesn't, then it's probably because the few available slots are all taken by other players who look more promising. Sorry, tough cookies, but you didn't make the cut. Most people don't. Life is harsh.


p.s. Further random observations that I couldn't integrate smoothly into the above post:

  • On a personal note, just so you know where I'm coming from, I am a Ph.D. candidate at an elite institution in my field, and I'm 100% sure that I am not going to become faculty at a major research university. I've discovered that I'm not cut out for that career. I'm not bitter about it, since my current ambitions lie in other directions. But I'm not going to dismiss academia's standards as bollocks, because I've seen up close what it takes, and it's frankly astonishing. The tenured professors in my department definitely possess qualities that I lack. Among the fellow students I've known, those who have garnered tenure-track positions at even top-thirty departments are also extraordinary people, and also have qualities that I lack. And, if I may permit myself a moment of ego, I think I'm not a dumb guy, at least when measured against the general population.
  • I'm not implying that elite academic success correlates with intelligence alone. Nor do I deny that plenty of people are as brilliant as the smartest professor, but won't or can't become academics. It takes a whole suite of qualities to be a successful researcher, and intelligence is only one of them. Then, too, choosing a career in academia requires a perverse and rare set of personal motivations. Furthermore, experts in any given field can obviously have huge blind spots about things outside their field (and even inside their field), so I'm also not implying that we should view elite professors as an infallible, sacred intellectual priesthood. What I'm saying is that dismissing academia's system for professional advancement as corrupt, anti-meritocratic, irrelevant, etc. simply doesn't pass muster.
  • Incidentally, all of the above explains why I have little patience for conservatives who cry "liberal bias" because they know of some allegedly brilliant people who didn't get tenure-track jobs at top universities. I've seen this sort of complaint pop up periodically on conservative blogs and such. Beyond the obvious fact that anecdotes don't demonstrate anything statistically meaningful, being merely brilliant by the standards of laypeople doesn't mean jack in academia. Even mediocre Ph.D. candidates are probably "brilliant" by that standard. The only relevant question is: are you brilliant by the cruelly exacting standards of your academic discipline? Is your publication record better than all but a handful of the other candidates in your field, nationwide, of the same year? If so, maybe you have cause to complain. Otherwise, once again, tough cookies.
  • The employment picture obviously gets radically different outside elite research institutions. I'd guess that in some fields, there's so much surplus talent that some terrific candidates still can't get elite jobs, and even [Random Obscure State University] will attract terrific people. In other fields, the surplus talent is smaller (or siphoned off by industry), and [ROSU] will get... well, the leftovers. In those cases, even candidates who don't possess all the marvelous qualities I describe can probably get a tenured job somewhere.

    To relate this slightly to the point above, my suspicion is that if conservative intellectuals really valued educational careers, then they'd likewise have no trouble getting some kind of tenured position somewhere. (Blog cognoscenti can probably think of, say, a prominent conservative blogger who fits this description.) But that's not what most conservative critics of academia are talking about. David Horowitz doesn't bother to profile the faculty at [ROSU]; he profiles the Ivies, for reasons too obvious to bother elaborating here.

  • If I had a deep-down blood-and-bones hatred of the English language, I might have contracted the phrase "blog cognoscenti" in the previous paragraph to... no, I just can't write it. Gaaagh.

Sunday, October 02, 2005

PITAC co-chair Lazowska on cybersecurity

For two years, University of Washington computer science professor Ed Lazowska served as co-chair of the Presidential Information Technology Advisory Committee. His two-year term ended in June 2005, and in a recent interview in CIO Magazine, he's got some things to say. There's so much juicy material in this interview that anyone even vaguely interested in national security or computer science should read it all, but some excerpts follow...

Lazowska doesn't pull any punches when discussing the Bush administration's approach to the issue. "In my opinion," he says, "this administration does not value science, engineering, advanced education and research as much as it should --- as much as the future health of the nation requires."

...

We see some of the effects of cybervulnerabilities on a daily basis on the front page of our newspapers: phishing attacks, pharming attacks, denial-of-service attacks and large-scale disclosure of credit card information. Even phishing attacks, which seem easy to dismiss as a gullibility problem, arise from the basic design of the protocols we use today, which make it impossible to determine the source of a network communication with certainty.

The public, and most CIOs, do not see many activities that are even more threatening. The nation's IT infrastructure is now central to the life of all other elements of the nation's critical infrastructure: the electric power grid, the air traffic control network, the financial system and so on. If you wanted to go after the electric power grid --- even the physical elements of the electric power grid --- then a cyberattack would surely be the most effective method. It's also worth noting that the vast majority of the military's hardware and software comes from commercial vendors. PITAC was told that 85 percent of the computing equipment used in Iraq was straight commercial. So the military itself is arguably about as vulnerable to a cyberattack as the civilian sector.

Now, the problem is that you can't suddenly decide that you want something like security and expect to be able to buy it, because the technology doesn't necessarily exist. Almost no IT company looks ahead more than one or two product cycles. And historically in IT, those ideas come from research programs that the federal government underwrites. Just think about e-commerce: You need the Internet, Web browsers, encryption for secure credit card transactions and a high-performance database for back-end systems. The ideas that underlie all of these can trace their roots to federally funded R&D programs.

That's how this relates to the R&D agenda. Long-range R&D has always been the role of the national government. And the trend, despite repeated denials from the White House to the Department of Defense, has been toward decreased funding for R&D. And of the R&D that does get funded, more and more of it is on the development side as opposed to longer-range research, which is where the big payoffs are in the long term. That's a more fundamental problem that CIOs aren't responsible for.

...

PITAC found that the government is currently failing to fulfill this responsibility. (The word failing was edited out of our report, but it was the committee's finding.)

Of course, given the Bush administration's track record, it's hardly surprising that they're failing here, but it's yet another data point. I also find their editing of the PITAC report pretty shameful; but, again, s.o.p. for the Bush gang.
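
As an aside, Lazowska's point that phishing is a protocol-design problem, not merely a gullibility problem, is easy to make concrete: SMTP as originally specified lets the sending client assert any identity it likes, and the receiving server has no built-in way to verify the claim. A minimal sketch (all the addresses here are placeholders; sender-authentication schemes like SPF and DKIM are precisely attempts to retrofit a fix):

    # Minimal illustration of why phishing is built into the protocol:
    # SMTP, as originally designed, simply trusts whatever "From" the
    # client asserts. All addresses below are placeholders.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "security@yourbank.example"  # arbitrary, unverified claim
    msg["To"] = "victim@example.com"
    msg["Subject"] = "Please verify your account"
    msg.set_content("(a phishing lure would go here)")

    # Nothing in the base protocol stops this; SPF and DKIM are later,
    # partial retrofits that were barely deployed as of 2005.
    with smtplib.SMTP("mail.example.com") as server:  # placeholder relay
        server.send_message(msg)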

Incidentally, in September 2005, PITAC's functions were absorbed into PCAST. However, I don't have the inside baseball on whether this means that IT concerns will be taken more seriously (because PCAST is a more important council) or less seriously (because IT will get buried in the millions of other things on PCAST's plate). My guess is that much will depend on the dynamics of the PCAST committee and its leadership.

For more about the state of cybersecurity in general, you might want to see the 12/02/04 lecture from Lazowska's course last year on public policy and IT (note that the slides alone won't give you the full impact; you really want slides and video together).


UPDATE 7 Oct.: Regarding the merger of PITAC and PCAST, the Computing Research Association blog, which is more clued in than I am, has similar questions. However, they think that, on balance, "the positives outweigh the negatives", so maybe things are looking up for IT policy in the US.