Alas, a blog's recent series on wage discrimination and gender is quite informative; I hope someday the Alas folk will package this series up as a PDF, like David Neiwert's Rush series. The point that hits closest to home for me, as an apprentice scientist:
What the Nature study did was examine productivity (measured in terms of publications in scientific journals, how many times a person was a "lead author" of an article, and how often the articles were cited in scientific journals) and sex. Publication in peer-reviewed scientific journals is often considered to be the most objective and "concrete" sign of accomplishment in the sciences. These factors were then compared to how an actual scientific review panel measured scientific competence when deciding which applicants would receive research grants. Receiving grants like these is essential to the careers of scientific researchers.
The results? Female scientists needed to be at least twice as accomplished as their male counterparts to be given equal credit. For example, women with over 60 "impact points" - the measure the researchers constructed of scientific productivity - received an average score of 2.25 "competence points" from the peer reviewers. In contrast, men with fewer than 20 impact points also received 2.25 competence points. In fact, only the most accomplished women were ever considered to be more accomplished than men - and even then, they were only seen as more accomplished than the men with the very fewest accomplishments.
It probably wouldn't surprise the average layperson to learn that computer science is an overwhelmingly male profession. A 9-to-1 male to female ratio is a typical ballpark figure at all levels, from undergrad major enrollment through junior faculty, though it gets worse as you go up the ladder. Furthermore, my own specialization (programming languages and tools) is even more overwhelmingly male. When I go to the top conferences*, there will typically be a mere handful of women in a room of two hundred researchers.
I don't think these population numbers reflect disciplinary sexism. In fact, I think my profession's less sexist, on average, than society as a whole; and certainly the individual women researchers with whom I'm familiar are quite well-regarded. But the sobering studies cited by Alas do reflect disciplinary sexism (in science in general), and should lead us all to question the way we approach women and their work. Sexism in this context won't generally be a conscious act; it could be a mere statistical differential in the probability that we'll cite particular papers, chat with particular individuals at conferences, or discuss particular people's work with other people (the latter two are quite important for generating "buzz" around people's work).
This leads to an interesting question: should scientists practice a form of "personal affirmative action" for women researchers, whereby we consciously make an effort to pay extra attention to women and their research?
* Note for any curious scientists in other fields who may be reading this: conferences are a much bigger deal in computer science than in most other sciences. The year-or-more lag time for reviewing journal papers means that cutting-edge research is basically never published in journals. If you want to keep up, you must attend conferences, and publish your best research in them. As a result, the bar for conference publications is also much higher than in other fields: the top conferences in programming languages are well-known for rejecting even quite strong submissions because the competition's so intense.
Once you get a paper into a good conference, you may fill out the paper with all the details and tedious proofs, and attempt to publish the extended version in a journal; but this step's not strictly necessary to build a reputation. A fresh Ph.D. could get a faculty job at a top-ten institution without a single journal paper.