Saturday, August 08, 2009

Crowdsourcing in peer review

Michele's comment in response to the crowdsourcing post made me think again about Kathleen Fitzpatrick's post at Planned Obsolescence on "The Cost of Peer Review." The whole post is worthwhile, but here's the essence of it:
In that case, we’d be much better served, I believe, by eliminating pre-publication peer review. Perhaps the journal’s editorial staff reads everything quickly to be sure it’s in the most basic sense appropriate for the venue (i.e., written in the right language, about a subject in the field, not manifestly insane), but then everything that gets past that most minimal threshold gets made available to readers — and the readers then do the peer review, post-publication.

This is an attractive idea in some ways, but I can also see some problems with it.
  • Especially in the sciences, wouldn't there be a risk of factionalism--that is, supporters of the author giving the article a thumbs-up and those opposed to his or her theories giving it a thumbs-down? I say "the sciences" because I understand that STEM disciplines place a lot of weight on citation counts, as well as grants, as indicators of professional standing.
  • Print publications have only so much space, a limitation that they've carried over to their (subscription) web versions. While this limit seems like a bad thing at first--why stop at 5 articles when the server can hold 100 for the same issue?--removing it could fall prey to the same consumer behavior that occurs when people are confronted with too many choices. Recent research suggests that people offered a huge number of choices are less likely to buy, say, a jar of jam than those offered only 5 or so, because they're overwhelmed by the options. If you're confronted with a Table of Contents even 50 items long, are you more or less likely to read all the articles and vote on them?
  • Let's say that you've decided that you really want to find out about this subject and are going to read some of the articles; wouldn't the arrangement of the articles count? For example, would an essay by Amy Aardvark get read more than one by Zeb Zebra? Would people just go through looking for famous names? Or, if the ratings system made some articles rise to the top of the Table of Contents, wouldn't those lower in the pack get less attention? Would people be tempted to game the ratings system to get their article placed higher? In a perfect world this wouldn't happen, but at Amazon this happens a lot, apparently.
  • Here's another scenario: although the English Romantics aren't your field, you're teaching an entry-level introduction to literature class and you want to find out some of the best current ideas about Coleridge's "Christabel" for Wednesday's class. You go to a special issue on this subject and are confronted with 30-50 essays. You could read them all to decide, but you have to move on to "The Rime of the Ancient Mariner" on Friday, and life's too short. What do you do? How do you decide?
  • In keeping with the need for numbers in "crowdsourcing": what if you're in a small field where only about a dozen people are really expert enough to judge, but the field is hot or trendy enough (or the author or subject popular enough) to attract a lot of readers/voters who don't know it as well?

    These are just questions. I don't have any answers.
    5 comments:

    heu mihi said...

    I think, for all the reasons that you mention, that crowdsourcing (a new word to me) is a bad idea in this case. Or at least too flawed to fly at this point. Besides, I'm generally skeptical of using amorphous voting procedures (or, really, consumer ratings) as a measure of something's worth. Not that peer review is necessarily any better, I suppose.... Perhaps it's just the elitist in me (or the one who fears popularity contests of any kind), but rating by acclaim seems arbitrary at best.

    Oh, and what about the public shame of having a not-well-rated article? I would MUCH rather get rejected outright (privately) than published and then scorned in front of my whole academic community!

    Sisyphus said...

    These are all good points that I hadn't thought of --- my complaints about open peer review are more on the labor side --- will we really all want to commit to this at the level of academic rigor (as opposed to "hey I loved your cat pic") for free?

    Also, the best part of the current process is having a professional editor --- someone who is trained and paid to do nothing but nurture our books and arguments into the best possible works they can be --- although I understand this is rapidly going away at U presses, and journal editors are very overstretched. But I think someone with the attitude, and skills, of a person who is paid to do nothing but think about how a work they like can be made better (and who does this all the time, for a living) will comment in a very different way than the crowdsourcing method would produce.

    Also, I heard Fitzpatrick speak once and she prefaced her discussion of her site with the apology that it was currently down because of hacking attacks and was being fixed by the IT guy. So _really_, this is not about making the process free but shifting salaries over from the editorial side to the computer support side, as if those people needed more jobs created for them!

    Sorry, this topic is a sore spot for me --- maybe I need to take this big fat comment and make it a post.

    Kate said...

    Actually, what you describe is exactly what some science journals are doing (spearheaded largely by PLoS): the editors scan for technical correctness and then publish. They have print version, they set up online journal clubs and have a place where you can comment on manuscripts. So, like you said, scientists are commenting post-publication in a way they used to pre-publication. I'd rather get my ideas out there and start to discuss them with my colleagues than have to sit on my ideas for months or years while one reviewer picks and picks at the wording of a section.

    At the same time peer review has helped the quality of my manuscripts in the past. Perhaps there is a nice in-between stage where the pre-pub process isn't as insane and time-consuming, and the post-pub is a place where people feel free to critique articles?

    Kate said...

    Argh: "They have print version" should be "They have NO print version." Sorry!

    undine said...

    heu mihi, I hadn't thought about the public shame idea, but what a thought! I'm also thinking ahead: what happens when you list something as a publication and the tenure & promotion committee checks it out, only to see it festooned with devastating comments and a low rating?

    Sis, I don't know; we sure put a lot of thought into those "love your cat pic LOL" comments. The other thing is that if the comments are anonymous, they might tend toward snark, if the standards of the internet are anything to judge by, and if they're not, well, who's going to commit to criticizing a potential colleague/grant-giver/employer under his or her own name?

    Kate, I'd love to see more about how this works in the sciences. Is there a journal that does this especially well? Are the humanists all getting worried over nothing?