Showing posts with label crowdsource. Show all posts

Friday, September 09, 2011

Hacking the Academy: Transformative? Feasible?

The shorter version of the free, crowdsourced book Hacking the Academy is now online (via Profhacker) at this site: http://www.digitalculture.org/hacking-the-academy/. I've been reading through the "Hacking Scholarship" part.

The whole essay or series of essays, if it's not too old-school a term to refer to them that way, is exciting; you can feel the energy that went into this project. It's also exciting to see put together in one place ideas that have been out on the blogosphere for some time. Here are some excerpts, with comments and questions:
  • "Say no, when asked to undertake peer-review work on a book or article manuscript that has been submitted for publication by a for-profit publisher or a journal under the control of a commercial publisher." (Jason Baird Jackson)
Cathy Davidson and other eminences may be able to get away with this, but if your university, like most, counts productivity in ways that engage with traditional publishing, this Bartleby "I would prefer not to" idea may not work.
  • "The idea that knowledge is a product, which can be delivered in an analog vehicle needs to be questioned. What the network shows us, is that many of our views of information were/are based on librocentric biases." (David Parry)
True, and again, something that's exciting and potentially liberating, although I confess to being librocentric (a librophiliac?). I don't know about this "knowledge as product in an analog vehicle," though. Haven't we been talking about alternative ways to exchange/preserve/present knowledge for at least the last 20 years or roughly the Internet age? That's how long I've heard about it, at any rate.
  • "In a world where the primary tools for finding new scholarship are tagged, social databases like Delicious and LibraryThing, the most efficient form of journal interface with the world might be a for journals to scrap their websites and become collective, tagging entities." (Jo Guldi) Guldi goes on to suggest a "wikification" that would allow a journal article to be crowdsource-reviewed for a year and to disappear if the author didn't make it a stronger article as a result.
Again, another interesting idea. Here the "survival of the fittest" ethos usually considered to be the province of official peer reviewers is crowdsourced--still Darwinian, in that a few will survive but many perish, but more democratic, maybe. Someone else suggested that reviews will still be "invited," so there will still be a hierarchy.

Meanwhile, the article dangles in the wind for a year, and if it is deemed insufficiently improved (by whom?) it disappears and the now publicly humiliated author . . . does what? Takes it off his or her cv, if it was on there to begin with? At what point does it count as "published," if we will still even have that category of evaluation?
  • "But the key point is that we need to take back our publications from the market-based economy, and to reorient scholarly communication within the gift economy that best enables our work to thrive. We are, after all, already doing the labor for free—the labor of research, the labor of writing, the labor of editing—as a means of contributing to the advancement of the collective knowledge in our fields." (Kathleen Fitzpatrick)
Can I get a big "amen"?
  • "But, as Cathy Davidson has noted, 'the database is not the scholarship. The book or the article that results from it is the scholarship.'” (Mills Kelly)
True--and yet what about the work that goes into establishing, curating, and mounting a database for use, not to mention the technical details? Kelly says, rightly, that it's not considered scholarship if it doesn't make an argument. Isn't the selection of texts and choice of access media a form of argument or at least an intellectual labor?

More to the point: Kelly never says this and never puts it in this way, but I'm uncomfortable with what could be seen as a distinction between worker bees who create the database and the "real scholars" who use it. Don't we value editions? Why should a database be less valued? Tom Scheinfeldt provides an answer for this:
  • "At the very least, we need to make room for both kinds of digital humanities, the kind that seeks to make arguments and answer questions now and the kind that builds tools and resources with questions in mind, but only in the back of its mind and only for later." (Tom Scheinfeldt)


Anyway, even if you don't agree with all of it, it's an exciting way to think about the possibilities of scholarship, so go read it.

Your thoughts?

Saturday, August 08, 2009

Crowdsourcing in peer review

Michele's comment in response to the crowdsourcing post made me think again about Kathleen Fitzpatrick's post at Planned Obsolescence on "The Cost of Peer Review." The whole post is worthwhile, but here's the essence of it:
    In that case, we’d be much better served, I believe, by eliminating pre-publication peer review. Perhaps the journal’s editorial staff reads everything quickly to be sure it’s in the most basic sense appropriate for the venue (i.e., written in the right language, about a subject in the field, not manifestly insane), but then everything that gets past that most minimal threshold gets made available to readers — and the readers then do the peer review, post-publication.

This is an attractive idea in some ways, but I can also see some problems with it.
  • Especially in the sciences, wouldn't there be a risk of factionalism--that is, supporters of the author giving the article a thumbs-up and opponents of his or her theories a thumbs-down? I say "the sciences" because I understand that STEM disciplines place a great deal of weight on citation counts, as well as grants, as indicators of professional development.
  • Print publications have only so much space, a limitation they've carried over to their (subscription) web versions. At first this seems like a pointless restriction--why limit an issue to 5 articles when the server could hold 100?--but lifting it could trigger the same consumer behavior that sets in when people face too many choices. Research suggests that people offered a huge number of options are less likely to buy, say, a jar of jam than those offered only 5 or so, because they're overwhelmed by the numbers. If you're confronted with a table of contents even 50 items long, are you more or less likely to read all the articles and vote on them?
  • Let's say that you've decided that you really want to find out about this subject and are going to read some of the articles; wouldn't the arrangement of the articles count? For example, would an essay by Amy Aardvark get read more than one by Zeb Zebra? Would people just go through looking for famous names? Or, if the ratings system made some articles rise to the top of the Table of Contents, wouldn't those lower in the pack get less attention? Would people be tempted to game the ratings system to get their article placed higher? In a perfect world this wouldn't happen, but at Amazon this happens a lot, apparently.
  • Here's another scenario: although the English Romantics aren't your field, you're teaching an entry-level introduction to literature class and you want to find out some of the best current ideas about Coleridge's "Christabel" for Wednesday's class. You go to a special issue on this subject and are confronted with 30-50 essays. You could read them all to decide, but you have to move on to "The Rime of the Ancient Mariner" on Friday, and life's too short. What do you do? How do you decide?
  • In keeping with the need for numbers in "crowdsourcing," what if you're in a small field where only about a dozen people are expert enough to judge, but the field is hot or trendy enough (or the author or subject popular enough) to attract lots of readers/voters who don't know it as well?
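That last worry is easy to see with a toy tally. This is only a sketch, with invented numbers and a deliberately naive, unweighted thumbs-up/thumbs-down count (no one's actual proposal):

```python
# Toy illustration of the "small field" worry: with naive, unweighted
# voting, a flood of casual voters can swamp the verdict of the handful
# of people expert enough to judge. All numbers here are invented.

def approval(votes):
    """Fraction of thumbs-up votes in a list of (is_expert, thumbs_up) pairs."""
    return sum(1 for _, up in votes if up) / len(votes)

experts = [(True, False)] * 10 + [(True, True)] * 2   # most experts object
casual = [(False, True)] * 100                        # casual fans approve

print(round(approval(experts), 2))           # experts alone: 0.17
print(round(approval(experts + casual), 2))  # experts plus crowd: 0.91
```

The same article goes from mostly rejected to overwhelmingly endorsed without a single expert changing a vote.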

These are just questions. I don't have any answers.
Sunday, August 02, 2009

Crowdsource Grading

From Cathy Davidson's "How to Crowdsource Grading" at HASTAC via The Chronicle:
    So, this year, when I teach "This Is Your Brain on the Internet," I'm trying out a new point system. Do all the work, you get an A. Don't need an A? Don't have time to do all the work? No problem. You can aim for and earn a B. There will be a chart. You do the assignment satisfactorily, you get the points. Add up the points, there's your grade. Clearcut. No guesswork. No second-guessing 'what the prof wants.' No gaming the system. Clearcut. Student is responsible.

    And how to judge quality, you ask? Crowdsourcing. Since I already have structured my seminar (it worked brilliantly last year) so that two students lead us in every class, they can now also read all the class blogs (as they used to) and pass judgment on whether they are satisfactory. Thumbs up, thumbs down. If not, any student who wishes can revise. If you revise, you get the credit. End of story. Or, if you are too busy and want to skip it, no problem.
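Mechanically, the scheme Davidson describes is just a points ledger and a lookup chart. A toy sketch, in which the assignment names, point values, and grade thresholds are all invented for illustration:

```python
# Toy sketch of a point-based ("contract") grading scheme: satisfactory
# work earns points, points add up, and the total maps mechanically to a
# grade. Assignments, point values, and thresholds are invented here.

ASSIGNMENT_POINTS = {"weekly blog": 5, "class leadership": 15, "final project": 30}

def total_points(completed):
    """Sum points for assignments the student reviewers judged satisfactory."""
    return sum(ASSIGNMENT_POINTS[name] for name in completed)

def letter_grade(points):
    """No guesswork: the chart alone determines the grade."""
    if points >= 90:
        return "A"
    if points >= 75:
        return "B"
    if points >= 60:
        return "C"
    return "Incomplete"

full_contract = ["weekly blog"] * 12 + ["class leadership", "final project"]
print(letter_grade(total_points(full_contract)))  # do all the work, get an A
```

The interesting questions, of course, are all hiding in the "judged satisfactory" step, not in the arithmetic.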


This sounds lovely, in theory. But, as usual, I have a few questions:
1. Since this is "mastery grading" rather than "quality grading," wouldn't this be one of the cases where A = Adequate rather than excellent? Some professors don't have a problem with that, of course, but it makes me uncomfortable, since the professor is the one ultimately putting the A on the gradesheet.
2. What about the retro soul who, having paid Duke's high tuition, wants to know what an outstanding scholar like Cathy Davidson thinks rather than what his or her peers think? One comment I used to get from time to time when I relied heavily on group work was "I'm paying to see what the experts think, not what my classmates think, about my work," and there's some justice in that position.
3. As a corollary of the previous point (and this comes up in Jane Tompkins's A Life in School, too, where a similar method is described): do students ever get curious about what exactly the professor is doing to earn her salary? I don't think this is a question that ought to be posed, but I wonder whether students think about it anyway.
4. So there are no petty jealousies, no cutthroat grad students, and no factions that might influence a student's willingness to make someone rewrite a post? Personally, of course, I don't know grad students who would behave this way, but there's a lot of trust involved in this system.

Thoughts?