Monday, November 11, 2024

Town without Pity

 If you can't stand any more election stuff, skip this one.

 
This song (music by one of my favorite Old Hollywood bombastic composers, Dimitri Tiomkin) has been running through my mind for the last, oh, six days, since November 5. In case you don't want to listen to a 1961 pop song that doubled as a movie's title music: "No, it really isn't pretty / What a town without pity can do." The lyrics are about young lovers, but I'm listening to the part that goes something like "this crazy planet falls apart." 
 
She ran a great campaign, and we supported her with as many dollars as we could muster. Those who held back from voting, or voted again for the worst president in American history, have put us here, and here we are. 

So we're about to see an administration without pity, only this time with more juice from billionaires, a corrupt right-wing Supreme Court, and, as Mad Magazine used to say, "the usual gang of idiots." 

I'm sure you've all seen this great meme, from https://knowyourmeme.com/photos/1870771-leopards-eating-peoples-faces-party

The Leopards Eating People's Faces Party is in power now, and, to complete the pity theme, I am having a very tough time dredging up any pity whatsoever for people who voted for that man and are now in for a world of hurt--like the rest of us, of course, but at least we tried.

Hope you are all doing well.




Friday, November 01, 2024

MLA on AI: I promised I wasn't going to write more about it, but here we are

Internal monologue of the last 15 minutes: "You have papers to grade . . . don't look at that MLA AI report that you couldn't see the other day because its server crashed . . . papers to grade, remember? . . . don't do it!" And here we are. It is the Great MOOC Panic of 2015 all over again, and it is pure catnip to people with opinions.

So as you probably already heard, CCCC and the MLA have joined their unholy forces to weigh in on Generative AI. (I kid because I love.)  https://hcommons.org/app/uploads/sites/1003160/2024/10/MLA-CCCC-Joint-Task-Force-WP-3-Building-Culture-for-Gen-AI-Literacy-1.pdf

There are three of these working papers; this one is the latest. I did read through it, although probably to get into the spirit of things I should have fed it into an AI engine and asked for bullet points.

Some positive thoughts:

1. I appreciate the work that went into this, truly. There are thoughtful people on the board, and they have really tried to make guidelines that would be helpful. 

2. It's genuinely useful in distinguishing AI from Generative AI and other forms, and in laying out what each can and cannot do.

Some questions: 

1. Is it strongly promoting the use of GAI in every course? You betcha. I kind of see their point: they believe the wave of the future is training students to use it effectively, since the whole "help students write better on their own" ship has apparently sailed.

2. What is our role as educators in all this? 

  1. Training students to evaluate GAI for accuracy, which means that we--instructors--get to spend more time getting cozy with GAI and checking up on it as well as evaluating student papers. Two jobs for the salary of one!
  2. Teaching students 
    1. to evaluate GAI output for relevancy, bias, and data security, 
    2. to evaluate rhetorical situations where GAI is and isn't appropriate
    3. to write metacommentaries on their use of GAI in a paper
    4. to monitor how GAI helps (!) their development as writers. Yes, reading the GAI output and assessing it as well as assessing their papers: twice the grading fun.
  3. Toward the goals of #1 and #2, seek out more professional development opportunities about GAI, and "[r]ead current articles and popular nonfiction about AI as well as emerging Critical Artificial Intelligence (CAIL) scholarship" (10). Are you tired yet?

3.  Can you opt out?

Yes. "Respect the choice to opt out" (10). 

   BUT if you opt out and are contingent, could you lose your job? 

Also yes. "Some instructors may face consequences in hiring and evaluation processes when they opt out of teaching AI literacies in their classrooms, particularly when shared governance processes have determined department-wide uses for AI in the curriculum" (10).

4.  But if I'm just one instructor, can I decide it's not appropriate for my course? 

Theoretically, yes; in practice, probably not. The report strongly, and I mean strongly, advocates for program-wide and department-wide, if not university-wide, adoption of a consistent policy for integrating GAI training as a cohesive whole.

I agree that this should be done in a systematic, coherent fashion; something consistent is much better than a patchwork of individual policies. But will there be professional development time and funding devoted to it? 

5. I hear the tinkling of shiny "if you're not on board with the tech, you don't understand it" bells with this one. 

Faculty development meetings should be a space for building instructors’ conceptual knowledge about GAI literacies, helping them develop theory-informed pedagogical practices for integrating GAI into their teaching, and allowing them to experiment with GAI technologies and develop their technological skills.
Such gatherings can simultaneously address instructors’ resistance, fear, and hesitation about using GAI in their teaching while also recognizing that faculty development programs cannot make instructors experts in GAI, which is not an attainable goal given the fast-changing nature of these technologies.

 Translation: 

  • If you question it, it's because you fear it, which is stupid. You are stupid and not thinking correctly about this. 
  • We are telling you that this is the wave of the future, and if you don't get on board with a new technology, you are just plain wrong. 
  • If you have questions, you are wrong.
  • If you hesitate rather than swallowing this wholesale, you are wrong. 
  • You need to be persuaded, not listened to. Your fear and hesitation are not legitimate. They are resistance that needs to be overcome.

But this is not our first rodeo with the whole "look, it's shiny!" argument, is it? With MOOCs? With auto-graded essays? With Twitter? With every future-forward "get rid of the books" library?  

I'm not saying that it's wrong. I'm saying that rushing headlong into every new technology--tech enthusiast here, remember--without allowing for questions and a thoughtful assessment is what we keep doing, and I wonder if we will ever learn from our past experiences.



 

Thursday, October 24, 2024

A minor sign of hope after the AI maelstrom

 AI, and the students' use of it to generate papers, consumed far too much of my brain earlier this semester. I'm teaching online, so my usual expedient of having them write in class isn't an option. 

It was wearing me out, caught between worrying that I was letting them get away with something, thus disadvantaging honest students, and worrying that I wasn't living up to the syllabus by checking everything. It was making me discouraged with teaching.

Turnitin wasn't helpful, nor was GPTZero, the supposedly good AI-catcher. The results could be wildly at odds with each other if you ran the same paper twice in a row, unless something came up as 100% AI-generated. 

I called out a few students, per the syllabus. What that means: I had them talk to me. Many said it was Grammarly, which has gone heavily to AI, and said they wouldn't use it again. I am not anti-tech--eighteen years of blogging here should tell you that--but if they are not doing their own work, I can't help them make it better.

Then things started to get better. Aside from modifying the LMS discussion board and Perusall settings (no copy and paste, post your own reply before seeing others' responses--restrictions I added after a few students were copying from each other), I think what happened is this:

They realized that I was reading what they wrote. 

Now, I tell them this in the syllabus, but reading any syllabus, especially with all the required institutional boilerplate, is like reading the instructions for setting up a wireless router or, my favorite analogy, the Handbook for the Recently Deceased from Beetlejuice. 

Was it just adjusting the rubrics that made the difference? Maybe some. I discovered that having good criteria there would take care of the few AI-written posts, which naturally fell to the C- or D level.

But I like to believe it was the presence of a real person in those discussion boards, commenting and upvoting and mentioning students by name along with the specific things they did well. They know that there is a person behind the class.

And on their papers, addressing the specifics of what they had written, suggesting other ways to develop the argument, and so on.

And in answering their emails quickly and with a sense of their real concerns. 

What I noticed is that the AI boilerplate--so far, anyway--seems to have died down, and I've mostly stopped looking for it and thinking about it.

This may, of course, just be an artifact of its being five weeks from the end of the semester, or maybe I'm missing something.

But their writing seems more authentic, more like it used to be, and not that MEGO AI boilerplate.

With some of the professional organizations in the field throwing in the towel and writing guidelines not about whether we will use AI but about how extensively we ought to use it, I count my students' responses as a sign of hope. 

Maybe if we give them authentic feedback, as the MLA-CCCC report suggests, they will respond with authentic writing. 



Thursday, October 17, 2024

What authors (and characters) could learn from When Harry Met Sally

 I’m working on an author now who made choices in her life—and whose characters make choices—that make you want to yell “don’t do it!” This isn’t anything I have a right to have an opinion on, of course: a dramatic life, and characters who make choices that seem irrational to me, are the stuff of literature. 

But I keep wishing that the author, and her characters, had the benefit of watching at least two scenes in When Harry Met Sally.

These aren’t the main scenes, but they seem to echo from classic fiction all the way down to today. 

1. This is actually a series of scenes. Throughout the first half of the movie, Carrie Fisher's character, Marie, is having an affair with a married man. (I can't remember most of the other characters' names and so will refer to them by the actors' names.) 

She keeps bringing up evidence that he’s going to leave his wife for her. I’m paraphrasing, but the dialogue goes like this:

Carrie Fisher: “I was going through the receipts, and he just bought her a $300 nightgown. I don’t think he’s ever going to leave his wife.”

Meg Ryan: “No one thinks he is ever going to leave his wife.” 

Carrie Fisher: “You’re right, you’re right, I know you’re right.” 

The scene is repeated, with comic variations, until the double date when she meets Bruno Kirby. Spoiler: the married man never leaves his wife for Carrie Fisher.

2. The second scene occurs when Meg Ryan tells Billy Crystal that she and Steven Ford don't believe in marriage, that marriage isn't modern, that he is holding off proposing to her because he doesn't—they don't, she hastily amends—believe in marriage. 

Later, to no one’s astonishment, he breaks up with her and is married almost immediately. As she tells Billy Crystal, weeping, when he comes to her apartment to comfort her:

“He said he never wanted to get married. What he really meant is that he didn’t want to marry me.”

Now, not everyone wants to get married, and that's fine: equality, feminism, etc., etc. It's not always a good choice, but it is a choice that people get to make. 

But a thousand advice columns feature women who do want to get married and who hang on for years to a man who is just this close to proposing, they're sure, if they're just patient enough and give up the dream of having children because he doesn't like them, or whatever. The aftermath is always the same: they break up, and within a year or two he's married to someone else with a child on the way. Nora Ephron's home truth, that "he didn't want to marry me," is something the characters in this novel, and the author herself, could stand to learn. If men want to get married, they will find a way. If they don't, they won't. 

Patience may work for 19th-century heroines like Jane Eyre, but in the modern era—well, these characters should just watch When Harry Met Sally.

Wednesday, September 04, 2024

Random Bullets of Not Much News

 Happy September! Here's the not-much-news so far:

  • Classes have started, as usual prompting a mad scramble to get everything done. 
  • Also as usual, online classes take about 4x the preparation of in-person classes. 
  • And, in the "some things never change" department, I am spending all my time on teaching instead of my own writing or some manuscript reviews. Did I write 2000 words over the course of several hours? Yes. Are those words for recording a 20-minute class lecture that usually I'd be giving extemporaneously? Also yes.
  • All of a sudden the streets around here are filled with those tiny white Teslas. It looks like an airport rental parking lot. Did Elon send out a bat signal and make everybody buy them? 
  • Speaking of Tesla, in my part of the world, some people have pickup trucks for show but a lot more have them because they need a pickup truck for real. The other day I saw someone driving a Tesla truck and it looked . . . out of place? Cute? Ridiculous? I guess spotting Tesla trucks is old news now, but if I can catch one beside those giant harvesting machines that are all over the roads at this time of year, I'll post a picture. 
  • In non-Tesla news, apparently NaNoWriMo supports AI in its annual challenge and has a few other problems as well, as Chuck Wendig hilariously reports: https://terribleminds.com/ramble/2024/09/02/nanowrimo-shits-the-bed-on-artificial-intelligence/
  •  For the second year in a row, the government has managed to make the misery that is FAFSA even more miserable and difficult to navigate. It's going to have another "phased rollout" on the twelfth of never, or December, which are the same thing if you're a student trying to get financial aid. 
  • Also in education: After seeing chatter about it online, I read--well, skimmed--a long article in the Chronicle about drama in the English Department at Pomona, but the main takeaway was how rich the college is. 

How's the beginning of your semester? 

Friday, July 19, 2024

Is the true measure of AI-written content the MEGO test?

Our eyes are precious things, and they are also smart ones. I know they only transmit images--it's the brain that interprets--so maybe it's the combination that I'm talking about here. 

One of the tasks I'm doing right now requires a lot of concentration and is visually intensive (intense?).  I try to stop for breaks at intervals, but sometimes my eyes can't make it till the next break, so they get blurry and tears run down my cheeks. That's when I stop for the day. But as Laura Ingalls Wilder says about housework when she's pregnant in The First Four Years, "the work must go on, and she was the one who must do it," so I press on, but sometimes my eyes just plain close. 

So eye time is precious time, and I don't want to waste it unnecessarily. Necessary time-wasting: looking at pictures of old houses or gardens or something equally soothing. 

Unnecessary time-wasting: AI-written text.

We're probably all seeing the evidence of AI-written text on the web--wildly inaccurate howlers passing as "facts," weird word usages, etc. Are we reading it in the same way as human-generated writing, though?

Oddly enough, when I read an AI-cheerleading piece like the one at IHE now, or my students' AI-written work, my eyes have started to skim rapidly and, in fact, they glaze over. Is it because the text is generated by AI, or is it because it's not saying much?

That skimming effect, though--that's the MEGO test, from a term coined in (maybe) 1973, according to the New York Times. (I canceled my subscription, so I can't read it and tell you for sure.) 

 MEGO stands for My Eyes Glazed Over, and it's a reaction to boring, obvious text. From the article: "A true MEGO, however, is more than a soporific piece; it is an article written about a subject of great importance which resists reader interest."

Of course, other forms of text have failed the MEGO test before--AI in its current form didn't exist in 1973--but maybe AI has trained our brains to spot it. 

You scientists out there know that this can't be a real effect and that I can't be totally serious, but it's a thought experiment that's giving my eyes a little break before going back to the Big Task. 

 

Saturday, July 13, 2024

The tradwives of Stepford


 Because I'm habitually late to the party with social trends, I'm only now catching up to the tradwife phenomenon. According to the NYTimes, the New Yorker, etc., a tradwife performs the traditional gender role of being a housewife, now "new & improved" with a heavy dose of white Christian nationalism. 

I say "performs" because they seem to be mostly rich influencers who make a fetish out of tasks that many of us (raises hand) have been doing forever--baking bread, cooking from scratch, taking care of children, etc. These performances are apparently for the benefit of people who have the leisure and money and interest to spend time on TikTok and social media sites (lowers hand). 

The Reddit posts collected at BoredPanda provide a sobering counternarrative from women who lived this life a generation or two ago, and a lot of them focus on what seems the most obvious downside: without a way to make a living, what happens to the tradwife if the lord and master or whatever he's called loses his job, or is unable to work, or decides to take off with his secretary and abandon the family?

I thought the #1 lesson of feminism--and, oddly, of capitalism--was that without economic power you have no power. In fact, Ira Levin wrote a whole satirical horror novel about this, The Stepford Wives, which poses the question "what if men could have their wife fantasies fulfilled by replacing human women with robots?" The answer is, predictably, that men say "yes, please," and set about creating this utopia (which would include rolling back feminism, reproductive freedom, women working outside the home, and the rest), with a deeply bleak ending for women. (By the way, the original Katharine Ross movie was reasonably faithful to the book; the Nicole Kidman one was a mess that slapped on a happy ending.)

Levin was talking about what men would do, though. Why would women volunteer to be handmaidens/tradwives? I get why the influencers do it: there is not enough attention in the world, nor enough clicks, nor enough money, ever to fill that gaping void in their souls. But why would women sign up to have their rights curtailed more than they already have been in 2024?