Showing posts with label AI. Show all posts

Saturday, June 21, 2025

AI is to Writing as Cheez-Whiz is to Cheese (File under “cranky rantsmanship”)

 


Hear me out. I promise you that this is written by a real person, and possibly a Cassandra.

Exhibit A: A long time ago, I had Feelings about the incessant cheerleading for Twitter, first from the media and then from colleagues, who went on and on and on about it. (That was in the Before Times, before Twitter became Evil X.) My academic compatriots all but posted signs saying "Twitter will save the world, or academic discourse anyway." Now we are all on Bluesky, which alarms right-wing pundit Megan McArdle, she of the "If you're old and poor, sucks to be you" school of economic growth, because there is no profit in it for her. But for a time it did seem that Twitter could be great, and Bluesky, pace McArdle, might get there yet. 

But the point was, the cheerleading was too much, as it had been for other tech that was supposed to transform teaching. 

Exhibit B: Enter the MOOC. Remember them and the quaint old days they represented? They too were going to transform education in all the best ways--turning us into glorified tutors for the MOOC 'n' Bake classrooms we would all be grading for--not teaching, just grading, because that is why everyone wants to teach. UCLA is already going there: an AI + MOOC combination that will transform the world. 

Now to Exhibit C: AI and writing. I've already complained about the mind-numbing effects of reading AI-generated prose and the lengthy knuckling under that the MLA has done in bowing to our new environment-destroying overlords. 

But now I'm seeing professional writers (no names, of course) embrace it as an idea factory. All they have to do is clean it up a bit to mimic their voice and bingo, there's a Substack or blog post. 

I read a few, and the whole "idea factory" thing? Not so much. If what ChatGPT 4.0 or Claude or Grok or any of the other idea factories generate is what you consider ideas, then okay. Whatever helps to monetize the site. You do you. If your idea of writing is tweaking some very anodyne non-content, then go for it. 

But all AI-generated prose, in a perfect world, would have a disclaimer: "AI wrote this, so decide whether you want to spend 3 or 5 or 10 minutes of your only precious brain life in reading it." 

Oh, and is it good for your brain to use AI to write for you? Maybe not.

https://www.psychologytoday.com/us/blog/urban-survival/202506/how-chatgpt-may-be-impacting-your-brain 

If you want to eat Cheez-Whiz, it definitely has its uses. But don't pretend that it's a true aged cheddar like you'd eat with fresh apples. 

Same goes for AI writing.

 

Sunday, December 08, 2024

Brave new AI world: UCLA comp lit course to be fully AI (except the grading, of course)

 https://newsroom.ucla.edu/stories/comparative-literature-zrinka-stahuljak-artificial-intelligence?fbclid=IwY2xjawHCkWJleHRuA2FlbQIxMQABHevSkyx0i5RjId-pd4g4_W-cs4zgTUOz19XwrxVpLU7LD2jt9E4FEkX7Jw_aem_LRrvJbHS0TFgq9uokRfYFw

So here we are: some kind of highly monetized tech company called Kudu is going to take Professor Zrinka Stahuljak’s comp lit course materials and turn them into a fully AI-driven course—except for the grading, of course, which will be left to Teaching Assistants. 

Benefits to students? Consistency across multiple sections. 

Benefits to Teaching Assistants? They get to work on the students’ writing (autocorrect wants to make this “writhing,” and I kind of get it).

Benefits to the Professor? No additional work except for submitting grades. She doubtless has tenure and is not worried about her job. 

Any downsides? 

1. Well, the TAs are going to be stuck reading writing that is likely to be about 95% AI generated (ChatGPT, Claude, Google Notes or whatever) and maybe 5% from Chegg or Course Hero. And they will get to track it all down to report to Academic Standards.

2. If you were a student, would you put forth your best effort, knowing that everything was canned and your professor was never going to look at any of it? If your professor was like the God in the clouds in Monty Python and the Holy Grail, who, having once generated content, just says "Get on with it!"? Look, I teach online often, and one of the hardest but most rewarding parts is talking to students as real person to real person. What if you remove even that layer of connection?

3. Who’s going to answer their questions? AI. 

4. Oh, and the textbook that all the AI content will be generated from? Is it OER? Oh, you sweet summer child; of course it's not. It's going to cost the students an additional $25 per semester. 

5. If this is the wave of the future—the MOOC of the future, if you will—what about intellectual property? 

Questions:

1. Is this going to cost the university more than simply having a course in D2L, Canvas, or Blackboard? 

2. Since Kudu is compensating the professors, how does this affect their employment with the university? 

3. Cui bono? Kudu and the University’s bottom line, probably. But what about the students? What about the connections that teaching a humanistic subject is supposed to foster? 

4. Out of state tuition at UCLA is 43K a year. Would you send your kid to be taught by a bot? Call me when Bezos and the other billionaires and millionaires send their students to be taught by nonhuman objects instead of people. 

Friday, November 01, 2024

MLA on AI: I promised I wasn't going to write more about it, but here we are

 Internal monologue of the last 15 minutes: "You have papers to grade . . . don't look at that MLA AI report that you couldn't see the other day because its server crashed . . . papers to grade, remember?  . . . don't do it!" and here we are. It is the Great MOOC Panic of 2015 all over again, and it is pure catnip to people with opinions.

So as you probably already heard, CCCC and the MLA have joined their unholy forces to weigh in on Generative AI. (I kid because I love.)  https://hcommons.org/app/uploads/sites/1003160/2024/10/MLA-CCCC-Joint-Task-Force-WP-3-Building-Culture-for-Gen-AI-Literacy-1.pdf

There are three of these working papers; this one is the latest. I did read through it, although probably to get into the spirit of things I should have fed it into an AI engine and asked for bullet points.

Some positive thoughts:

1. I appreciate the work that went into this, truly. There are thoughtful people on the board, and they have really tried to make guidelines that would be helpful. 

2. It's really useful for distinguishing between AI and Generative AI and other forms as well as what they can and cannot do.

Some questions: 

1. Is it strongly promoting the use of GAI in every course? You betcha. I kind of see it, since they believe the wave of the future is training students to use it effectively, since the whole "help students to write better on their own" ship has apparently sailed.

2. What is our role as educators in all this? 

  1. Training students to evaluate GAI for accuracy, which means that we--instructors--get to spend more time getting cozy with GAI and checking up on it as well as evaluating student papers. Two for the salary of one!
  2. Teaching students 
    1. to evaluate GAI output for relevancy, bias, and data security, 
    2. to evaluate rhetorical situations where GAI is and isn't appropriate,
    3. to write metacommentaries on their use of GAI in a paper,
    4. to monitor how GAI helps (!) their development as writers. Yes, reading the GAI output and assessing it as well as assessing their papers: twice the grading fun.
  3. Toward the goals of #1 and #2, seek out more professional development opportunities about GAI, and "[r]ead current articles and popular nonfiction about AI as well as emerging Critical Artificial Intelligence (CAIL) scholarship" (10). Are you tired yet?

3.  Can you opt out?

Yes. "Respect the choice to opt out" (10). 

   BUT if you opt out and are contingent, could you lose your job? 

Also yes. "Some instructors may face consequences in hiring and evaluation processes when they opt out of teaching AI literacies in their classrooms, particularly when shared governance processes have determined department-wide uses for AI in the curriculum" (10).

4.  But if I'm just one instructor, can I decide it's not appropriate for my course? 

Theoretically, yes; in practice, probably not. The report strongly, and I mean strongly, advocates adopting a consistent, program-wide and department-wide (if not university-wide) policy of integrating GAI training as a cohesive whole.

I agree that this should be done in a systematic or coherent fashion, and it's much better to have something consistent. Will there be professional development time and funding devoted to this? 

5. I hear the tinkling of shiny "if you're not on board with the tech, you don't understand it" bells with this one. 

Faculty development meetings should be a space for building instructors’ conceptual knowledge about GAI literacies, helping them develop theory-informed pedagogical practices for integrating GAI into their teaching, and allowing them to experiment with GAI technologies and develop their technological skills.
Such gatherings can simultaneously address instructors’ resistance, fear, and hesitation about using GAI in their teaching while also recognizing that faculty development programs cannot make instructors experts in GAI, which is not an attainable goal given the fast-changing nature of these technologies

 Translation: 

  • If you question it, it's because you fear it, which is stupid. You are stupid and not thinking correctly about this. 
  • We are telling you that this is the wave of the future, and if you don't get on board with a new technology, you are just plain wrong. 
  • If you have questions, you are wrong.
  • If you hesitate rather than swallowing this wholesale, you are wrong. 
  • You need to be persuaded, not listened to. Your fear and hesitation are not legitimate. They are resistance that needs to be overcome.

But this is not our first rodeo with the whole "look, it's shiny!" argument, is it? With MOOCs? With auto-graded essays? With Twitter? With every future-forward "get rid of the books" library?  

I'm not saying that it's wrong. I'm saying that rushing headlong into every new technology--tech enthusiast here, remember--without allowing for questions and a thoughtful assessment is what we keep doing, and I wonder if we will ever learn from our past experiences.



 

Thursday, October 24, 2024

A minor sign of hope after the AI maelstrom

 AI, and the students' use of it to generate papers, consumed far too much of my brain earlier this semester. I'm teaching online, so my usual expedient of having them write in class isn't an option. 

It was wearing me out, between worrying that I was letting them get away with something, and thus disadvantaging honest students, and worrying that I wasn't living up to the syllabus by checking everything. It was making me discouraged with teaching.

Turnitin wasn't helpful, nor was GPTZero, the supposedly good AI-catcher. The results could be wildly at odds with each other if you tried it twice in a row, unless something was coming up 100% AI generated. 

I called out a few students, per the syllabus. What that means: I had them talk to me. Many said it was Grammarly, which has gone heavily to AI, and said they wouldn't use it again. I am not anti-tech--eighteen years of blogging here should tell you that--but if they are not doing their own work, I can't help them make it better.

Then things started to get better. Aside from modifying the settings in the LMS discussion boards and Perusall (no copy and paste, and post your own reply before seeing others'--restrictions I added after a few students were copying from each other), I think what happened is this:

They realized that I was reading what they wrote. 

Now, I tell them this in the syllabus, but reading any syllabus, especially with all the required institutional boilerplate, is like reading the instructions for setting up a wireless router or, my favorite analogy, Beetlejuice's Guide for the Recently Deceased. 

Was it just adjusting the rubrics that made the difference? Maybe some. I discovered that having good criteria there would take care of the few AI-written posts, which naturally fell down to the C- or D level.

But I like to believe that what made the difference was a real person in there, in those discussion boards, commenting and upvoting and mentioning by name the students and the specific things that they did well. They know that there is a person behind the class.

And on their papers, addressing the specifics of what they had written, suggesting other ways to develop the argument, and so on.

And in answering their emails quickly and with a sense of their real concerns. 

What I noticed is that the AI boilerplate--so far, anyway--seems to have died down, and I've mostly stopped looking for it and thinking about it.

This may, of course, just be an artifact of its being five weeks from the end of the semester, or maybe I'm missing something.

But their writing seems to be more authentic, more as it used to be, and not that MEGO AI boilerplate.

With some of the professional organizations in the field throwing in the towel and writing guidelines not about if we will use AI but how extensively we ought to use it, I count my students' responses as a sign of hope. 

Maybe if we give them authentic feedback, as the MLA-CCCC report suggests, they will respond with authentic writing. 



Friday, July 19, 2024

Is the true measure of AI-written content the MEGO test?

Our eyes are precious things, and they are also smart ones. I know they only transmit images--it's the brain that interprets--so maybe it's the combination that I'm talking about here. 

One of the tasks I'm doing right now requires a lot of concentration and is visually intensive (intense?).  I try to stop for breaks at intervals, but sometimes my eyes can't make it till the next break, so they get blurry and tears run down my cheeks. That's when I stop for the day. But as Laura Ingalls Wilder says about housework when she's pregnant in The First Four Years, "the work must go on, and she was the one who must do it," so I press on, but sometimes my eyes just plain close. 

So eye time is precious time, and I don't want to waste it unnecessarily. Necessary time-wasting: looking at pictures of old houses or gardens or something equally soothing. 

Unnecessary time-wasting: AI-written text.

We're probably all seeing the evidence of AI-written text on the web--wildly inaccurate howlers passing as "facts," weird word usages, etc. Are we reading it in the same way as human-generated writing, though?

Oddly enough, when I read an AI-cheerleading piece like the one at IHE now, or my students' AI-written work, my eyes have started to skim rapidly and, in fact, they glaze over. Is it because the text is generated by AI, or is it because it's not saying much?

That skimming effect, though--that's the MEGO test, from a term coined in (maybe) 1973, according to the New York Times. (I canceled my subscription, so I can't read it and tell you for sure.) 

 MEGO stands for My Eyes Glazed Over, and it's a reaction to boring, obvious text. From the article: "A true MEGO, however, is more than a soporific piece; it is an article written about a subject of great importance which resists reader interest."

Of course, other forms of text have failed the MEGO test before--AI in its current form didn't exist in 1973--but maybe AI has trained our brains to spot it. 

You scientists out there know that this can't be a real effect and I can't be totally serious, but it's a thought experiment that's giving my eyes a little break before going back to the Big Task. 

 

Wednesday, March 08, 2023

Can AI (ChatGPT) Replace Writing?

AI writing (ChatGPT, etc.) is supposed to be transformative and all, the Washington Post tells me.

 The New York Times  worries about jobs; no worries, says the Washington Post, since AI requires "chat whisperers" to get the best from the software. CNET tried publishing articles written using it, but oops--too many mistakes.

AI can also spark creativity, apparently, although the examples most outlets choose to illustrate the creative problem-solving sound like obvious solutions. It's more of a springboard to creativity than a solution to writing, as scifi magazines have found out since they were flooded with AI-written stories and had to stop submissions for a while.

But what can the writing part of AI do well as of this writing?

1. Perfect C essays, all correct grammar, platitudes, puffery, and no substance.

2. Mission statements, ditto. 

There are surely nuances I'm missing, but everything I've seen so far either when testing it myself or when reading about it has been pretty much on the spectrum from 1 to 2.

It will get better, but ChatGPT--which, yes, is in its infancy--is so far like a bar trick--you know, where someone puts a quarter inside a bottle or something by magic means, or folds a dollar bill in a certain way to win a bet. I can't be more specific because I mostly only see this on Better Call Saul, but you know what I mean. 

There's been a lot of hand-wringing about how this might affect the classroom: one IHE writer calls it a "plague on education"  while others suggest creative uses for it. Some instructors have started using it to get students to question their assignments and test its limits. 

I'm kind of eager to see how this might work in the classroom next year. What about you?

Edited to add: Here's the first paragraph of what it wrote when I asked it to write like me. Like looking in a mirror, don't you think? This blog is all about the peppy self-help pitch, am I right?

Write a blog post in the style of notofgeneralinterest.blogspot.com 


"The Art of Doing Nothing" Greetings and salutations, my dear readers! Today, I would like to talk about the lost art of doing nothing. Yes, you heard that right, doing nothing! In our fast-paced world, it seems like everyone is in a rush to do something, be it work, hobbies, or socializing. The constant stimulation and activity can be exhausting and leave us feeling drained. But, what if I told you that taking a break from it all and doing nothing can actually be beneficial for our health and well-being? Studies have shown that taking time to do nothing can reduce stress, increase creativity, and improve our overall mental and physical health. It's a simple concept, but often overlooked. So, how exactly can we incorporate this into our daily lives? One way to start is by simply taking a few minutes each day to sit quietly and do nothing. No phone, no TV, no distractions.