Friday, November 01, 2024

MLA on AI: I promised I wasn't going to write more about it, but here we are

Internal monologue of the last 15 minutes: "You have papers to grade . . . don't look at that MLA AI report that you couldn't see the other day because its server crashed . . . papers to grade, remember? . . . don't do it!" And here we are. It is the Great MOOC Panic of 2015 all over again, and it is pure catnip to people with opinions.

So, as you've probably already heard, CCCC and the MLA have joined their unholy forces to weigh in on Generative AI. (I kid because I love.) https://hcommons.org/app/uploads/sites/1003160/2024/10/MLA-CCCC-Joint-Task-Force-WP-3-Building-Culture-for-Gen-AI-Literacy-1.pdf

There are three of these working papers; this one is the latest. I did read through it, although, to get into the spirit of things, I probably should have fed it into an AI engine and asked for bullet points.

Some positive thoughts:

1. I appreciate the work that went into this, truly. There are thoughtful people on the board, and they have really tried to make guidelines that would be helpful. 

2. It's genuinely useful for distinguishing among AI, Generative AI, and other forms, and for laying out what each can and cannot do.

Some questions: 

1. Is it strongly promoting the use of GAI in every course? You betcha. I can kind of see it: they believe the wave of the future is training students to use it effectively, since the whole "help students write better on their own" ship has apparently sailed.

2. What is our role as educators in all this? 

  1. Training students to evaluate GAI for accuracy, which means that we--instructors--get to spend more time getting cozy with GAI and checking up on it, on top of evaluating student papers. Two for the salary of one!
  2. Teaching students
    1. to evaluate GAI output for relevancy, bias, and data security,
    2. to evaluate rhetorical situations where GAI is and isn't appropriate,
    3. to write metacommentaries on their use of GAI in a paper, and
    4. to monitor how GAI helps (!) their development as writers. Yes, reading the GAI output and assessing it as well as assessing their papers: twice the grading fun.
  3. Toward the goals of #1 and #2, seeking out more professional development opportunities about GAI and heeding the call to "[r]ead current articles and popular nonfiction about AI as well as emerging Critical Artificial Intelligence (CAIL) scholarship" (10). Are you tired yet?

3. Can you opt out?

Yes. "Respect the choice to opt out" (10). 

   BUT if you opt out and you're contingent faculty, could you lose your job?

Also yes. "Some instructors may face consequences in hiring and evaluation processes when they opt out of teaching AI literacies in their classrooms, particularly when shared governance processes have determined department-wide uses for AI in the curriculum" (10).

4. But if I'm just one instructor, can I decide it's not appropriate for my course?

Theoretically, yes; in practice, probably not. The report strongly, and I mean strongly, advocates program-wide and department-wide, if not university-wide, adoption of a consistent policy that integrates GAI training as a cohesive whole.

I agree that this should be done in a systematic, coherent fashion; a consistent approach beats a patchwork. But will there be professional development time and funding devoted to this?

5. I hear the tinkling of shiny "if you're not on board with the tech, you don't understand it" bells with this one. 

Faculty development meetings should be a space for building instructors’ conceptual knowledge about GAI literacies, helping them develop theory-informed pedagogical practices for integrating GAI into their teaching, and allowing them to experiment with GAI technologies and develop their technological skills.
Such gatherings can simultaneously address instructors’ resistance, fear, and hesitation about using GAI in their teaching while also recognizing that faculty development programs cannot make instructors experts in GAI, which is not an attainable goal given the fast-changing nature of these technologies.

Translation:

  • If you question it, it's because you fear it, which is stupid. You are stupid and not thinking correctly about this. 
  • We are telling you that this is the wave of the future, and if you don't get on board with a new technology, you are just plain wrong. 
  • If you have questions, you are wrong.
  • If you hesitate rather than swallowing this wholesale, you are wrong. 
  • You need to be persuaded, not listened to. Your fear and hesitation are not legitimate. They are resistance that needs to be overcome.

But this is not our first rodeo with the whole "look, it's shiny!" argument, is it? With MOOCs? With auto-graded essays? With Twitter? With every future-forward "get rid of the books" library?  

I'm not saying that it's wrong. I'm saying that rushing headlong into every new technology--tech enthusiast here, remember--without allowing for questions and a thoughtful assessment is what we keep doing, and I wonder if we will ever learn from our past experiences.



 
