It is Mark Twain's birthday--happy birthday, Mark!--and I am reading, or rather listening to, Ron Powers's biography of Twain as I drive and walk each week. I'm now in 1907, when Twain is basically past his authorship phase and well into his very creditable phase of anti-imperialist writings. And since it is 1907, an epic brouhaha is about to ensue with Isabel Lyon, his secretary/companion, whose motives & actions have fueled considerable debate among Twainiacs for decades.
But one of the key takeaways from any Twain bio is just how convinced he was of his own smarts as a businessman and just how much money that belief cost him: hundreds of thousands sunk first into the Paige Compositor (which bankrupted him), then into the miracle food Plasmon, then into a new Jacquard-loom printer for which there was no market. He would say he'd been a fool before, but this new thing would really work and make his fortune--once it was perfected, of course.
I'm reminded of this because one of the big tech bro billionaires asserted recently that AI would revolutionize learning because we could teach children to focus on all the big picture stuff--big thoughts--and leave all the random junk like facts to AI. Since AI spreads literal fabrications like Johnny Appleseed on a mission, I thought he was kidding, but no.
I've heard this "teach kids to reason abstractly and the rest will follow" stuff for 30 years, and you know what? Without actual facts or contexts, they can't draw any "big picture" abstractions that are worth anything. No one can. The reasoning that happens without a grounding in facts and contexts is nonsense.
And since nature abhors a vacuum, metaphorically speaking, we have become all too aware of what happens when we stop dealing with facts and the news media stop reporting them, becoming either a mouthpiece for Jeff Bezos or worse. If we don't have facts and don't teach people how to think critically about the inferences we can draw from them, here's what happens: people make up their own facts, with "big thoughts" that emerge from their worst fears and impulses. (Insert your own example here.)
But the tech bros and ed tech bros aren't buying that AI can't do it all. As Michael Clune says in The Atlantic, trying to warn us about its limits:
We don’t have good evidence that the introduction of AI early in college helps students acquire the critical- and creative-thinking skills they need to flourish in an ever more automated workplace, and we do have evidence that the use of these tools can erode those skills.
There's that pesky word again--"evidence."
And Charlie Warzel puts it even more bluntly:
We are waiting because a defining feature of generative AI, according to its true believers, is that it is never in its final form. Like ChatGPT before its release, every model in some way is also a “low-key research preview”—a proof of concept for what’s really possible. You think the models are good now? Ha! Just wait. Depending on your views, this is trademark showmanship, a truism of innovation, a hostage situation, or a long con. Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are for the future.
So do we believe our own lying eyes about the effects of teaching "big picture" versus facts, or do we believe the tech bros? Or do we just plunge ahead while we wait for the AI rapture?
Twain believed that the Paige Compositor would outclass the Mergenthaler Linotype machine once it was perfected--that is, to quote Warzel, "You think the models are good now? Ha! Just wait."
TL;DR: At this point, AI in the classroom is the Paige Compositor, and we all know how that turned out for our birthday boy.