Suppose you do all your assignments using AI. Suppose you then have to pass a test and survive another year of study. Sounds easy, doesn’t it?
It can’t be.
The Academic End of The World, aka students using AI, has turned out to be a lot more complex than predicted. My view is that the original paranoia was simplistic and purely reactive. The predictions of doom have proven wrong, for a lot of good reasons.
The way students see AI is telling. The risk of getting caught is all too obvious. A system that writes your work means you’re committed to whatever it produces, and so are your grades. You can easily make a mistake in what you tell the AI to write. You’re effectively pretending to be educated, and even the laziest students know that can’t work all the time.
The students, however, came up with much more functional perspectives. Tedious work was one of the issues; AI can do it easily and quickly. There’s supposed to be a difference between tedium and education, for those who don’t know.
Time management is another issue. How much time do students really have? What’s the best way to use that time? AI is fast. Plodding mindlessly for hours through information regurgitated by rote isn’t education.
One of the foundations of basic education is “what you’re supposed to know”. Whether you understand it or not, all you have to do is recite it and someone assumes you know your stuff. Comprehension may or may not get a word in sideways. That’s why regurgitation doesn’t work.
You also have to check your AI work. Is it right? Does it say what you want or need to say? You could easily end up with “George Washington was a great NFL star” and never notice. AI isn’t a gimme for students. It’s a basic tool, no more, and possibly a lot less.
Educators, meanwhile, have made the perfectly reasonable point that students do need to be able to work with AI. After all, they’ll be spending the rest of their lives with it. You can see the contradictions in any system that refuses to let students use AI for practical purposes purely “on principle”.
There are also some serious weak points in AI. AI derives its information from searches and training data. If you search for a phrase from any text in quotes, and that text exists, you’ll find it. You’ll also find closely related text. I’ve been doing that for decades; it’s a useful editorial technique.
It’s also a great way of finding plagiarists, human or otherwise. Any kind of direct copying is findable, even when camouflaged with extra text. It’s nowhere near the issue it’s made out to be. In specialized fields, it’s even easier. You can cite Whatsisname et al., sure, but any quotes will show up sooner or later. The process takes about as long as a cut, paste, and search: a matter of seconds.
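For the technically inclined, here’s a minimal sketch of that quoted-phrase check in Python. It assumes a small local collection of texts rather than a real search engine, and the file names and sample phrase are hypothetical stand-ins.

```python
# A toy version of the editorial technique described above:
# take a suspicious phrase and check whether any known source
# contains it verbatim (ignoring case and extra whitespace).

sources = {
    "lecture_notes.txt": "George Washington was the first president of the United States.",
    "student_essay.txt": "As we all know, George Washington was the first president of the United States.",
}

def find_exact_matches(phrase: str, corpus: dict) -> list:
    """Return the names of documents containing the phrase verbatim."""
    needle = " ".join(phrase.lower().split())  # normalise case and spacing
    return [
        name for name, text in corpus.items()
        if needle in " ".join(text.lower().split())
    ]

print(find_exact_matches("George Washington was the first president", sources))
# -> ['lecture_notes.txt', 'student_essay.txt']
```

A real search engine does the same thing at web scale, which is why direct copying surfaces in seconds.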
One thing is for sure: the sheer brutality of any kind of professional or academic environment will make confetti out of non-self-generated work. You can’t get away with being quite that dumb anywhere but politics or online advertising.
In a truly competitive environment, forget it. AI can’t compete at that level, simply because of the way it’s trained.
I have a theory about any sort of student writing. Throw away the rulebook. Get them to write about something they actually like and genuinely care about.
In 300 words, you get an instant overview of their usage, literacy, and expression. I hope this helps somebody manage AI writing, because AI will always be the least passionate form of writing.
AI is useful. That doesn’t mean you don’t need to stay alert.
_________________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.