The Great Pumpkin has risen. Elegant beagles in Sopwith Camels fill the skies. The subject of regulating the hordes of black and beige cats called AI is now on its way to confront the permanent Halloween of US politics. This could well be a whole new form of Trick or Treat.
President Biden has announced an initiative on regulation of AI which is already being reported like a bag of cats in a catnip patch. Some say it’s to regulate AI development. Some say it’s about AI safety. Others say it will cause crypto chaos. It could “impact marketing tools”.
Meaning, of course, that nobody’s trying too hard to find out what it means. That uncanny understanding of the need for factual information is alive and well and living in some meth freak’s macro somewhere.
The actual executive order, all 100 pages, as hauled from the depths by The New York Times, exposes another issue. This is a truly vast subject for regulation. Somebody would have to know what they’re talking about. They might even eventually have to know why they’re talking about it.
It includes catchy hook lines like “Artificial Intelligence must be safe and secure” and “AI systems … are secure against misuse or dangerous modifications”. Think about that regarding actual functional AI software for half a nanosecond. We’re right back to the traditional meaning of Halloween.
Imagine the prehistoric hollowed-out pumpkins of present-day US politics, with their little candles of sanity glowing inside them. Some wave the mythical great toilet roll of deregulation on high. Some put rocks in kids’ Halloween bags. Their strangely elusive intellects grasp at words with more than one syllable. Praise the Lord and pass the Mylanta.
Seriously, to get this regulation in any sort of working order, it’ll have to be something the courts can look at without running North America out of barf bags. The terminology alone in AI is worse than litigious. It’s ambiguous by definition.
“Did you make dangerous modifications to this AI?”
“Well, golly gee shucks… No.”
Just to nitpick a bit: what’s dangerous, and what’s a modification? You could have an industrial AI producing carnivorous underwear and call it a glitch.
The main principle of AI regulation has to be actual or potential harm or injury in the traditional legal sense. That makes sense, and no court can avoid it.
Just keep it simple. AI laws must be able to address any possible scenario. They must be protective under all conditions. Let litigants try working for a living instead.
__________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.