In 1984, Arnold Schwarzenegger uttered his iconic phrase, “I’ll be back” in The Terminator. The basic premise of the movie is that Arnold is an assassin sent back in time to eliminate someone who will one day destroy a hostile artificial intelligence (AI) system called “Skynet.”
What’s even scarier? Arnold’s villain returns from…2029. Six years from now.
Well, in real life, AI is “back.” ChatGPT has been out less than a year, and already there’s a dire warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This brief (and scary-as-hell) statement was released in May 2023, signed by over 400 top tech leaders, including executives from Google and Microsoft – plus those responsible for introducing AI to the world.
In November 2022, ChatGPT was released, the most publicly visible form of AI thus far. The name combines “chat,” as in its chatbot functionality, with the underlying language model, “Generative Pre-trained Transformer.” That doesn’t sound “Terminator”-esque at all…
We’ve already heard of college students using it to write papers. Hollywood is warning actors to further trademark their image so they aren’t AI-repurposed into movies they have nothing to do with (and don’t even get me started on ChatGPT-created books or movie scripts). Now comes the story of a lawyer who used it for legal research…and the bot cited prior cases that didn’t even exist. Of course, we’ve already seen and heard “deep fake” videos and audio, which serve up worrying scenarios for critical political and global interactions.
AI is also shaping the future of employment. A recent study by the Labor Department showed college enrollment dropped eight percent between 2019 and 2022, the steepest decline on record. More kids are seeing that taking on a load of college debt for a degree in a field that may soon be ruled by AI isn’t such a great bet. Instead, they’re opting for trade schools and careers that are more “hands-on,” versus what’s sure to be a heavy impact on “the laptop class.”
Regulation of AI would seem to be a solution, but it’s not so simple. The technology is evolving on a daily basis, so defining what AI “is,” plus its various risks and benefits, is like trying to nail Jell-O to the wall. Some have suggested organizing an international community composed of academics, tech experts, and international agencies, along the lines of the groups that deal with nuclear energy or climate change.
Another thought is to regulate AI by holding the humans behind it accountable: prosecuting the person who made the fake video, say, or making the company behind a given technology liable for any damages.
As we wrestle with controlling artificial intelligence, it’s up to each of us to do our best to discern what’s real and what’s “artificial” in our overloaded information ecosystem. There are ways to tell a deepfake video from the real thing; the same goes for fake audio. If you read a story that seems “off,” consult multiple reputable news sources to confirm (or deny) it.
Because we don’t have a 21st century cyborg assassin to do it for us.
Have you used ChatGPT? What’s been your experience? What are your thoughts on AI; are you concerned how it will impact schools or jobs? Sound off in our Community Soapbox (which is most definitely NOT run by AI…)
Cindy Grogan is a writer, lover of history and "Star Trek" (TOS), and hardcore politics junkie. There was that one time she campaigned for Gerald Ford (yikes), but ever since, she's been devoted to Democratic and progressive policies.