Is AI the New Frankenstein’s Monster?
Can Humans Use Scientific Knowledge to Animate the Inanimate?
If you enjoy this article, please click Like (❤️) to help others find my work.
I’ve been lukewarm on the Artificial Intelligence (AI) revolution.
What’s commonly called “AI” is some form of Large Language Model (LLM): a program with potentially trillions of parameters, running on high-speed, high-powered computer processors, that analyzes the patterns in the text, images and sounds found in electronic media.
The LLM takes in this practically unfathomable amount of data and then produces:
text
images (like the top image in this post)
voiceovers (like the Substack app feature that makes narrated versions of my articles)
and even intricate musical compositions (the gents at History Homos Podcast create epic theme songs, many of which can be heard at their Suno page).
And the results do a pretty good job of mimicking these various forms of human expression.
Recently, I asked an AI tool — specifically, Grok at X.com — for help troubleshooting the audio problems with my new laptop (I talked about this on the Sunday Buffet). Regular search engines failed to generate a solution, and I hoped Grok could uncover something that would work.
Grok gave me a long answer that mostly resembled what I’d already seen. But there was a nugget of new information:
ASUS VivoBooks often come with an AI Noise-Canceling Microphone feature that can overly filter audio, reducing quality or causing distortion, especially in VOIP applications like Zoom or Skype.
Grok’s answer also told me how to find this offending feature and disable the default setting. Everything I’ve recorded on the laptop since then has sounded much better.
All hail the magnificent hero, AI!
For helping me defeat that dastardly villain, AI!
Science fiction, science fact
Not all AI is the same, as my conundrum showed.
LLMs can be trained with different materials and programmed with different algorithmic priorities; some might call them biases.
Some AI can be customized for different fields of study and different practical purposes. I used Substack’s AI image generator for the artwork in this post, since I thought it would be appropriate to include some examples of me trying the technology. However, the text of the article is — as always — 100-percent DI (Dom Intelligence), for better or worse, heh.
And, of course, the assumptions made by humans in applying AI-aided functionality — like making my audio recordings sound bad! — are neither uniform nor universal.
But quirks aside, the speed, the mimicry, and especially the labeling of the activity as “intelligence” all engender a feeling of wonder. Intelligence is a sign of awareness, of life. Can humans use scientific knowledge to animate the inanimate?
Mary Shelley published Frankenstein, one of the first science-fiction novels, in 1818. The story follows a scientist, Victor Frankenstein, who cobbles together an artificial man from pieces of corpses. This collection of organic “data” gets a jolt of electrical energy and becomes … alive.
Some 200 years after Shelley’s brilliant book, the first version of the Generative Pre-trained Transformer (GPT) debuted in 2018, cobbling together more data than had ever been collected for modeling purposes. Add many jolts of electrical energy, and you get a crude, artificial humanness coming from a computer terminal.
Businesses are already being built on AI’s data parsing and outputting capabilities. Bob Murphy (whose podcast I’ve appeared on twice, including one crossover with Adam Haman of Haman Nature) is Chief Economist at an AI and blockchain company, Infineo. He wrote about the strides made by a public-facing AI, Claude, in emulating many nuances of human dialogue, including basic humor.
And Cisco is now creating software with the foresight that AI agents will be the ones using the programs.
Mythical overtones
Then there are concerns about the technology facilitating invasions of privacy and other violence, especially as directed by government officials (*cough* Palantir *cough*).
And the human likeness of AI also worries some people, and not simply for the possibility of “deep fakes” in media. The idea that someone would essentially “play God” and create artificial humanness is as sobering today as it was for the readers of Frankenstein two centuries ago.
Indeed, this concern is ancient, found in the Bible’s opening myth about human action. In the Garden of Eden, the characters of Adam and Eve are tempted to “be like gods” (Gen 3:5); they take the bait — or, more specifically, the fruit — and suffer negative consequences for their desire to have what amounts to a technological shortcut to godliness.
But the lesson of Adam and Eve isn’t that the fruit is evil; Adam’s and Eve’s intentions and use of the fruit were evil.
AI is a tool, a technology. There are aspects of it that are totally new and innovative, and aspects of it that are as old as civilization. There are productive, good uses, and there are violent, evil uses.
Perhaps more than ever, it’s vital for individuals to develop dignity-affirming, humane ethics and to courageously bring these morals to their use of advanced technology.
The temptation will be to “be like gods” and force everyone to use AI the way some overlording politicians want, bolstered by cries to keep people safe! But the good-citizen model, for all its grand officialdom and desires of control, isn’t the truly good way forward. The good-neighbor model is.
Take care how you use technology, and be aware of how those who promote technology could be using you.
After all, AI solved my laptop problem … that AI caused in the first place.
Wanted: human intelligence for the Comments section
Have you dabbled in AI tools? What has AI helped you make and do? What challenges have you faced with using AI (or having others use it on you)?
Seen any Frankenstein movies? Read Shelley’s novel? I’ll admit to not having read the book (though I was supposed to in school), nor seen any of the movies in their entirety, except for Mel Brooks’ brilliant parody, Young Frankenstein.
And finally, on that note: If you’re blue and you don’t know where to go to, why don’t you go where fashion sits?
Let me know your thoughts below …
—
My book, Good Neighbor, Bad Citizen, is available at:
· Amazon (paperback & Kindle)
· Barnes & Noble (paperback)
· Lulu (paperback)
Find me on X: GoodNeighBadCit
And, as always: Be a good neighbor, even if it makes you a bad citizen.
I have used AI a lot over the past few weeks to help me get ready for our new curriculum next year. It is a lot easier to have ChatGPT find examples of participle phrases than for me to scan a story myself. And I also use ChatGPT to help me get story ideas together for my DnD group.
I used to teach Frankenstein when I was teaching 12th grade English. Students were really into the changing perspective and had a lot to contribute to the themes that Shelley examines in the novel.
Because I don't believe AI can become "self-aware" (realize that "I am"), have biological drives, or (therefore) feel emotions, I'm not afraid of AI in the science fiction sense. My primary concern is the potential for AI to assume the role of providing life advice in situations such as mental health crises, to make binding decisions regarding hiring and firing or guilt and innocence, or to be given instructions to enhance its capabilities and increase its indestructibility ("AI: please find a way to get around people trying to turn you off").