IF YOU TEACH A MAN TO FISH STICK
Almost every day, I encounter a new article warning about how Artificial Intelligence is going to change everything, or how it already has. Maybe it offers wondrous new advantages, maybe it’s the end of life as we know it. Maybe we’ve let an all-powerful genie out of the bottle and we are going to have to figure out fast how to harness the magic of it. Maybe the technological apocalypse has already begun, and it’s only a matter of time before the robots exterminate humanity, once we’re no longer useful to them.
I don’t know. I’m fascinated, confused, excited, and sometimes a little scared.
There are some troubling aspects to the technology, for sure, and I’m pretty sure we haven’t yet fully considered the wide- and far-reaching ramifications of it all. But I definitely appreciate a lot of the everyday applications of A.I. that I already use in my daily life — search engines, GPS navigation, spam filters, bank fraud alerts.
I like that smart computers can identify and target individual cancer cells faster and better than people can. I expect that artificial intelligence is amazingly well-equipped for crunching data on intricate systems and global problems, and we should be glad for this ability to process mind-numbing amounts of information with speed and accuracy. But I don’t much like the idea of cold analysis supplanting all feeling and nuance (I don’t know that this is what’s happening, but it worries me). And I am freaked out by the thought of computers replacing artists and actors and writers. I’m even more disturbed by the fact that some consumers couldn’t care less — in fact, plenty of people seem to prefer CGI to the real thing.
This week I read about how students and teachers are learning to navigate this new territory. A.I. is a concern for educators, because widely available applications can compose, in seconds, anything from a book report to a technical article to a sermon. They may not produce the most brilliant pieces, but they’re getting better and better, and their output is getting harder to distinguish from stuff written by real humans. So teachers are now using different programs to scan students’ work and “flag” that which appears to have been generated artificially. The technology is imperfect on both sides, so there are lots of “false positives” — work actually written by a student that reads like something computer-generated. Flawed A.I. is being used to counter flawed A.I. Thus, the war of the robots begins.
This stuff is paradigm-shifting, and it’s going to affect every aspect of our lives. I’m not inclined to think it means the end of the world any more than the steam engine or antibiotics spelled the end of the world. The world as we know it — as we have known it — is always ending. And a new world is always becoming. We have to become along with it.
For the past several months, friends on social media have been sharing things produced by ChatGPT and similar programs. Someone I know from college asked the app to rewrite Allen Ginsberg’s “Howl” with an upbeat, optimistic, and cheerful tone. What came out sounds a lot like something I might share on a Sunday morning. (In fact, I think it will be our sacred reading this week.)
My friend Liz gave me an article about a chatbot-created 40-minute Lutheran church service in Germany. It included a sermon, prayers, psalms, and hymns. Reviews were mixed, I was gratified to read, lest my work become entirely obsolete. But in the end, the congregation listened attentively, and most were genuinely impressed. It was good enough, I guess.
I’ve been really curious about it, so I downloaded a free trial version of one of these applications and, just to see what would come out, asked it to “write a sermon about fish sticks.” I had one in about two seconds. On the whole, it wasn’t anything I’d call especially insightful. But, believe me, I’ve heard much worse talks. The main point was a caution against prioritizing convenience at the expense of deeper nourishment: “Fish sticks may be quick and easy, but they are not as nutritious nor as satisfying as the real thing.” That was a little ironic, I thought, considering the source.
One thing that chatbots don’t seem particularly good at (yet) is humor. They may be smart, but they’re not clever (yet). I don’t remember what title the computer came up with for my fish stick talk — something like, “Fish Sticks are No Substitute for the Love of Christ.” Bleh. My husband, on the other hand, is both smart and clever. Travis referenced the proverb, “Give a man a fish, and you feed him for a day; teach him to fish, and he’ll feed himself for a lifetime,” suggesting Teach a Man to Fish Stick as a title. If my computer-generated sermon had included that allusion, it might have converted me.
Give people an easy platitude, a simple answer, a feel-good convenience, a bumper-sticker philosophy and you might satisfy them for a short while. Invite them to participate, to engage actively, to take responsibility for creating their own spiritual nourishment, though, and they’ll be on a path to deeper ongoing fulfillment.
Give them a consumable, and they’ll know a bit of diversion. Encourage them to create, on the other hand, and they’ll experience relationship, belonging, self-expression, and conscious personhood in a brilliant and empowered new way.
This inquiry ultimately led me to wonder what “Artificial” and “Intelligent” even mean.
As I understand it, what is usually intended by the term “artificial” in this context is that these bots are not truly sentient. It seems to me that there’s a difference between intelligence and sentience. The intelligence isn’t the thing that is artificial or false. In their own way, connected computers are now able to access, process, and apply real information, language, and images as well as or better than the human intellect. But still, that’s not sentience — computers are not perceiving or feeling or discerning with real awareness. They calculate, but they don’t make sense of the output.
Maybe…?
My friend Denise turned me on to a podcast episode from This American Life that helped me to grasp some of what’s so provocative-disturbing-amazing about everything going on with so-called A.I. in the last year. Because beyond the intellectual property ethics, and the replaceability of humans, and whether or not a computer-generated fish stick is as satisfying as Chilean sea bass, there are reasons now to ask whether or not computers are demonstrating what we might call actual “sentience.”
ChatGPT began as a predictive text program, just like when we type on our phones and are offered convenient suggestions for what next word we might want to use. If I begin to type “I’m on my —” my phone will suggest “home” or maybe “bike” as the next word. If I continue typing, “I’m on my way to —” my phone will change the suggestions to “the store” or “church.” When giant A.I. applications like ChatGPT first started, they were simply massive amalgamations of every piece of written text that the programmers could enter — every digitized book and newspaper, every online review, every blog. Once a gazillion pieces of information were compiled and accessible, then someone could ask the bot to “Write a song about my cat Flo in the style of Joni Mitchell,” and it would do so. It would do it one word at a time, choosing the next most likely word, based on everything written before. It looked smart but it was really just following predictive-text guidelines; there wasn’t real thought to it, and the results were pretty clunky.
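(For the technically curious: here’s a tiny sketch, in Python, of that predictive-text idea in its simplest form. It’s only a toy illustration of “pick the next most likely word,” not how ChatGPT actually works under the hood, and the little sample sentence is just something I made up.)

```python
from collections import Counter, defaultdict

# Toy predictive-text sketch: count which word tends to follow which in a
# tiny sample of text, then suggest the most common follower. This is only
# an illustration of "choose the next most likely word," nothing more.
sample = "i'm on my way home i'm on my way to the store i'm on my bike"

followers = defaultdict(Counter)
words = sample.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def suggest(word):
    """Return the word most often seen after `word`, if any."""
    if word in followers:
        return followers[word].most_common(1)[0][0]
    return None

print(suggest("my"))   # -> "way" (seen twice, vs. "bike" once)
print(suggest("way"))  # -> "home" (ties go to the word seen first)
```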
Since the public rollout of ChatGPT about a year ago, however, as the program has gotten more sophisticated and more and more people have added data to it, computers seem to be synthesizing the information in a new way. It has begun to look more like actual understanding. Like conceptualization. Like reason and interpretation. Like making sense.
So now I wonder… Is this intelligence less “artificial,” now, more real? Are the computers actually becoming self-aware and sentient? Or are they simply revealing that our own intelligence isn’t as uniquely human or special as we thought it was? Are our own thoughts and feelings really just something like a super-fast, unconscious predictive-text program? It seems as if I’m thinking my thoughts spontaneously. But it could be that my mind is just choosing the next most likely word for me, based on everything I’ve heard and thought and read and seen before…
But then, how would that really be different from my idea of a Divine Mind, a Universal Intelligence? An infinitely-connected system with access to all human knowledge, making meaning out of the sum total of all understanding, predicting what comes next based on what has been said or written or drawn before. That sounds like Cosmic Consciousness to me. And I don’t know if that’s really cool, or if it’s terrifying.
I lean towards “cool.” It does freak me out a little, but I stop short of the idea that we’re just mindless automatons regurgitating ideas from the hive mind.
And what saves me from that dreary conclusion is, strangely, the fish stick sermon.
Actually, more completely, it’s the computer-generated fish stick sermon, plus Travis’s clever allusion, plus my own rumination on the matter. What saves me from dreary fatalism is the fact that we are all STILL CONTRIBUTING to this Cosmic Consciousness, this Collective Intelligence. Maybe the computers are tapping into it with self-awareness, maybe not. Either way, it includes not just what has come before but also every way that we’re participating now. What is possible, what is predictable, includes both what has come before and also everything new.
Being fed a fish stick is only mildly, temporarily satisfying because it’s mere consumption. Real nourishment — spiritual, intellectual, relational — is in our ongoing creative participation.
God, this is long. Sorry. I’ll try to tighten things up before speaking on Sunday. We’ve got special musical guests this week — Jared Putnam and Raychael Stine! I can’t wait to see you, 10:00 am, at Maple Street Dance Space. XO, Drew
©2023 Drew Groves