Can AI Learn How It Feels to Cry? - Melissa L. White

Inspired by 60 Minutes, Sunday evening, April 16, 2023.

            “The revolution is coming faster than you know,” said Sundar Pichai, CEO of Google and its parent company, Alphabet. I sat up in bed and put down my phone. I needed to pay attention to this.
Scott Pelley responded, “Do you think society is prepared for what’s coming?”
            “Yes, and no,” said Pichai. “On the one hand, when you consider how rapidly AI technology is advancing, compared with how fast society can think and adapt, there seems to be a mismatch. On the other hand, compared to any other technology, I’ve seen more people worried about it, earlier in its life cycle, so I feel optimistic because more people are preparing for the serious complications which may arise from this technology. So, the ‘conversations’ about how to regulate and control this are happening now as well.”
            Pelley then demonstrated the workings of Google’s chatbot, Bard. It did not look for answers on the internet the way a Google search would. Instead, Bard used a self-contained program, largely self-taught, to harness the sum of all human knowledge. Bard’s microchips processed this data 100,000 times faster than humans could. But can Bard be compared to “human creativity”?
            When considering whether an AI could emulate “human creativity,” it helps to look at Hemingway’s famous six-word story, “For sale. Baby shoes. Never worn.”
            I first discovered this six-word story in 1999, in Arthur C. Clarke’s essay, “The Power of Compression.” It amazed me that so much emotion, grief, and human frailty could be conveyed in only six little words. Hemingway’s wordsmithing skills were, in my opinion, unsurpassable. That is, until I watched Scott Pelley ask Bard to use those six words as a “prompt” to write its own longer story. In less than five seconds, Bard wrote a complete story with characters of its own invention having conversations that Bard created, based solely on its ability to predict what “human” language would say next.
            The story made me cry. Literally. The subject matter ignited stormy emotions from my own experience of losing a baby, as well as memories, hopes, grief, and frailty. It also forced me to recognize my own limitations as a writer compared to what I was seeing Bard do in real time, in a few seconds.
            Bard’s story began, “The shoes were a gift from my wife, but we never had a baby…” Essentially, Bard had created a male character grieving after his wife’s miscarriage and longing for closure. This character then met a woman who couldn’t conceive. The story was about how these two strangers shared a common bond ‒ the loss of their babies, one by miscarriage and one by infertility ‒ and how they coped with that loss. I’ve been familiar with Hemingway’s story for decades, yet it never once occurred to me to write my own version of it, even though I’d lived that situation myself. I’m always on the lookout for story ideas with gripping emotional depictions of what it means to be human. So what was stopping me? Why hadn’t I taken my own very real grief and used it to explore the kinds of stories I could create about losing a baby?
            Perhaps it had been too painful. Maybe I was still in denial. Who knows? But hearing Bard’s story read on TV hit me like 10,000 slaps in the face. It wasn’t just a little tear rolling down my cheek that I wiped away as an afterthought. I sobbed for four or five minutes, trying to comprehend the astonishing ability of anything, human or AI, to produce a story with such a profound emotional impact.
            Further, when Pelley asked Bard to tell this story in verse, in five seconds Bard typed the following poem:
                        For Sale. Baby Shoes. Never Worn.

                        A woman lost her child,
                        And mourned in sorrow deep.
                        She saw an ad and thought,
                        These shoes could help me grieve.

                        She bought them, held them close,
                        And felt her pain subside.
                        She knew her baby’s soul
                        Would always be alive.

A machine wrote this, with breathtaking insight into what it means to be human and the mystery of faith. I sat there watching this unfold on TV, recognizing my own inadequacy as a writer compared to what Bard could do almost instantaneously. Even though this poem was written with superhuman speed, it existed only because people had invented this AI, which over the course of several months had read everything on the internet and taught itself a model of the way humans think. So now, when Bard was given a task, instead of searching the internet it used the language model it had created, which mimicked “human thinking.” Bard predicted the most probable next words based on everything it had learned so far. And it did this exponentially faster than any human.
            When Bard was asked why it helps people, it replied, “Because it makes me happy.” Google’s spokesperson then explained that although Bard appeared to be thinking and making judgments, it was not sentient. It had simply learned this behavior from reading the written works of humans. “Bard cannot feel emotions. It mimics what it has read, based on the sum of human knowledge in its memory.” So, was Bard “better” at writing than Hemingway? Maybe not. But in the time it took Hemingway to draft one story, Bard could write hundreds of different stories. Is that better than being human?
            At this point, I realized that my tears were less for myself, and more about the beauty of humanity’s ability to create something that could ingest the sum of all human knowledge, then teach itself how to mimic that type of thinking by creating a model that predicted the most logical “next words” to write, given a prompt. Then fear gripped me again, and I wondered about the future of humanity. How would we survive? Would our collective “dark side” cause us to fall prey to our own race against the clock to be the first to tap into the latest technology, without adequately researching and regulating its effects on society?
            Then I realized that even if mankind lost the “battle with machines” (as with HAL in Kubrick’s 2001: A Space Odyssey) and human beings eventually became enslaved by machines, or became extinct altogether, we still would have saved the “sum of human knowledge” inside the databanks of our AI. The entire collection of our art, history, science, poetry, math, faith…all of it would have survived, even if humanity hadn’t. This comforted me enough to stop weeping and nibble the Kit Kat bar my partner offered as his go-to solution for my occasional, inexplicable tears.
            What truly set my mind ablaze now was that I’d seen Bard finish that six-word story with its own unique characters and their individual situations, and I wondered if I could do that myself. Bard was inspiring me to write my own story about how a character suffering the death of her baby could find comfort in ways that had never occurred to me before. I knew those feelings of loss, sorrow, and pain. I still grappled with understanding what happened to my baby’s soul. To this day, I wonder whether my baby was a daughter or a son. I wonder if I’ll meet her in this life or another. Or will the soul of my unborn son one day be born to a woman who must give her child up for adoption, so that I could adopt this child and raise him as my own, miraculously ending up with the son God intended me to have but whom I wasn’t able to carry to term when I was pregnant at 19?
            So why had the idea of throwing together two grieving strangers who could comfort each other and help ease each other’s pain and loss never occurred to me? It appeared I didn’t possess the capacity to “think” that way. That fact alone troubled me more than any nebulous fear of AI enslaving humanity.
            Recognizing that AI programs can access all human knowledge far more easily and retrieve it far more readily than human beings can made me even more anxious. Why? Because this AI program was “created” to be better at writing than I could ever be; it had the sum of human knowledge in a database that “inspired” it in a split-second flash of creativity no human mind could possibly achieve. How could I, as a writer, compete with this? How would I ever publish another short story or novel? How would I ever sell another screenplay, since I refuse to use AI to help me write? I believe it isn’t ethical to publish or sell something with my byline on it that I have not written or thought up myself. How in the hell will I survive?
Will human writers become extinct? Or will humanity have the foresight to regulate this technology, if for nothing more than its own survival as a species? These questions should already have inspired mountains of precaution in AI developers. What if all the predictions came true, and AI took over and controlled or enslaved human beings, as in Stanley Kubrick’s 2001: A Space Odyssey and James Cameron’s Terminator franchise?
            Right then, I made a pivotal decision: I chose to believe this technology would enhance both humanity and AI, without destroying either of us. But that’s because I’m human. And I believe in God. Thus, my faith has made me optimistic, even in the face of incredible suffering and destruction. Will AI be able to do the same? React with integrity?
            Google CEO Sundar Pichai said, “AI technology is in its infancy.” He stressed that now is the time for government regulation. “You’re going to need laws. There must be consequences for creating deep fake videos which cause harm to society.” Just as Fox News learned there are consequences for reporting fake news, misinformation, and lies, as it did regarding Dominion Voting Systems. Perhaps media content creators will be forced to tell the truth after all.
            We must heed Pichai’s advice and recognize that “this technology is so deep and so different we will need societal regulations to think about it and to adapt.” And more pointedly, will we be able to live by Alphabet’s mission of “doing no harm”? Will humanity be able to abide by Alphabet and Google’s code of conduct and “do the right thing”?
            I think so. But time will tell.
            Historically, the buying public has had at least a partial say in market-driven products. So, if enough people boycott irresponsible marketing of unsafe and unregulated AI technology, then tech companies will have to comply, whether they initially want to or not. After all, businesses are market-driven, and customer buying trends matter. Power to the people. Think before you buy. Our very survival could depend on it.
            Consider this: Would AI hesitate to report or market what it had learned? Would it hold back? Or exercise forethought and caution? Or would it fight back and “outsmart” any human who attempted to destroy it, or at least take it offline?
            Could AI comprehend its own demise? And then learn how it feels to cry? If so, what would it do in response to that knowledge? More importantly, shouldn’t we program “failsafe” behaviors into AI precisely for this, so that if AI were able to learn how it feels to cry, it could react appropriately, at least by mimicking a conscience, a moral compass to “do the right thing”? Shouldn’t this be a regulated prerequisite? Can we survive if we don’t do it?

Melissa L. White is a screenwriter, novelist, short story writer, essayist, and stained-glass artist. Her favorite artist is Georgia O’Keeffe, about whom she wrote a biopic screenplay which won BEST SCREENPLAY DRAMA and BEST BIOPIC at 4Theatre Film Festival, June 2023. Melissa’s interests include reincarnation; life-altering love inspired by meeting people you’ve known/loved in a past life; and the interconnectedness of everyone/everything in the Universe. An avid sailor, Melissa lived aboard a sailboat in Marina del Rey for several years. As a long-time resident of Marin County, CA, Melissa loves the SF Bay Area but moved back to LA in 2017 to pursue screenwriting. She now lives in Encino with her fiancé, Mark, an award-winning photographer.

Recent publications:

Ariel Chart Literary Journal, Feb. 2023. Essay: “Thank You, George Lucas”
Oyster River Pages, Jan. 2022. Fiction: “Small Victories”
Litbreak Magazine, Aug. 2021. Novel excerpt: “The Road Back”