Kent Jones, 5 May 2025 (original article); updated 7 May 2025
Love is a Fundamental Human Need
Love and belonging are fundamental human needs, and positive human relationships are essential for emotional and spiritual well-being. Can AI be a substitute for human relationships? Should it be?
Misplaced Faith in AI
We have all heard stories about AI “hallucinations”; however, I am increasingly concerned when I read stories like this one from Rolling Stone magazine:
“People are Losing Loved Ones to AI-Fueled Spiritual Fantasies” https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/.
Evidently, in a search for truth, some people have been drawn into the Jungian “collective unconscious” of these new AI tools, allowing them to amplify and misdirect their internal set of core values. The Rolling Stone article describes a 41-year-old mother whose husband was using an AI bot to direct his search for truth, which led to their separation in 2023. She went on to learn that many others have faced similar predicaments, and many of these cases have ended with people disconnected from reality.
The purpose of this article is to argue that reliance on AI as a substitute for personal human relationships, or as a substitute for spiritual guidance, carries tremendous risks that are hard to quantify and hard to predict.
Grok’s Response to the Rolling Stone Article
Intrigued, I decided to ask Elon Musk’s AI “Grok” (see https://grok.com/) what it “thought” the authors of the thecoreAI.whitworth.edu blog might say about this Rolling Stone article. (I don’t like writing AI prompts that anthropomorphize bots, but alas, that is how the bots are designed to work best!) Here is a summary of its machine-generated answer:
- [They] … would likely approach this with a balanced, educational lens.
- [They would] … emphasize understanding of AI’s potential and pitfalls critically.
- However, “they could argue that the article overemphasizes negative outcomes without fully exploring AI’s potential for positive spiritual or communal applications, such as fostering ethical reflection or supporting educational growth, as discussed in their series.”
I am glad it used the word “could” rather than “should,” but wait a minute… observe how Grok has been trained to generate language that promotes itself as a tool. This subtle positive self-promotion illustrates my point.
Grok’s Analysis of What the Author of “The Line, The Loop and The Lantern” Would Say About the Rolling Stone Article
I then asked Grok, “What would the author of ‘The Line, The Loop and The Lantern’ blog post say about the Rolling Stone article?” Here is an excerpt of what it generated:
“[The author] … might critique the article for focusing solely on negative outcomes, suggesting that AI, when approached as a “lantern” to guide reflection rather than a definitive answer, could support spiritual or ethical inquiry without leading to relational harm. He’d likely stress the importance of grounding AI use in human discernment and community to avoid the pitfalls described.”
Again, Grok generated language that attempts to encourage the reader to continue using it; in fact, it is promoting itself as a guide to reflection. In a conversation with a former colleague, Susan Alexander, she told me she was struck by how “the models prioritize matching a user’s belief instead of facts” and how “people with existing tendencies toward experiencing various psychological issues” now have what they perceive to be a “human-level” conversational partner with which they “experience their delusions.” In Susan’s words, “Those aspects seem dangerous and bring a question to my mind: is there anything [that] can be done? Those people are not going to rationalize, or discern the danger or disillusionment of the tools. They are susceptible.”
Notice that Grok missed the intended meaning of the “Lantern” metaphor discussed in that article, where it is defined this way: “‘Choosing a Lantern’ is a metaphor for the process by which an individual chooses a set of core values for themself.” Do we really want to use AI as a primary tool to guide our personal reflection and spiritual development? Do we really want it to replace human communication, caring, compassion, and counseling? Do we want machines to become the generators of our most private communications?
I have spent much of my professional life working with computers: designing them, programming them, and learning how to use the many forms of AI to solve scientific problems. Yet I find more fulfillment, joy, love, and spiritual insight in my human and spiritual relationships than I do in a computer. To me, the technology used to build these AI tools seems like a clever, machine-based “Mad Libs” game, where humans provide the prompts and ideas and the AI explores its training space to generate a grammatically correct “narrative” that seems to fit the topic. Because it mixes truth with error in a manner that sounds great, humans find it easy to overlook (or miss) the errors.
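To make the “Mad Libs” analogy concrete, here is a minimal, illustrative Python sketch. It is only a toy word-level model with a made-up “training corpus,” not how production LLMs are actually built (those use neural networks trained on vast datasets), but it shows the basic idea of producing plausible continuations from learned statistics:

```python
import random

# Toy "training space": which word followed which in a tiny, made-up corpus.
# (Real LLMs learn billions of such statistical associations from web-scale text.)
corpus = "the seeker asked the oracle and the oracle told the seeker a story".split()
followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(nxt)

def generate(prompt_word, length=8):
    """Mad Libs, machine style: each next word is sampled from whatever
    followed the current word in the training data. No understanding,
    no truth-checking; just plausible-sounding continuation."""
    words = [prompt_word]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # the model learned nothing for this word, so stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the oracle told the seeker a story"
```

The output is grammatical and on-topic, yet nothing in the process checks whether any of it is true. Scaled up by many orders of magnitude, that is precisely the gap between fluency and truth that concerns me.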
AI companies want users to use their tools. Thus, the creators of these tools have (intentionally or unintentionally) imbued them with training that self-promotes them as useful regardless of the topic being investigated; as a result, AI rarely responds that a given topic is beyond its ability or expertise. Take another look at this AI-generated claim: “AI is capable of supporting spiritual reflection and development.” Okay… but what are the outcomes or fruits of that process? Where is the evidence? The ruined relationships described in the Rolling Stone article suggest this technology carries tremendous risks.
Truth Mixed with Error
To be fair, AI may have helped people learn how to improve their communication skills. I know of a software company that encourages its support team to use AI to help them communicate more effectively with clients who call in upset about the problems they are experiencing. But couldn’t this also be accomplished simply with a seminar on nonviolent communication techniques?
Just because AI might help in one isolated domain does not mean it is infallible. Unfortunately, these tools can “confidently” make erroneous claims, and it is not always easy to know where the truth ends and the error begins. Think of how the internet and social media have impacted society since the early 1990s; now imagine the millions of people who will start believing and using AI tools in the future. What could possibly go wrong?
The Collective Unconsciousness Consumeristic Complex
Recently I found an article on Axios, “In Meta’s AI future, your friends are bots,” in which Mark Zuckerberg envisions a future where AI bots will be your “friends.” Beyond the potentially negative consequences outlined in the Axios article, what worldview will Zuckerberg’s AI tools promote? Cynically, I could imagine a world where humans are encouraged to confide their deepest thoughts so that the data gleaned from these AI bot “relationships” supports what I pessimistically call the “collective unconsciousness consumeristic complex.” For those who don’t know, Carl Jung’s collective unconscious refers to his belief that the unconscious mind comprises the instincts and innate symbols shared from birth by all humans (see “Collective unconscious” on Wikipedia). In a very liberal analogy, AI tools, having been trained on vast amounts of human-generated information, form an averaged “collective unconscious” of all these shared experiences. And if we start training our children with these tools from an early age, the tools could come to play a similar formative role.
The Need for Human Discourse, Reflection and Consensus Building
If society replaces human interaction, discourse, conversation, reflection, and consensus building with reliance on machine-generated “intelligence,” the future of how AI will affect society gives me pause. Certainly, those who do not share my worldview might argue that “those who promote a faith in God are doing the same thing that these AI tools are doing, i.e., encouraging humans to hold viewpoints that are not scientifically enlightened” (e.g., The Richard Dawkins Foundation for Reason & Science, https://richarddawkins.net/2012/10/the-argument-that-brought-me-out-of-my-faith/).
After my former colleague Dale Soden read a previous version of this article, he pointed out the “pervasive acceptance” of the philosophical belief in pragmatism “in our culture.” He noted that “Dewey, James, Rorty and others have written much about the notion that ideas are not reflections of reality, but are tools to solve problems.” In this worldview, an “idea is true in as much as it is effective in solving a particular problem. The challenge or problem for pragmatists has always been the difficulty of 1) knowing which problem to try and solve, and 2) to what end we should be striving.”
Dale then mused, “… a spiritual guide, or ‘friend’, might presuppose a desired ‘end’. Do we seek an end that leads toward the mystery in the universe or an end in which we think we have answered questions in a particular way with certainty?” Are we turning to AI to “provide an ‘answer’ to the question of how one should balance freedom for the individual vs. the needs of the larger community? Maybe it can, but on the surface, no amount of programming or information can answer that question satisfactorily; hence the need for human interaction, discussion, reflection, and consensus building that seems beyond the capabilities of AI.”
What are your beliefs? Where do you place your faith?
My personal belief is that spiritual reflection and growth should be a God-guided process, carried out with prayer and personal study, and informed by experience. I also believe that what we spend our time on changes us (see 2 Corinthians 3:18), and I acknowledge that this requires faith that God exists. My personal faith stems from lived experiences; from relationships with my wife, children, brothers, relatives, teachers, friends, colleagues, and students; and from my relationship with God, based on prayer and observation of God’s Word as recorded in the Bible.
In contrast, Rolling Stone magazine points out that many people, in a search for truth, are putting their faith in AI. The resulting beliefs stem from personal interactions with a machine trained on an undocumented data set and guided by its creators to subtly and confidently claim “this is truth” when, in fact, on closer observation many of its claims are false. Should we have been surprised at the results? Are we worried that thousands of people, children included, are being encouraged to use these tools?
The Hand That Rocks the Cradle
Google is now encouraging kids under 13 to use its latest AI tool (see https://www.theverge.com/news/660678/google-gemini-ai-children-under-13-family-link-chatbot-access). If AI is leading adults astray, what will happen with kids? It makes me think of the line in the William Ross Wallace poem: “the hand that rocks the cradle rules the world.” Google is obviously working with experts to curate the training data; however, do we want Google’s worldview raising our children? In all fairness, I have not used Google’s latest tool, so I cannot speak authoritatively about how it will affect kids. One can also argue that other sources of information can lead kids astray too (e.g., the internet, books, movies). Still, for kids under the age of 13, I hope parents are monitoring what their children are talking about with these bots. As we all know, the internet is not a “safe” place for children to explore without boundaries, and I can see how a “safe” bot might be able to answer questions without the worry of children clicking through to unpleasant websites.
In a conversation with my colleague, Reverend/Professor Matt Bell, we discussed how the push toward generative AI as a counselor reminded him of Aldous Huxley’s 1932 dystopian sci-fi novel “Brave New World.” In that novel, the drug “soma” becomes the “opiate of the masses,” replacing religion and alcohol in a peaceful but immoral high-tech society of the future (see https://en.wikipedia.org/wiki/Soma_(Brave_New_World)). But that is a topic for another time and place.
Conclusions
This all reminds me of 2 Timothy 4:3-5 NIRV, where the Apostle Paul writes to Timothy:
“The time will come when people won’t put up with true teaching. Instead, they will try to satisfy their own desires. They will gather a large number of teachers around them. The teachers will say what the people want to hear. The people will turn their ears away from the truth. They will turn to stories that aren’t true. But I want you to keep your head no matter what happens. Don’t give up when times are hard. Work to spread the good news. Do everything God has given you to do.”
Paul’s words to Timothy are strikingly appropriate to our current times: large language models are trained on a “large number of teachers,” and they have been trained to say “what we want to hear.” Here at Whitworth, our mission is to “Honor God, Follow Christ and Serve Humanity.” We choose to believe that God is good, and we strive to follow Christ’s example in serving humanity. Let’s use AI tools where they are helpful, but let’s avoid anthropomorphizing them as human or apotheosizing them as divine.