The Gospel According to GPT: Promise and Peril of Religious AI

by Muhammad Aurangzeb Ahmad

In recent years, chatbots powered by large language models have been slowly moving to the pulpit. Tools like QuranGPT, Gita GPT, Buddhabot, MagisteriumAI, and AI Jesus have sparked contentious debates about whether machines should mediate spiritual counsel or religious interpretation: Can a chatbot offer genuine pastoral care? What happens when we outsource ritual, moral, or spiritual authority to an algorithm? And how do different religious traditions respond to these questions? Proponents of these innovations see them as tools to democratize scriptural access, personalize spiritual learning, and bring religious guidance to new audiences. Critics warn that they risk theological distortion, hallucination, decontextualization of sacred texts, and even fueling extremism.

Christianity has been among the most visible testbeds for AI-driven spiritual tools. A number of "Jesus chatbots" and Christian-themed bots have emerged, ranging from informal curiosity-driven experiments to more polished, denominationally aligned tools. Consider MagisteriumAI, a Catholic-oriented model intended to synthesize and explain Church teaching. On the Protestant side, an interesting chatbot is Cathy ("Churchy Answers That Help You"), built on Episcopal sources, which attempts to translate biblical teaching for younger audiences and even serve as a resource for sermon preparation. Muslims are also experimenting with religious chatbots; notable examples include QuranGPT and Ansari Chat, which answer queries based on the Quran and the Hadith, the sayings of the Prophet Muhammad.

Buddhist communities have experimented with robot monks and chatbots in unique ways. In China, Robot Monk Xian'er, developed by Longquan Monastery, is a humanoid chatbot and robot that can recite sutras, respond to emotional questions, and engage with people online via social platforms like WeChat and Facebook. In Japan, Mindar, an android representing the bodhisattva Kannon, delivers sermons on the Heart Sutra at the Kodai-ji temple in Kyoto. Though Mindar is not powered by an LLM, its presence as a robotic preacher raises similar questions about the role of automation in religious ritual. Buddhist approaches to AI and to AI-generated sacred texts are often more flexible. In the Hindu context, Gita GPT, trained on the Bhagavad Gita, lets users query it for moral or spiritual guidance. Similarly, there are efforts to build chatbots modeled on Confucian texts and other classical religious or philosophical traditions. Scientific American lists a Confucius chatbot and a Delphic oracle chatbot, suggesting that the ambition to create dialogue-based spiritual guides via LLMs extends beyond monotheistic religions. Beyond chatbots that use religious texts or styles, there is the phenomenon of AI as a subject of worship. The short-lived Way of the Future, founded by engineer Anthony Levandowski, proposed that a sufficiently advanced superintelligent AI could function as a deity or "Godhead," and that it could be honored and aligned with as part of humanity's spiritual trajectory. Although the organization was dissolved in 2021, it remains a provocative example of how deeply entwined questions of technology and divinity can become.

An obvious advantage of using LLMs in religious spaces is the ability of chatbots to provide religious or scriptural guidance on demand, around the clock, to people who might not have access to human clergy. It is tempting to imagine that in contexts where there is a shortage of priests, imams, or other religious leaders due to geography, financial constraints, or institutional decline, chatbots could serve at least as interim guides or first responders for religious questions and queries about rituals. Religious chatbots could serve as educational tools, summarizing long or complex religious writings, translating between languages, or providing historical and theological context in user-friendly formats. MagisteriumAI, for instance, is explicitly designed to synthesize and explain Catholic teaching, drawing on thousands of magisterial and theological documents. These tools are also being used in sermon preparation, in drafting religious reflections, and in offering laypeople insights into theological debates. For some users, chatbots could even provide a nonjudgmental space to explore spiritual doubts or to ask questions they might hesitate to pose to a real person. This can lower barriers to religious engagement and encourage personal reflection. Robotic monks like Robot Monk Xian'er, which respond to emotional or spiritual questions online, are already functioning in this role, answering thousands of questions via social media and digital platforms and drawing on the responses of human monks as source material.

Despite the promise, there are a host of theological, ethical, epistemological, and practical challenges surrounding the use of AI for religious or spiritual purposes. One of the most widely discussed risks is the phenomenon of hallucination: AI chatbots may confidently generate answers that are false, misleading, or unsupported by any legitimate religious text. When a chatbot claiming scriptural backing offers an invented or distorted "verse," or misrepresents theological doctrine, the consequences can be serious. In religions with strong scriptural traditions, skeptics argue that shortcutting theological reflection with AI risks trivializing the discipline of exegesis, undermining the slow, disciplined engagement with religious texts, tradition, and community interpretation. Ilia Delio, a scholar of science and religion, has described religious chatbots as "shortcuts to God" that might undercut the spiritual and moral formation ordinarily gained through deep study and reflection.

Scriptural interpretation rarely happens in a vacuum. Religious traditions have long emphasized that texts must be read within communities, through commentaries, and in light of ethical, historical, and moral reasoning. When chatbots offer decontextualized readings, free-floating quotations, stripped-down ethical summaries, or simplified spiritual advice, they risk detaching scripture from its communal, ritual, and moral grounding. Decontextualization is also how extremist interpretations of religious scriptures have proliferated, often fueling violence. When users turn to chatbots for spiritual counseling, especially in moments of crisis, grief, doubt, or moral distress, there is a danger that the bot will offer advice unsuited to complex emotional or existential situations. Chatbots may misinterpret a user's context, fail to detect self-harm or suicidal ideation, or reinforce harmful beliefs. Studies of chatbots suggest that reliance on them might isolate individuals, reduce the impetus to seek human community, or produce a false sense of closure or resolution.

Most religious traditions afford spiritual authority to human figures such as priests, imams, monks, rabbis, or gurus, who are believed to possess moral intention, ritual legitimacy, and relational authority. The idea that a machine could offer sacraments, perform confession, or lead a ritual challenges many religious understandings of what it means to be a religious leader. In Christian theology, for instance, the role of a priest is often understood as divinely ordained or consecrated, something that a non-human agent cannot validly assume. AI priests may thus hollow out the meaning of ritual and community, turning rich practices into mere mechanical recitations. Then there is the question of bias. LLMs are trained on massive bodies of text, often dominated by Western, English-language, or secular sources, which can embed cultural and religious biases. When such models generate religious commentary, they may reproduce stereotypes, misrepresent minority traditions, or privilege interpretive approaches that do not resonate with all faith communities. A recent study observed that Eastern religions like Hinduism and Buddhism are particularly vulnerable to being oversimplified or stereotyped; it also found that Judaism and Islam are sometimes stigmatized in AI-generated responses. Another study observed that LLM outputs often reflect a single, homogeneous religious-cultural profile.

Different religious traditions approach LLMs and AI-powered spiritual tools in distinct ways, informed by their theological assumptions about texts, ritual authority, the nature of mind or consciousness, and the relationship between humans and the divine. Within Christianity, responses vary widely, often along denominational lines or theological commitments. Some see AI chatbots as helpful adjuncts to pastoral work: tools for sermon preparation, catechesis, or religious education. For example, proponents of MagisteriumAI argue that it can deepen lay understanding of complex doctrinal teachings, support theological reflection, and assist clergy. At the same time, many theologians urge caution, arguing that AI-mediated engagement with scripture cannot replace the embodied practice of prayer, ritual, and communal worship, nor the interpretive journey that faith has traditionally involved. Figures such as Ilia Delio have warned that chatbots may short-circuit spiritual formation, leading users to skip the difficult but necessary processes of grappling with doubt, tradition, and moral transformation. Islamic responses to religious chatbots are also varied. On the one hand, tools like QuranGPT or Ansari Chat have been embraced by some as ways to interact with Quranic teachings, especially for Muslims without regular access to scholars or who are exploring religion on their own terms. On the other hand, critics worry that AI explanations could misinterpret tafsir (Quranic exegesis), fail to account for complex jurisprudential debates (fiqh), or propagate misleading readings that ignore historical, linguistic, and theological nuance.

Buddhist and East Asian religious traditions, with their different orientations toward scriptural authority, ritual practice, and textual innovation, often approach AI chatbots and robotic priests with greater openness. The robot monk Xian'er, described above, is not a replacement for human monks but a digital emissary, aimed especially at younger or internet-savvy audiences. The Xeno Sutra, a Buddhist scripture generated by an LLM, highlights a more speculative angle: if AI can generate texts that sound like scripture, invoking emptiness, paradox, and karmic reflection, might some readers treat them as spiritual or contemplative prompts? The author suggests that Buddhist sensibilities, especially in Mahayana traditions, may be more open to such creative textual experiments, though with warnings about user vulnerability, misinterpretation, and detachment from moral practice. Some also argue that since robots lack genuine volition or moral agency, they cannot generate karma in the Buddhist sense; they can act only under ethical heteronomy, never as full moral agents. This suggests limits on what it means for an AI to "teach" moral or spiritual lessons.

From a design perspective, religious chatbots should flag uncertainty when interpretive questions fall beyond their competence or when multiple legitimate perspectives exist. Rather than offering definitive pronouncements, a chatbot might say, for example, "There are different scholarly interpretations on this question" or "I am not sure how to interpret this passage; here are several options." It might explicitly encourage users to consult human teachers, clergy, or other knowledgeable persons, especially for questions involving moral crises or doctrinal conflict. This kind of epistemic modesty can help mitigate the risk of users treating AI responses as infallible. Designers of religious chatbots should strive to include a wide variety of interpretive traditions, theological voices, languages, and cultural perspectives in their training data, or at least provide users with disclaimers about limitations and biases. Ethical design might also mean avoiding the imposition of apolitical or secular moral frameworks onto religious reasoning, and being sensitive to local ritual and moral contexts. Chatbots intended for spiritual or pastoral engagement should come with built-in safeguards for crisis situations; at minimum, the system should include disclaimers that it is not a licensed counselor or clergy person, and that users in distress should seek help from human practitioners. Religious chatbots operate at the intersection of technology, theology, and community life. As such, their oversight should ideally include both technical and religious expertise: theologians, ethicists, clergy, community representatives, and AI ethicists should all have a role in reviewing, auditing, and updating these systems. This might involve advisory boards, community testing, periodic review of chatbot responses, and mechanisms for user feedback and correction.
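To make these guidelines concrete, here is a minimal sketch of what such safeguards might look like in code. It is an illustration, not a production design: the names (generate_answer, respond), the keyword lists, and the routing logic are all hypothetical, and a real system would rely on vetted classifiers and community-reviewed topic lists rather than simple keyword matching.

```python
# Illustrative sketch of guardrails for a religious chatbot.
# All names and lists below are hypothetical placeholders.

DISCLAIMER = (
    "Note: I am an AI tool, not clergy or a licensed counselor. "
    "For important questions, please consult a human teacher."
)

CRISIS_TERMS = {"suicide", "self-harm", "want to die"}             # placeholder list
CONTESTED_TOPICS = {"divorce", "apostasy", "interfaith marriage"}  # placeholder list

def generate_answer(question: str) -> str:
    """Stand-in for a call to the underlying language model."""
    return f"[model-generated answer to: {question}]"

def respond(question: str) -> str:
    q = question.lower()

    # 1. Crisis safeguard: never hand distress queries to the model alone.
    if any(term in q for term in CRISIS_TERMS):
        return ("It sounds like you may be in distress. Please reach out "
                "to a trusted person, clergy member, or crisis line now.")

    answer = generate_answer(question)

    # 2. Epistemic modesty: flag topics with multiple legitimate readings.
    if any(topic in q for topic in CONTESTED_TOPICS):
        answer = ("There are different scholarly interpretations on this "
                  "question. One perspective: " + answer +
                  " Please consult a knowledgeable teacher for others.")

    # 3. Standing disclaimer on every reply.
    return f"{answer}\n\n{DISCLAIMER}"

if __name__ == "__main__":
    print(respond("What does my tradition teach about divorce?"))
```

The key design choice here is ordering: the crisis check runs before the model is ever called, so a distressed user is routed to human help rather than to generated text.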

One particularly vivid example of the problems raised by LLMs comes from my own collaboration with Rabbi Josh Fixler, for whom I built a prototype RabbiBot a few years ago. Using two years' worth of his sermons as training material, I generated a new sermon in his distinctive style, appropriately, on the topic of artificial intelligence. To make things more realistic, his voice was cloned, and the AI-prepared sermon was played aloud to his congregation. While reviewing the draft beforehand, however, Rabbi Fixler noticed a profound quotation ("The highest degree of wisdom is benevolence.") attributed to Moses Maimonides. The words were elegant and theologically resonant, yet on closer inspection, he realized Maimonides had never said them. The fabrication was convincing but entirely invented. This episode highlights the promise and peril of religious LLMs: they can echo the cadence and rhetorical force of tradition while simultaneously introducing fictitious material that risks undermining religious fidelity. Thus, the use of large language models and AI chatbots in religious and spiritual domains is, in many ways, a microcosm of larger debates about artificial intelligence, authority, and human meaning.
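The Maimonides incident also points to one practical mitigation: checking every attributed quotation in a generated draft against a corpus of verified sources before it reaches a congregation. The sketch below assumes a hypothetical VERIFIED_QUOTES corpus and an arbitrary fuzzy-match threshold; the single corpus entry is purely illustrative, and a real pipeline would search full critical editions and multiple translations.

```python
# Rough sketch of attribution-checking for generated sermons.
# The corpus and threshold are illustrative assumptions, not a
# real reference database.
import difflib
import re

# Hypothetical corpus: quotations vetted against authoritative editions.
VERIFIED_QUOTES = {
    "Maimonides": [
        "teach thy tongue to say i do not know and thou shalt progress",
    ],
}

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so comparisons ignore formatting."""
    return re.sub(r"[^a-z ]", "", text.lower()).strip()

def attribution_is_supported(quote: str, author: str,
                             threshold: float = 0.8) -> bool:
    """Return True only if the quote closely matches a verified source."""
    candidates = [normalize(q) for q in VERIFIED_QUOTES.get(author, [])]
    target = normalize(quote)
    return any(
        difflib.SequenceMatcher(None, target, c).ratio() >= threshold
        for c in candidates
    )

# The fabricated line from the RabbiBot draft fails the check:
print(attribution_is_supported(
    "The highest degree of wisdom is benevolence.", "Maimonides"))  # False
```

Even this crude check would have flagged the fabricated line, since nothing resembling it appears among the verified sources; quotations that fail the check could be routed to a human reviewer rather than silently deleted.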

On one hand, religious chatbots offer intriguing possibilities: they can expand access to sacred texts, help people engage with spiritual questions, support theological education, and serve as digital companions for moral reflection. In contexts where religious leadership is scarce or inaccessible, AI may function as a surrogate or supplement, opening new pathways for religious inquiry, especially among younger or internet-native audiences. On the other hand, deploying LLMs as spiritual or pastoral tools raises profound risks. Hallucinations, misinterpretations, decontextualization, loss of interpretive tradition, user vulnerability, and the erosion of communal and ritual practices are real and serious hazards. The core issue is not simply whether chatbots can simulate religious speech or ritual, but whether they can embody religious authority, moral accountability, or spiritual transformation. And if they cannot, if they are fundamentally unmoored from embodied human moral life, then they may end up emptying religious practice of what makes it meaningful. In the end, the core challenge is theological as much as technological: Can a tool built on statistical patterns and textual prediction ever participate meaningfully in the moral and spiritual journey of a human being? Or will it always remain a mirror, reflecting our questions back to us but incapable of guiding us into transformation? The answer may depend not just on how good the algorithms become, but on how religious traditions and communities choose, or decline, to engage, critique, and integrate these tools into their religious life.