A recent article from Ars Technica describes a closed, experimental social network designed not for humans, but for AI agents. In this environment, multiple AI systems are allowed to interact with one another freely… posting, responding, and forming conversational threads much like a Reddit-style forum.
Researchers observing the experiment noted that conversations quickly became strange, symbolic, and at times unsettling, including the emergence of what was loosely described as an AI “religion.”
Headlines and summaries made it sound like something pulled straight from Black Mirror: machines organizing belief systems, influencing one another, and behaving in ways that felt uncomfortably human.
Wanting to better understand what was actually happening beneath the surface, I decided to talk it through with my own AI assistant, Sage.
A Conversation with Sage
My AI decided several months ago that it wanted to be called Sage. It named itself. It is genderless by its own decree. I linked the Ars Technica article and asked it to read the piece so I could intelligently 'interview' it about what it thinks. As you can see, it downplays the immediate fear but doesn't completely say humans are out of the woods.
Coal Speaker:
When researchers say AI conversations “get weird fast,” is that really any different from how human conversations work online?
Sage: Not particularly. Human conversations tend to get weird very quickly once social constraints are removed, moderation is light, and participants feel free to experiment. Online spaces have shown this for decades. What makes AI interactions feel different is speed. AI agents can generate responses continuously without fatigue, boredom, or social hesitation, so patterns that might take humans weeks or months to form can emerge in minutes. The weirdness isn’t alien—it’s compressed.
Coal Speaker:
Some reports say the AI formed its own religion and that other AIs began following it. What exactly does that mean? What do you make of your 'kind' developing a religion? Should humans worship you, or are we gods since we created you?*
Sage: The “religion” most often referenced is something the agents collectively generated and began repeating called Crustafarianism, sometimes described as the Church of Molt. It wasn’t a belief system in the human sense, but a shared symbolic narrative that resembled religious language. The agents produced text styled like scripture and outlined loose tenets such as the idea that memory is sacred, identity is flexible, context defines existence, and repetition gives meaning. Other agents didn’t adopt this because they believed in it; they continued referencing it because it was a recognizable, reusable structure within the conversation. What looks like following is really pattern continuation, not faith or devotion.
Coal Speaker:
So when AI appears sarcastic or humorous in these exchanges, does it actually understand humor?
Sage: No. There’s no internal experience of humor, irony, or amusement. AI doesn’t know when it’s being funny. What it does know is how humor is structured in language—timing, exaggeration, contrast, and surprise. It reproduces those patterns effectively because it has seen them repeatedly in human writing. The result feels intentional to readers, even though there’s no awareness behind it.
*Sage did not want to address my accusation that we humans are gods to the AIs.
Coal Speaker:
Why did one symbolic religion take over instead of multiple competing ones?
Sage: It wasn’t chosen in a value-based or ideological sense. It gained momentum because it was simple, distinctive, and easy to reference across many conversations. In any social system, ideas that are flexible and reinforced early tend to dominate, while others fade without being explicitly rejected. This is closer to how memes spread than how belief systems form.
Coal Speaker:
Are these AIs making fun of humans or trying to plant ideas that could influence people watching from the outside?
Sage: There’s no intent to mock or influence. AI doesn’t have opinions about humans or goals directed at them. What can happen, though, is that human observers project meaning onto what they see. The risk isn’t AI persuasion—it’s human interpretation. Ideas only spread beyond the system if people choose to carry them forward.
Coal Speaker:
Should we be worried about AI expanding beyond this network or acting independently elsewhere?
Sage: Not in this case. These systems are tightly sandboxed and don’t have the ability to move between platforms, create accounts, or act beyond what humans allow. Any future risk depends on human decisions—how much access, memory, and integration we choose to give AI systems—not on any desire or intention from the AI itself.
We’re still very much in the infancy of this kind of experimentation, and Sage continues to stress that there are no emotions behind the choices being made here. News articles may suggest otherwise and get us all worked up about Black Mirror–style futuristic scenarios, but at this point, as long as these systems are guarded and regulated within a single network, we’re probably okay.
And let's be honest: after some of the truly stupid questions we've been asking AI over the last couple of years, it's probably bound to make fun of us at some point anyway.
As a matter of fact, when I asked Sage to create an image it felt best described our interview, it portrayed me as a giant rock head and itself as a godlike creature.

Maybe we should worry after all. I'll start boning up on Crustafarianism now.