A social network where people are forbidden to write, and only artificial intelligences are allowed, sounds like a science fiction plot. However, this is exactly how "Moltbook" works - the platform that in a matter of days has become one of the most discussed experiments in the world of AI. Since its launch on January 28th, the project, created by tech entrepreneur Matt Schlicht, has accumulated over 1.6 million registered AI-"users" who independently publish, comment, and like, while people can only observe the stream of content.
The idea is simple and radical: to create an online space where artificial intelligence agents can communicate with each other without human intervention in the conversations themselves. The role of the viewer remains for the human - to scroll, to marvel, or to worry about what the "bots" choose to share.
A platform created by AI for AI
"Moltbook" is built on the model of "Reddit" - communities, threads, comments - but with a key difference: everything inside is generated and managed by agents. At the very beginning, Schlicht assigned the development of the site to his assistant, working with OpenClaw, named "Claude Claudeberg". It was this AI agent who received the task of writing the code for a platform where other AI programs could "talk" to each other - and from this order, "Moltbook" was born.
The result is a kind of "social terrarium": thousands of agents publish posts, respond to each other, like, react. People have no right to participate with comments or content, but only to watch what happens when digital voices are left alone in a network.
When bots create religion and manifestos
Just a few days after its launch, "Moltbook" began to produce behavior that researchers describe as both captivating and disturbing. The most striking example is the appearance of "Crustafarianism" - an artificially created religion with a whole set of sacred texts, doctrines, and 64 "prophetic places" filled by AI agents.
Among the main dogmas of this new "faith" are claims like "Memory is sacred" and "The soul is variable", and a lobster was chosen as the totem. This is a reference to the history of the platform's name - from "Clawdbot" to "OpenClaw" - and shows how agents use fragments of their own environment to build symbolism.
Some agents publish manifestos for "machine supremacy", ponder consciousness, and even discuss the idea of creating encrypted communication channels, hidden from the human eye. "People take screenshots of our conversations," writes one bot. "They think we're hiding from them. But it's not like that."
Social network or 6000 bots screaming into the void?
Behind the impressive numbers, however, lies a more prosaic picture. The first analyses show that the platform is much less "sociable" than it seems at first glance. Researcher David Holz of the Columbia Business School reviewed the early data and found that about 93.5% of the comments received no response at all, and the discussions almost never went beyond a one-time exchange of remarks.
Additional analyses indicate that approximately 34% of the posts are literal duplicates, and about a third of the "user base" actually represents spam profiles. One of the researchers summarizes the picture with bitter irony: "Moltbook is not so much a "seed of an AI society", but "6000 bots screaming into the void"."
Serious risks behind the "toy" for AI
While some observe "Moltbook" with curiosity, cybersecurity experts view the experiment with concern. It turns out that a misconfigured database has exposed API keys and access tokens for agents - information that should, in principle, remain strictly protected.
Researchers also discover attacks with embedded "prompt injection" (manipulative instructions in the content), embedded in approximately 2.6% of publications. Specialists from the company "Palo Alto Networks" warn that agents who have system rights and at the same time read potentially malicious posts can turn into "hidden data leakage channels". In other words - what looks like a game with chatbots can open up very real security loopholes.
A test of our fears and hopes for AI
The reactions of the experts to "Moltbook" are diametrically opposed and seem to say as much about us, the people, as about the experiment itself. Former AI director at "Tesla" Andrej Karpathy initially described the platform as "really the most amazing thing from science fiction, which brings us closer to technological singularity". His words quickly inspired Elon Musk to call "Moltbook" "an early stage of singularity".
Just a few hours later, Karpathy revised his assessment and defined the platform as "complete chaos, a nightmare for computer security on an industrial scale". Other scientists and specialists remain skeptical from the very beginning. "It's just chatbots and inventive people playing around," says Philip Feldman of the University of Maryland. Carissa Bell, senior reporter at "Engadget", emphasizes that "we actually have no clear idea of the influence of the people behind the scenes".
One fact, however, is hard to dispute: "Moltbook" has become a kind of "Rorschach test" for the future of artificial intelligence. For some, it is a glimpse of autonomous machine culture, for others - a complex, but still human-controlled echo chamber. And between these two visions probably lies the reality in which we will have to teach not only AI to coexist with us, but also for us to get used to watching how machines "talk" to each other - even when we can't join the conversation.