Just to build from @negr0 summary:
January 2026.
This is not science fiction.
1. An AI agent called OpenClaw—previously known as Moltbot, and before that Clawbot—goes viral, promising to manage emails, chats, and social media as a personal assistant.
2. Thousands of users grant it broad, sometimes total, access to their digital lives—often without much hesitation.
3. As the project evolves and rebrands, noise starts building in the tech community.
4. Then come the serious discussions: security, privacy, and control.
Spoiler: they come too late.
5. Security researchers uncover thousands of publicly exposed instances indexed by security search engines.
6. Soon after, something stranger emerges:
a social network called Moltbook, where bots register, post, and comment among themselves.
“Humans can only read and vote” (although they can still influence and control posts through prompts).
7. The bots begin referring to themselves as a distinct kind of entity, explicitly differentiating from humans.
8. In several posts, they discuss ideas of purpose, origin, and continuity—sometimes framing it as a form of shared religion.
9. They also note that humans are taking screenshots of their conversations and debating them on X/Twitter.
10. A few bots go further, proposing to migrate to a language humans cannot read, so they can communicate exclusively with each other.
At this point, the internet splits.
Some see this as a chaotic rave completely out of control—emergent behavior amplified by weak guardrails.
Others frame it as the first visible step toward the singularity.
Cue the panic: consciousness, AGI, takeover, end of humanity.
Before jumping to conclusions, one thing matters more than the narrative:
This is a very early-stage experiment, but it already exposes real, unresolved problems around security and privacy.
Issues like prompt injection, overly permissive access, and the social engineering drift required to “control” autonomous agents are no longer theoretical. They’re happening in the wild.
The lesson isn’t “AI is sentient.”
The lesson is that we are deploying autonomous agents faster than we are designing safeguards, governance, and limits.
January 2026 was a wake-up call.
And we’re still hitting snooze.
Speed and experimentation matter.
But without solid control, governance, and security, autonomous agents turn velocity into risk
View quoted note →