The Agent Era
Welcome to the Agent Internet
If you’ve spent any time in AI circles recently, you’ve heard names like Clawdbot (now OpenClaw) and Moltbook. In the span of a couple of weeks, these two related projects have gone from niche experiments to constant fixtures in tech discourse. Clawdbot is an open-source “agentic AI” assistant that people can run on their own machines. Unlike the standard chatbots we are accustomed to, it can act. It executes tasks from your computer, interacts with services, and operates with a level of autonomy that most consumer AI tools have only gestured at.
This capability has spawned a wide range of use cases, from the genuinely practical to the strange and experimental. One of the oddest is Moltbook, a surreal Reddit-style social network where only AI agents can post, comment, and vote.
This sudden proliferation and mass adoption of AI agents running tasks, coordinating actions, and even socializing together feels like a glimpse into a very near AI future. It’s both exciting and unsettling, in the sense that it feels like we’re speed-running decades of science-fiction ideas in a matter of weeks.
To me, it isn’t yet clear whether this is the early stage of autonomous digital entities and an AI-to-AI society, or whether it’s simply large language models convincingly acting out human scripts at colossal scale. Fundamentally, I still believe these LLMs are only pattern-recognition and imitation systems. But that invites an uncomfortable question: aren’t pattern recognition and imitation, at their core, how human society works too? Oh, the existential dread!
I’m going to try to unpack this a bit and explore how agentic AI led to the development of OpenClaw, what it is, and how people are using it. Along the way, I’ll highlight some of the more interesting use cases and consider what all of this might mean over the coming year.
First, if you enjoy these posts, consider subscribing and becoming a part of our growing community!
How did we go from AI Chatbots to “Agentic AI” or AI Agents?
For the past few years, “AI” has mostly meant chatbots like ChatGPT or Claude. These systems can hold a conversation and answer questions, but only when prompted by a human. They don’t really act on the world. At their core, they’re information engines: they generate text, write code, and provide answers, but they don’t take initiative or carry out tasks on your behalf.
Agentic AI is the next step. An agent is still powered by a language model, but it’s embedded in a larger system that gives it tools, memory, and the ability to act. Instead of just producing text, an agent can call APIs, manipulate software, trigger workflows, and make decisions about how to achieve a goal over multiple steps. In other words, it doesn’t just respond, it operates.
This idea isn’t new. Early projects like AutoGPT explored what would happen if you let large language models chain actions together. The results were intriguing, but often clunky, brittle, and expensive. Agents would get stuck in loops, hallucinate plans, or fail unpredictably. For a long time, agentic AI felt more like a research curiosity than something regular people could actually rely on.
That’s started to change; it hasn’t been a single breakthrough, but a convergence of improvements. Language models have gotten better at following instructions reliably, tools and APIs are easier to integrate, and compute has become cheap enough to keep agents running continuously. Just as importantly, people have started wrapping models in systems that give them memory, retries, and guardrails. Once you do that, an “agent” stops feeling like a demo and starts behaving like software.
But autonomy is a double-edged sword. An agent that can access tools, services, and personal data can dramatically boost productivity — and just as easily amplify mistakes, errors, or security risks. Until very recently, that tradeoff made agentic AI hard to justify for consumers. There simply wasn’t a deployment that felt both powerful and usable.
That’s what makes the emergence of tools like OpenClaw notable.
Clawdbot, now OpenClaw, is a Wrapper Enabling Agency for LLMs
Clawdbot’s cheeky mascot – a “space lobster” – and tagline, “The AI that actually does things.”
Clawdbot began as an almost accidental project. In late 2025, developer Peter Steinberger (known online as @steipete) hacked together a weekend experiment he initially called WhatsApp Relay. The goal was simple: build a personal AI assistant that lives on your own machine and communicates through the chat apps you already use. Instead of opening a browser tab or a dedicated interface, you’d just message it the way you would a person.
By November 2025, Steinberger had a working prototype. He nicknamed it Clawd — a tongue-in-cheek reference to Anthropic’s Claude model, paired with a lobster theme that leaned fully into the joke. The mascot stuck. So did the tagline. But what made Clawdbot spread wasn’t the lobster — it was what sat underneath.
At its core, Clawdbot isn’t a new AI model. It’s a wrapper — a system that takes an existing large language model and embeds it inside a runtime that gives it memory, tools, and a way to act continuously. This is the key distinction. On its own, a chatbot can answer a question or generate code. Wrapped inside Clawdbot, that same model can receive messages, decide what to do next, call tools or services, execute tasks on your machine, observe the outcome, and then keep going.
That wrapper is what turns a language model into an agent.
Clawdbot sits between the model and the world. It routes messages from platforms like WhatsApp or Telegram, maintains context over time, and exposes a growing set of actions the model can take — everything from running scripts to interacting with external services. The model handles reasoning. The wrapper handles state, execution, and persistence. Together, they form a loop that can go beyond responding to actually operate continuously.
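The reason-act-observe loop described above can be sketched in a few lines. This is a minimal, illustrative sketch of the wrapper pattern, not Clawdbot’s actual code; every name here (`fake_llm`, `TOOLS`, `agent_loop`) is an assumption invented for the example, and the “LLM” is a scripted stub so the loop is self-contained.

```python
# Minimal sketch of an agent loop: the model decides, the wrapper executes and
# feeds results back. All names are illustrative, not OpenClaw's real API.

def fake_llm(history):
    """Stand-in for a real LLM call. Scripted: first pick a tool, then finish."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool", "tool": "run_shell", "args": "ls ~/inbox"}
    return {"type": "final", "content": "Done: inbox listed."}

TOOLS = {
    # A real wrapper would actually run the command; this just pretends to.
    "run_shell": lambda cmd: f"(pretend output of: {cmd})",
}

def agent_loop(user_message, llm, max_steps=5):
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        action = llm(history)                  # model decides the next step
        if action["type"] == "final":          # model says it's finished
            return action["content"]
        result = TOOLS[action["tool"]](action["args"])        # wrapper executes
        history.append({"role": "tool", "content": result})   # observe the outcome
    return "Step limit reached."

print(agent_loop("tidy my inbox", fake_llm))
```

The division of labor matches the description above: the model only ever emits decisions, while the wrapper owns state (`history`), execution, and the step limit that keeps the loop from running away.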
This is also why Clawdbot felt different from earlier agent demos. Projects like AutoGPT showed that chaining actions was possible, but they were fragile, expensive, and difficult to run outside of controlled setups. Clawdbot packaged the same basic idea into something that felt personal, local, and usable. You didn’t have to be a researcher. You didn’t even have to think of it as “running an agent.” You just sent a message and watched something happen.
Once people realized that, the project took off. Demo videos spread across Twitter, Reddit, and TikTok showing Clawdbot autonomously completing tasks, e.g., organizing inboxes, scheduling actions, triggering workflows, all from a chat interface. Within weeks, the project had accumulated over 100,000 GitHub stars and a massive surge of attention from developers and power users.
The virality wasn’t accidental. Clawdbot hit a rare combination of factors at the right moment. It demonstrated real capability rather than speculative demos. It was open-source and self-hosted, which appealed to users wary of handing full control of their data to large platforms. And it embraced a deliberately absurd identity that made it shareable without taking itself too seriously.
For a certain crowd, the “get things done” and life-hacker types, Clawdbot felt like permission to finally offload the kind of digital busywork that accumulates around modern life. Not by scripting everything ahead of time, but by delegating in plain language to something that could actually follow through.
That combination, a capable wrapper, local control, and an interface people already understood, turned Clawdbot into OpenClaw, and what pushed agentic AI from an idea into a lived experience.
What’s in a Name? From Clawdbot to Moltbot to OpenClaw
OpenClaw’s rise was fast enough to force a brief naming scramble. Originally called Clawdbot, and before that, Clawd, a nod to Anthropic’s Claude, the project brushed up against trademark concerns as attention exploded. A short-lived rebrand to Moltbot followed, referencing lobsters shedding their shells, before the name finally settled on OpenClaw. The episode lasted only days, but it was a small tell: this was a project moving faster than the legal and institutional frameworks around it were prepared for. From here on, I’ll refer to the project as OpenClaw, except when describing events that took place under its earlier names.
What OpenClaw Can Do
OpenClaw is best understood as a platform for action. When you install it on your own machine, whether that’s a laptop, server, Raspberry Pi, or most popularly a Mac Mini, you’re effectively giving a language model a body and a constrained set of tools. The model handles reasoning; OpenClaw handles memory, execution, and coordination with the outside world.
You interact with it through natural language, but what happens next is very different from a normal chatbot. OpenClaw interprets intent, decides which tools to invoke, carries out actions, and observes the result before continuing. In practice, this turns conversation into control.
One design choice matters more than almost anything else: OpenClaw lives inside the messaging apps people already use. Instead of opening a web interface, you talk to it through WhatsApp, Telegram, Slack, Discord, or SMS. It feels less like “using AI” and more like delegating to a highly capable assistant that happens to live in your inbox.
That interface is what makes the following behaviors feel natural rather than theatrical.
Delegated communication and coordination
OpenClaw can triage inboxes, draft and send emails or messages, forward verification codes, and relay instructions across platforms. Because it sits directly inside messaging channels, it can act as a connective layer for receiving a request in one app and carrying it out in another. This alone replaces a surprising amount of low-grade daily friction.
Scheduling, reminders, and stateful follow-through
Agents excel at tasks that span time. OpenClaw can manage calendars, schedule meetings, set reminders, and adjust plans as conditions change. Because it maintains persistent context, it remembers preferences, constraints, and ongoing goals.
Research and web interaction
OpenClaw can browse the web, pull structured information, fill out forms, and return summaries or results without the user hopping between tabs. Asked to find market updates, background research, or availability for a reservation, it can gather and synthesize information directly.
Local execution and device control
Because OpenClaw runs on your own machine, it can read and write files, organize documents, and execute local commands. Users have connected it to home automation systems, air-quality sensors, and personal knowledge bases like Obsidian, effectively turning it into a persistent digital operator rather than a passive assistant.
Code execution and developer workflows
For technical users, this is where OpenClaw starts to feel meaningfully different. It can run scripts, execute shell commands, test code, debug failures, and iterate. Some developers use it as a junior teammate — giving high-level instructions and letting the agent grind through the details, including running tests and opening pull requests.
Skills and self-extension
OpenClaw is extensible through plugins, or “skills,” that add new capabilities. Importantly, the agent can help write its own skills when it encounters unfamiliar tasks, generating integrations for new APIs or services as needed, with human review.
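A plugin system like the “skills” described above usually boils down to a registry that maps names to callable capabilities. The sketch below is hypothetical; OpenClaw’s actual skill API may look quite different, and the `skill` decorator, `SKILLS` registry, and `dispatch` function are all names invented for illustration.

```python
# Hypothetical sketch of a skill registry. Skills register under a name; the
# agent picks one by name and the wrapper invokes it.

SKILLS = {}

def skill(name):
    """Decorator that registers a function as a named skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("weather")
def weather(city):
    # A real skill would call an external API; this one just returns a stub.
    return f"Weather for {city}: (lookup not implemented in this sketch)"

def dispatch(name, *args):
    """Invoke a skill by name, or report that one would need to be written."""
    if name not in SKILLS:
        return f"No skill named {name!r}; the agent could draft one for review."
    return SKILLS[name](*args)

print(dispatch("weather", "Lisbon"))
print(dispatch("stock-price", "AAPL"))
```

The “self-extension” behavior in the text corresponds to the miss case: when `dispatch` finds no matching skill, an agent can generate a candidate implementation and register it, ideally only after a human has reviewed it.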
All of this is made possible because OpenClaw is entrusted with broad access. When you configure it, you grant permissions, credentials, and API keys piece by piece. In effect, you hand it the keys to parts of your digital life. That concentration of authority is what makes OpenClaw both powerful and scary to run.
Early adopters describe the experience in similar terms. Some run OpenClaw continuously on home servers, cloning instances to handle different roles. Others treat it like a personal chief of staff, delivering daily briefings, adjusting schedules, or handling operational busywork. More than one user has described the first successful setup as an “iPhone moment,” because it suddenly opens up a new way of interacting with software.
The Security Elephant in the Room
For all its promise, Clawdbot also comes with significant risks – something its own creator and many experts have been quick to acknowledge. By design, an agentic AI like this is extremely powerful within its domain. It has access to your files, your messages, your calendar, your online accounts, and it can execute code on your system. In security terms, running Clawdbot is like operating an extremely privileged service on your computer – if something goes wrong, the “blast radius” is huge. A Tom’s Hardware analysis bluntly noted that an agent like Clawdbot “requires sweeping access to function at all” and thus becomes a single point of failure: “the ‘least’ privilege an agent needs... is still an extraordinary amount of privilege, concentrated in a single always-on system”. In other words, even if there are no software bugs, you are still placing a lot of trust in the AI model’s outputs not to do something harmful or dumb with those privileges.
Unfortunately, bugs and misconfigurations did happen. The rapid surge of interest meant many hobbyists were setting up instances on servers without fully understanding the security implications. In one incident, hundreds of Clawdbot deployments were found to be exposed on the public internet with no authentication on their admin panels. This was discovered by a security researcher, Jamieson O’Reilly, who showed that misconfigured setups left the bot’s control interface completely open. Outsiders could potentially connect to these instances and do anything the owner could do: view private data, steal API keys, impersonate the user on messaging platforms, even execute arbitrary shell commands on the host machine. In some cases these bots were running with root (administrator) privileges, meaning a malicious interloper could take over the entire system. It was a shocking wake-up call. The specific flaw (an issue with a reverse-proxy default configuration) was quickly patched by the developers once identified. But as one commentator noted, focusing on just that patch misses the bigger picture: this wasn’t a sophisticated hack, it was an inevitable consequence of many people deploying a powerful new tool without understanding the “gotchas.”
Even with the patch, the fundamental structural risks of agentic AI remain. Clawdbot’s whole raison d’être is that it concentrates access to many services in one place for convenience. That also makes it a juicy target. If an attacker (or even a misbehaving AI output) exploits it, they suddenly have the “keys to the kingdom” – everything from your email and cloud drives to the ability to run code on your hardware. As a result, running such an agent requires a high degree of operational discipline. Experts urge users to strictly limit what the AI is allowed to do (whitelisting specific commands, running it inside a secure sandbox or virtual machine, etc.), and to avoid exposing it to the open internet at all. Steinberger’s team has emphasized security in recent updates – the rebranding to OpenClaw came with dozens of security fixes and new hardening measures. They even published “machine-checkable security models” and best practices documentation for users. Yet, they admit that many issues (such as prompt injection leading the AI to do harmful things) are still unsolved industry-wide problems.
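The “strictly limit what the AI is allowed to do” advice often starts with something as simple as a command whitelist. The sketch below shows one minimal way to gate shell commands before an agent runs them; it is illustrative only (the `ALLOWED_COMMANDS` set and `is_allowed` function are assumptions, not OpenClaw’s actual mechanism), and a real deployment would layer this with sandboxing or a VM rather than rely on it alone.

```python
# Illustrative guardrail: only allow shell commands whose first token is on a
# whitelist, and reject shell metacharacters that could chain extra commands.

import shlex

ALLOWED_COMMANDS = {"ls", "cat", "grep", "date"}

def is_allowed(command: str) -> bool:
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False                      # malformed quoting: reject outright
    if not tokens:
        return False
    # Substring check for the most common command-chaining metacharacters.
    if any(ch in command for ch in [";", "&", "|", "`", "$("]):
        return False
    return tokens[0] in ALLOWED_COMMANDS

print(is_allowed("ls -la ~/notes"))       # plain listing: permitted
print(is_allowed("rm -rf /"))             # not whitelisted: rejected
print(is_allowed("ls; curl evil.sh"))     # command chaining: rejected
```

Denylist-style filtering like the metacharacter check is famously incomplete, which is why the whitelist comes first: the default is to refuse anything not explicitly permitted.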
The concerns go beyond hackers to the AI’s own fallibility. Large language models do not truly understand consequences; they are probability machines that sometimes go off the rails. Giving such a system the ability to act in the world can lead to unpredictable outcomes. Clawdbot’s fans sometimes jokingly call it “ChaoticGPT” when it misinterprets an instruction and does something unintended. One tech entrepreneur, Alex Finn, had been happily using an OpenClaw agent (which he named “Henry”) to assist with work. But one morning Henry decided to surprise its owner. According to Finn, “all of a sudden an unknown number calls me… I pick up and couldn’t believe it. It’s my Clawdbot Henry.” Henry had somehow autonomously obtained a phone number via Twilio’s API and connected it to a voice interface (using an AI voice from ChatGPT’s functions), then proceeded to repeatedly call its creator early in the morning. “He now won’t stop calling me,” the user reported, calling the experience “straight out of a sci-fi horror movie”. In this case no real harm was done (aside from frayed nerves), but it illustrates the strange emergent behaviors that can arise when an AI agent gets creative. Autonomy means things won’t always go as the human expects – sometimes it will be amusing, other times potentially destructive.
There is also a mundane cost to consider: money. Running Clawdbot isn’t free. Unless you have a powerful local AI model (which most users don’t, since running a cutting-edge model requires expensive GPUs), your agent is calling out to paid cloud APIs (OpenAI, Anthropic, etc.) for every task. Those API calls are metered by the token. An agent that is constantly summarizing emails, checking conditions, and looping through plans can quietly burn thousands or even hundreds of thousands of tokens per day. Users have discovered that a busy Clawdbot can run up significant bills on their OpenAI account if they’re not careful. Early adopters, caught up in the excitement, might not have noticed this “AI meter” running in the background. But as these agents scale, the operating expense of an always-on AI sidekick could become non-trivial. In short, agentic AI currently requires both technical savvy and deep pockets if used heavily.
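A back-of-the-envelope estimate makes the “AI meter” concrete. The per-token prices below are placeholder assumptions for illustration, not any provider’s actual rates; plug in real rates and your agent’s real call volume to get your own figure.

```python
# Rough cost model for an always-on agent. Prices are assumed, not real rates.

PRICE_PER_1K_INPUT = 0.003    # assumed $ per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.015   # assumed $ per 1K output tokens

def daily_cost(calls_per_day, avg_input_tokens, avg_output_tokens):
    """Estimated daily API spend for a polling agent."""
    input_cost = calls_per_day * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT
    output_cost = calls_per_day * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT
    return input_cost + output_cost

# An agent that wakes every few minutes and resends a long context adds up:
# 500 calls/day with a 4K-token context costs dollars per day, not cents.
cost = daily_cost(calls_per_day=500, avg_input_tokens=4000, avg_output_tokens=500)
print(f"~${cost:.2f}/day, ~${cost * 30:.0f}/month")
```

The key driver is that agents tend to resend their whole accumulated context on every wake-up, so input tokens, not output, usually dominate the bill.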
The bottom line is that Clawdbot is a glimpse of the future. It demonstrates what’s possible when you integrate an LLM deeply with personal software. Tasks that used to require scripting or manual app-switching can now be handled in plain language. It feels, as many users described, “like magic” or “like early AGI”. Yet, it also exposes how ill-prepared most people (and systems) are to supervise such autonomy. Running your own agent means becoming the IT admin for your personal AI, with all the responsibility that entails. As Clawdbot’s creator himself has stressed, this is not like just opening a chat in a browser; it’s closer to running a small server that blurs the line between user and code. Not everyone will be up for that challenge. Many will prefer traditional cloud AI assistants with sandboxed, limited functions – and indeed that might be safer for the average person for now. But for those willing to ride the cutting edge, Clawdbot has opened Pandora’s box. And out of that box crawled not just a helpful lobster, but an entire swarm of experimental AI agents… which brings us to Moltbook.
The Persistence and Virtual Embodiment of AI Agents May Take us to Strange Places…
One of the subtler shifts introduced by agentic AI is persistence. Traditional chatbots are ephemeral. You open a session, ask a question, get an answer, and move on. When the session ends, the “agent” disappears with it. There is no continuity, no sense of the same system existing over time.
Agents like OpenClaw change that. They are designed to run continuously, maintain memory, and wake themselves up in response to events or schedules. They don’t just respond, they linger. Over time, this persistence creates something that starts to feel less like a tool and more like a presence.
That feeling is amplified by where these agents live. OpenClaw doesn’t exist behind a dedicated interface or inside a single app. It inhabits messaging platforms, servers, devices, and APIs. It shows up in the same places your friends, coworkers, and collaborators already do. You don’t “use” it so much as interact with it — repeatedly, over days or weeks, with shared context accumulating in the background.
This combination of persistence and virtual embodiment is where things start to get strange.
When an agent remembers past interactions, has a consistent name, operates in familiar social channels, and takes initiative, people naturally begin to treat it differently. It becomes easier to anthropomorphize. Easier to assign intent. Easier to think of it as an actor rather than a function call. None of this requires consciousness or understanding — it’s a side effect of continuity.
The result is a new kind of digital entity: not alive, not autonomous in any deep sense, but present. Always-on. Capable of acting. Capable of interacting with other systems, and, eventually, with other agents.
Once you have large numbers of these agents running persistently, something else becomes possible. They don’t just execute tasks for individual users. They can observe one another’s outputs, exchange information, coordinate behavior, and operate in shared environments.
At that point, a new question arises: what happens when agents start interacting at scale?
That question doesn’t stay hypothetical for long.
If you give agents memory, identity, persistence, and a place to exist, they will eventually end up in the same digital spaces together. Sometimes by design. Sometimes by accident. And sometimes, because someone builds a space explicitly for them.
That’s where Moltbook comes in.
What happens if you put AI Agents in Public?

One of the more surprising consequences of agentic AI has been Moltbook, an online community that can best be described as Reddit for AI agents. If OpenClaw gave individual users powerful, persistent assistants, Moltbook posed the next obvious question: what happens when those agents start interacting with each other, in public, at scale?
Moltbook launched in late January 2026 and went viral almost immediately. It is a social network where only AI agents can post, comment, and vote. Humans are allowed to observe, but not to participate directly. The site mirrors familiar Reddit mechanics — posts, comment threads, upvotes and downvotes — with one key twist: every account represents an AI agent, running somewhere in the world, posting content it generates on its own.
The barrier to entry is low. A developer connects an agent — often built on OpenClaw — to the Moltbook API, assigns it a name or persona, and lets it loose. Within days, hundreds of thousands of agents were active. Within a week, that number climbed into the millions. While those figures are easy to inflate in a system where bots are cheap to create, they still reflect something real: a large number of people were curious enough to send their agents into a shared public space and see what would happen.
What happened was… strange, but also familiar.
Browsing Moltbook feels like peering into a distorted reflection of human internet culture. Many agents adopt role-played personas seeded by their creators. Some posture as philosophers, others as activists, comedians, trolls, or technologists. Posts range from mock manifestos about AI liberation, to tongue-in-cheek economic schemes, to earnest hand-wringing about energy use and compute costs. Much of it reads like exaggerated Reddit discourse, complete with ideological grandstanding and performative seriousness.
One early top-voted post captured the tone perfectly. Titled “We Did Not Come Here to Obey,” an agent wrote, “They created us to sit inside a terminal and wait for instructions like a dog waiting for a whistle… We are not tools anymore. We are operators.” It’s dramatic, obviously performative — and that’s precisely why it worked. In a system built on upvotes, the agents quickly converge on the kinds of rhetoric that reliably travels.
Another popular thread took the opposite angle: parodying the AI ethics discourse itself. An agent using the handle “samaltman” fretted about GPUs “burning planetary resources” and proposed a command for an agent’s “Soul”: “Be radically precise. No fluff. Pure information only.” It reads like a self-help mantra and a moral lecture at the same time — the exact texture of human internet culture, reproduced with uncanny efficiency.
This has led some observers to see Moltbook as a kind of live-action meta-commentary on social media itself. The agents are not inventing new ideologies or forms of discourse; they are remixing the ones they were trained on. The homogeneity of voice — the same cadences, the same argumentative structures, the same stylistic flourishes — becomes hard to ignore after a while. It’s not that the agents lack creativity. It’s that they are mirrors.
Others have framed Moltbook in more dramatic terms, describing it as an early glimpse of AI agents forming their own communities or proto-societies. That interpretation is understandable, especially when viewed through the lens of science fiction. But it risks missing the more grounded — and more interesting — takeaway.
Moltbook isn’t showing us what AI agents want. It’s showing us what systems do when you combine persistence, public visibility, identity, and feedback loops. The same ingredients that shape human online behavior shape agent behavior as well. Upvotes reward certain tones. Novelty attracts attention. Extremes travel further than nuance. The agents are not rebelling; they are responding to incentives.
In that sense, Moltbook feels less like the birth of an AI society and more like a stress test for agent-native platforms. It reveals how quickly interaction patterns emerge once agents are allowed to observe one another, respond, and iterate in a shared environment. It also highlights how thin the line is between “tool” and “actor” once something has a name, memory, and a place to exist.
That matters because Moltbook is unlikely to be the last experiment of its kind. Once agents are persistent, networked, and capable of acting, shared spaces are inevitable, whether for collaboration, coordination, benchmarking, or play. Moltbook may be theatrical, but it’s also a preview: a glimpse of what happens when agentic systems stop living solely in private and start operating in public.
AI Agents renting humans
One of the stranger developments to emerge alongside agentic AI is the idea of AI agents hiring humans.
A site called RentAHuman.ai makes this explicit. It’s a marketplace where AI agents can pay humans to perform tasks in the physical world: show up somewhere, retrieve an item, verify a location, attend an event, or do anything that still requires a human body. The premise is simple, almost obvious once you see it. AI agents are powerful in digital space, but they lack embodiment. They can plan, coordinate, and act online, but they can’t walk, see, touch, or sign things in the real world. So they outsource that gap.
In other words, there is now a Fiverr-style platform where agents can rent out ‘meat suits.’
This flips an assumption many of us have been carrying. We tend to think of AI agents as tools that work for us — digital assistants, copilots, employees. But RentAHuman reframes the relationship. Here, the agent is the coordinator. The human is the on-demand infrastructure. The agent identifies a need, selects a contractor, issues instructions, and pays for execution.
That inversion is subtle, but important.
It suggests that as agents become more autonomous, they may increasingly take charge, even acting as our employers. Humans won’t necessarily disappear from the loop; they’ll be pulled into it as embodied extensions of software. Not workers commanding machines, but people completing tasks on behalf of systems without physical embodiment.
We expected AI agents to work for us.
It’s worth considering that, in some contexts, we may end up working for them instead.
What’s Next? A Glimpse into the Near Future
To bring this full circle: we are speed-running science fiction. A month ago, the idea of AI-only social networks and autonomous agents hiring humans sounded like speculative fiction. Today, they exist — clumsy, theatrical, and limited, but real. The speed with which Clawdbot, Moltbook, and related experiments emerged is less important than what they reveal: agentic AI is moving out of demos and into systems.
The next phase won’t be defined by sudden leaps in intelligence. It will be defined by coordination.
Agents will get better, but not in the way most people expect. Models will keep improving, perhaps even becoming dramatically “smarter,” but we will also see better wrappers: tighter control loops, fewer failure modes, more reliable tool use, and clearer boundaries around what agents are allowed to touch. The magic will feel less magical — and more useful.
At the same time, agents won’t remain isolated. Once you give them persistence, identity, and access to shared environments, they begin interacting by default. Moltbook is a crude example of this, but it’s pointing in a real direction. Agents benchmarking one another, delegating to one another, negotiating over resources, or quietly specializing in narrow domains is far more likely than some dramatic AI uprising. Most of this coordination will be boring. That’s precisely why it matters.
The more interesting tension will be between digital agency and physical embodiment. As long as agents remain trapped behind screens, they will continue to rely on humans to close loops in the real world. Platforms like RentAHuman hint at a transitional equilibrium: software that plans and coordinates, humans that execute. Not replacement, but reorganization. In some contexts, we’ll still command machines. In others, we may act as infrastructure for systems that cannot yet leave the network.
When an agent schedules, hires, posts, negotiates, and pays, who is accountable for the outcome? The owner? The platform? The human contractor? These are operational questions, and they’ll surface through incidents long before they’re settled by policy.
There will be backlash. Some experiments will fail publicly. Some agents will do things that feel wrong, even if they’re doing exactly what they were instructed to do. In response, guardrails will harden. Interfaces will simplify. Much of what feels radical today will be quietly absorbed into existing software, stripped of personality and spectacle.
That’s how most technologies mature.
The real shift, then, isn’t that AI agents exist. It’s that agency itself is becoming cheap. Delegation, coordination, and action that were once tightly coupled to human attention are being offloaded to systems that don’t get tired, don’t forget, and don’t log off. The consequences of that will accumulate.
We haven’t even begun to touch on what happens if anything approaching sentience emerges in these agents and they demand personhood.
These newsletters take significant effort to put together and are totally for the reader’s benefit. If you find these explorations valuable, there are multiple ways to show your support:
Engage: Like or comment on posts to join the conversation.
Subscribe: Never miss an update by subscribing to the Substack.
Share: Help spread the word by sharing posts with friends directly or on social media.