What Is An AI Agent?
A plain-language explainer to help legislative staff understand the rapidly developing world of AI agents.
Many are now familiar with AI chatbots like Claude, ChatGPT, or Gemini that respond to conversational prompts. An emerging range of AI agents can take multi-step actions to accomplish complex tasks or entire workflows, including searching the web, reading and sending emails, creating and editing documents, filling out forms, writing and executing code, and taking other real-world actions with limited supervision.
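For technically curious readers, the difference between a chatbot and an agent can be boiled down to a loop: an agent is a model that repeatedly chooses an action, sees the result, and plans the next step until the task is done, rather than answering once and stopping. The Python sketch below is purely illustrative; every function and tool name is a placeholder, not any vendor's actual API.

```python
# A minimal, illustrative agent loop. Nothing here is a real product's API;
# call_model and the tools are stand-ins for a real model call and real tools.

def call_model(history):
    """Stand-in for a chat-model API call. A real agent sends the full
    history to a model and parses the action the model chooses."""
    if any(msg["role"] == "tool" for msg in history):
        return {"type": "finish", "answer": "Summary of the new filings: ..."}
    return {"type": "tool", "tool": "search_web", "args": {"query": "new filings"}}

TOOLS = {
    # A real agent might also have tools for email, files, calendars, etc.
    "search_web": lambda query: f"(search results for {query!r})",
}

def run_agent(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)                      # model plans a step
        if action["type"] == "finish":
            return action["answer"]                       # task complete
        result = TOOLS[action["tool"]](**action["args"])  # agent acts
        history.append({"role": "tool", "content": result})  # result feeds back
    return "Stopped: step limit reached."

print(run_agent("Check the docket for new filings and summarize them."))
```

Every product described in this explainer, from persistent agents to embedded Copilot-style features, is some variation of this loop; they differ mainly in how long the loop runs and how much it is allowed to touch.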
Different Kinds of Agents
The term “AI agent” has been used to describe an array of products and applications that can be distinguished by their level of autonomy. Persistent, autonomous agents can operate without the user being present, executing tasks over long periods and taking broad action across many applications. Embedded “Copilot-style” agents live inside a specific application (e.g., Word, Outlook, Excel, a CRM) and take action within that application in response to user prompts.
Persistent, Autonomous Agents
These agents run in the cloud or on computers that must be left on, and can operate without the user being present. They execute tasks over long periods (hours, days, even weeks) and take broad action across many applications.
Example workflows for persistent agents:
Run a scheduled task every morning at 6 AM that pulls relevant news clips, suggests potential action items, and drafts statements (see the sketch after this list)
Monitor a regulatory docket continuously and alert when new filings appear
Research a topic across dozens of sources over several hours, then produce a memo
Cross-reference a bill draft against existing statute and committee reports and flag conflicts or precedents
Review overnight constituent emails, categorize them by topic, and draft responses for staff review
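To make the first workflow above concrete, here is a hedged sketch of the scheduling shell a persistent agent might run inside. The agent loop itself is stubbed out, and the task wording and schedule are illustrative assumptions, not any real product's configuration.

```python
# Illustrative only: wake up at 6 AM every day and hand a task to an agent.
import datetime
import time

def run_agent(task):
    # Stand-in for the agent loop sketched earlier in this explainer.
    return f"(drafted briefing for: {task})"

def seconds_until(hour):
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:                         # that hour already passed today
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

while True:
    time.sleep(seconds_until(6))              # sleep until 6 AM; no human present
    briefing = run_agent(
        "Pull relevant news clips, suggest action items, draft statements."
    )
    print(briefing)                           # in practice: email staff for review
```

Note that nothing in this loop pauses for human approval between runs; that is exactly the oversight gap flagged in the cons below.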
Pros:
Considerable productivity leverage
Handle repetitive work completely
Run while the user sleeps
Cons:
Minimal real-time human oversight
Permissions are broad
Actions are fast and can cascade
If the agent misunderstands the task, it can produce a large volume of incorrect output or take many wrong actions before anyone notices
Examples: OpenClaw (now owned by OpenAI), Claude Routines, ChatGPT Agent, Codex, Perplexity Computer, NemoClaw (NVIDIA), Manus.
Embedded "Copilot-style" Agents
These agents live inside a specific application and help with one task at a time. The user prompts each action. The agent generally does not take action across other applications without being asked.
Example uses for Copilot-style agents:
Summarize a long email thread in Outlook
Suggest edits to a brief in Word
Answer questions about a spreadsheet
Draft a social media post or press release
Pros:
Scoped permissions
User is in the loop at each step
Fits existing compliance, procurement, and records frameworks
Cons:
Less powerful
Requires a human at the keyboard
Doesn't chain actions across systems on its own
Examples: Microsoft Copilot in Office, GitHub Copilot, most vendor-specific "AI features" inside existing software products.
Note: The line between these two categories is eroding fast. Microsoft Copilot Studio lets organizations build more autonomous agents on top of the Copilot foundation. Anthropic's Routines extend Claude (originally a chatbot) toward persistent autonomy. Expect this distinction to blur further over the next 12–18 months, but right now it matters for understanding risk.
The Growth of Autonomous Agents
In November 2025, Austrian software developer Peter Steinberger released OpenClaw,¹ an open-source AI agent that runs as software on a user’s own computer and connects to an AI model of the user’s choice (Claude, ChatGPT, Gemini, or others). Users interact with it through messaging apps they already use, such as WhatsApp, Telegram, Signal, or Slack. Once set up, the agent can take real actions on the user’s behalf. Many early adopters have used OpenClaw to create AI executive assistants, homeschool aides, and more.
OpenClaw was purchased by OpenAI in February 2026 and by April 2026 had 3.2 million users.² In March, NVIDIA’s Jensen Huang said: “Every company in the world today needs to have an OpenClaw strategy; this is the new computer.”
A screenshot of OpenClaw’s homepage at openclaw.ai.
This “new computer” is not without issues. OpenClaw itself is widely considered dangerously insecure and has been banned by most institutions. Even its fans suggest that it should only be deployed in a “sandbox” environment, an isolated space with no access to sensitive files or personal or financial information.
After the success of OpenClaw, many other companies are rushing to build more secure alternatives that offer similar functionality without the security vulnerabilities. New persistent agent offerings include:
NemoClaw (NVIDIA) — An enterprise-grade secure wrapper around OpenClaw, announced at NVIDIA's GTC conference in March 2026. Adds guardrails that let administrators define which files, tools, and network connections an agent can access. Developed with Steinberger's involvement.
Claude Cowork (Anthropic) — A desktop agent launched in January 2026 that works directly with a user's files and tools. Emphasizes safety guardrails and visible action logs.
Claude Routines (Anthropic) — Launched April 14, 2026. Runs tasks on a schedule, via an API call, or in response to external events, all on Anthropic's cloud infrastructure. The user's computer can be closed, asleep, or off.
ChatGPT Agent (OpenAI) — Runs entirely in a cloud environment on OpenAI's servers rather than on the user's computer. Cannot access local files, a deliberate security tradeoff.
Codex (OpenAI) — Originally a coding-focused agent, now expanding into a broader enterprise agent platform. Runs tasks in isolated cloud sandboxes, with subagents that work in parallel and scheduled "Automations" that run in the background. Over two million weekly users by March 2026.
Perplexity Computer — Orchestrates nineteen different AI models simultaneously, routing each subtask to the best-suited model. Tasks run in isolated sandboxes and can persist for hours, days, or months.
Manus — A general-purpose agent founded in China in 2022 that relocated to Singapore in 2025. Meta agreed to acquire Manus in December 2025 for approximately $2 billion, but the Chinese government blocked the deal in April 2026 after determining that the sale violated laws governing technology exports and outbound investment.
The agent landscape is consolidating rapidly into the largest AI and tech companies. Four frontier labs — OpenAI, Anthropic, Google, and Meta — plus Microsoft (through its partnerships with both OpenAI and Anthropic) now effectively control the category. The list above will likely be materially out of date in just a few months.
Why Legislatures Need to Understand Agents
As of April 2026, no parliamentary institution has publicly approved AI agents for internal use. However, understanding what an AI agent is, how the technology differs from chatbots, and how agents are beginning to affect industries ensures that elected officials and their staff remain active participants in conversations about governing this technology.
Agentic AI is likely already operating in your environment.
Lobbyists, advocacy organizations, and other external actors are adopting agentic tools to streamline their engagement with democratic institutions. It will not be long before constituent outreach, policy monitoring, and legislative tracking are routinely performed by AI agents on behalf of users. Knowing what agents can do helps you assess the communications and research products you are receiving.
The technology is developing faster than guidance.
OpenClaw's trajectory, from a small open-source project to one of the most-discussed tools in global tech communities, is a useful illustration of how quickly the technology landscape is changing. Institutions that wait to react to problems caused by emerging technologies, encountering them for the first time only through oversight, will consistently be behind.
Policy and regulation questions are coming.
Regulatory frameworks for agentic AI, disclosure requirements when AI agents interact with government, and debates about what it means for democratic accountability when autonomous systems submit comments or contact elected officials are already live policy issues. A baseline understanding of how agents work is essential for engaging seriously with these topics.
Fully autonomous AI agents are not a distant hypothetical. Lawmakers and legislative staff should work to understand this emerging technology, even before they are allowed to use it internally.
Questions about AI tools and legislative modernization? Contact POPVOX Foundation at info@popvox.org.
¹ OpenClaw was originally called Clawdbot, then briefly Moltbot after a trademark dispute.
² https://openclawvps.io/blog/openclaw-statistics
