Updated: Where the House and Senate Are on Internal Use of AI

Updated March 2026

ChatGPT, the first large language model (LLM) chatbot to reach a mass public audience, became publicly available on November 30, 2022. Since then, a thriving ecosystem of LLMs, applications, and functionality has developed and is transforming industries and institutions around the world.

Both chambers of Congress, via their internal technology authorities (the House Chief Administrative Office [CAO] and Senate Sergeant at Arms [SAA]), have issued internal guidance for the use of AI tools for official business by Members and staff.

This guidance is not publicly accessible. Staffers in both chambers report a lack of awareness or understanding of current policies.

POPVOX Foundation maintains this blog as a resource outlining the House’s and Senate’s official AI use guidance based on our review of these documents. We are not publishing the documents verbatim because the chambers have not yet chosen to make them public. However, we encourage House and Senate leadership to reconsider this position.

We believe transparency around congressional use of AI technology, and around the guidance issued by institutional offices, is essential for responsible and confident adoption of this technology by elected officials and their staff. We also believe that Congress has an opportunity to show leadership on this issue, both for US organizations and state and local legislatures working to develop their own policies and for parliaments globally.

At a Glance

Congressional AI Use Tracker

POPVOX Foundation tracks both chambers' official guidance on AI use by Members and staff — including approved tools, policy frameworks, and key observations on adoption and awareness. We update this tracker as policies evolve.

Updated March 9, 2026
Transparency note: Neither chamber has made its AI policy publicly available. This overview is based on POPVOX Foundation's review of those documents. We encourage both chambers to make these policies public.

CURRENT STATUS

Approved Tools by Chamber

House of Representatives
Policy: Sept 2024  ·  Copilot rollout: 2026
  • OpenAI ChatGPT Pro · Approved*
  • Anthropic Claude Pro · Approved*
  • Google Gemini · Approved*
  • Microsoft Copilot · Approved*
*Approval is conditional — permitted use cases vary by data sensitivity and audience. Sensitive House data and constituent PII may not be entered into any of these tools. See Use Case Tiers for details. 6,000 Copilot licenses rolling out in 2026; offices must set an internal use policy before access.
Senate
Framework: Oct 2025  ·  Tier 2: Mar 2026
  • Microsoft Copilot Chat · Tier 2
  • Google Workspace + Gemini · Tier 2
  • OpenAI ChatGPT Enterprise · Tier 2
Tier 2 = authorized for use with official Senate data. First tools to reach this status.
Both policies live behind a firewall. House guidance is on HouseNet; Senate guidance on an internal SAA portal. Staff in both chambers report difficulty finding or understanding current guidance — and guidance that is hard to find often goes unread.

Policy Frameworks

  • House Use Case Tiers: What staff can and can't do with AI (House · In effect)
  • House AI Guardrails: Five principles from HITPOL 8 (House · In effect)
  • Senate Two-Tier Framework: Risk-based tool authorization (Senate · In effect)
  • Senate AI Governance Board: Who sets Senate AI policy (Senate · Established)
  • Copilot Enterprise Rollout: 6,000 licenses, plus an early issue (House · In progress)

Staff Awareness & Key Observations

Staff Awareness — House

Staffers report low awareness of current guidance. HITPOL 8 is posted on HouseNet but not proactively communicated. In offices with high turnover and heavy workloads, a policy that is hard to find often goes unread.

Staff Awareness — Senate

Similar challenges. Identifying an approved tool requires navigating a separate internal cybersecurity portal. None of the approved AI tools appear on the Senate's broader Supported Software list, suggesting limited institutional integration.

No Affirmative Vision

Neither policy articulates how AI could improve congressional effectiveness, constituent services, or legislative quality. Both are framed entirely around risk mitigation — which may help explain why adoption remains low.

A Leadership Opportunity

State legislatures and parliaments worldwide are watching how Congress handles AI adoption. Making these policies public and building in an affirmative vision would position Congress as a leader rather than a laggard.


Current House Guidance

In September 2024, the House Chief Administrative Office (CAO) announced the House's first AI policy (HITPOL 8) via e-Dear Colleague. The policy, which is (reportedly) posted on the HouseNet intranet and available to Congressional staff behind the House's firewall, outlines approved use cases of AI technology. It also includes updated guardrails and principles to guide Members and staff in their further exploration and adoption of this technology. The policy also establishes a process by which Member offices can submit new AI tools for review and approval through the My Services Request Portal on HouseNet. CAO and the Committee on House Administration will review the policy annually and issue updated guidance as deemed necessary.

The House policy establishes five AI guardrails, drawn from the Committee on House Administration's April 2024 Flash Report on Artificial Intelligence Strategy & Implementation:

  • Human Oversight and Decision-Making

  • Clear and Comprehensive Policies

  • Robust Testing and Evaluation

  • Transparency and Disclosure

  • Education and Upskilling

Underneath these guardrails, the policy outlines a set of more detailed AI Principles that all House users must follow:

  • Security (use approved tools, limit harm to infrastructure)

  • Data Privacy (protect sensitive data, do not share PII with AI tools unless explicitly approved)

  • Accuracy (check all outputs for hallucinations and bias; never present unvalidated AI output to the public)

  • Reliability (adopt a human-centered approach; attribute all AI-generated outputs to their source and tool)

  • Transparency (disclose AI use; retain documentation about data origin and training)

  • Fairness (monitor for biased results, employ technology controls)

  • Ethical Use (align use cases with House ethics guidance).

 

A Note on Scope and Authority

The House policy set by the CAO defines “House User” broadly to include Members, Committees, Leadership, House Officers, staff, interns, fellows, contractors, vendors, and detailees. It applies its requirements — including mandatory use of approved tools, required human review of outputs, mandatory disclosure of AI use, and reporting obligations — uniformly across all of these categories.

This framing raises a question about institutional authority. The CAO's jurisdiction extends to House IT systems and security infrastructure. Members and Committees, however, are independent employing authorities who set the terms and conditions for their own offices consistent with applicable federal laws and House Rules. The policy does not clearly distinguish between what the CAO can require of all users as a condition of using House IT systems (e.g., security requirements, approved software on House devices) and what it can require of Members and Committees in the conduct of their legislative and representational duties (e.g., how they review content, whether they disclose AI use, or how they manage constituent data).

For example, the policy states that “all permissible and approved uses must be routinely reported to the CAO's House Information Resources upon request” — a requirement that, if applied to Member offices, would mean the CAO could demand information about how Members are using AI in their legislative work. Similarly, the policy's Fairness principle instructs users to ensure AI-generated outputs are “presented without bias,” a directive that sits oddly in a legislative context where Members routinely and appropriately present information from their own perspective.

 

The policy's Purpose statement focuses exclusively on security and data protection. There is no stated goal of encouraging innovation, improving constituent services, enhancing legislative effectiveness, or supporting the institution's core mission. This framing shapes the entire document as one of risk mitigation rather than capacity building.

House Approved Use Cases

The policy categorizes AI use cases into four tiers:

Generally Permissible: Uses that do not involve personal or sensitive House information, are for internal audiences only, and whose output will not be used in major decisions. Examples include AI-assisted internal research, summarizing content, formatting data, grammar correction, draft refinement, and brainstorming.

Requires Management Approval: Use cases involving public-facing information or outputs used in strategic decision-making. Examples include AI-generated constituent correspondence, AI-assisted scheduling for Member activities, and first drafts for talking points.

Requires HIR Approval: Use cases that generate code or directly impact House technology capabilities. Examples include AI-assisted chatbots and installation of third-party AI applications or extensions.

Prohibited: Use cases that fail to align with House principles, manipulate human images or data, or exploit vulnerabilities — including inputting constituent PII for casework, executing personnel actions based on AI outputs, creating deepfakes, and “finalizing legislation” (a prohibition whose practical meaning is unclear).
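
To make the four tiers concrete, below is a minimal, hypothetical sketch (ours, not language from HITPOL 8) of how an office's internal use policy might encode this logic as a simple checklist. The attribute names, the conservative defaults, and the house_tier helper are illustrative assumptions based on our reading of the policy.

```python
# Hypothetical illustration only; not part of HITPOL 8 or any official House guidance.
# Attribute names and the conservative ordering reflect our reading of the four tiers above.
from dataclasses import dataclass


@dataclass
class ProposedUse:
    involves_constituent_pii: bool     # e.g., casework details
    manipulates_human_likeness: bool   # e.g., deepfakes
    uses_sensitive_house_data: bool    # sensitive House information
    generates_code_or_it_change: bool  # chatbots, third-party extensions
    public_facing_or_strategic: bool   # constituent mail, talking points


def house_tier(use: ProposedUse) -> str:
    """Return the tier our reading of the policy suggests for a proposed use."""
    if use.involves_constituent_pii or use.manipulates_human_likeness:
        return "Prohibited"
    if use.uses_sensitive_house_data:
        # Sensitive House data may not be entered into any approved tool.
        return "Prohibited"
    if use.generates_code_or_it_change:
        return "Requires HIR Approval"
    if use.public_facing_or_strategic:
        return "Requires Management Approval"
    return "Generally Permissible"


# Example: internal brainstorming with no House data involved.
print(house_tier(ProposedUse(False, False, False, False, False)))
# -> Generally Permissible
```

Any real office policy would, of course, need to track the full text of HITPOL 8 and HIR guidance rather than a simplified checklist like this.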

Scope and the Web

The policy's scope covers “House-approved, limited use of external websites that rely on generative AI.” In practice, this means that any external website incorporating AI features — an increasingly broad category that now includes search engines, social media platforms, and most major web services — technically falls within the policy's purview and would require pre-approval. As AI features become embedded across the web, this framing becomes increasingly difficult to operationalize.

User Requirements and Restrictions

The policy requires that House users only use approved AI tools on House-managed devices for approved use cases. Users may not install unapproved AI tools, APIs, plug-ins, connectors, or software on House devices. Users may not use House credentials (including House email addresses) to log in to AI tools outside the House environment. Users may not share House sensitive information or PII with AI tools, and may not create AI tools outside of approved projects.

The policy's enforcement section warns of potential criminal and civil penalties, including prison terms and fines, as well as administrative actions including suspension or termination. However, the policy also notes that these are suggestions, that HIR is not responsible for risks resulting from user misbehavior, and that disciplinary measures are within each employing authority's discretion. The disconnect between the severity of the warnings and the practical enforceability creates additional ambiguity.

House Copilot Deployment

In late 2025, the House signed an enterprise-wide agreement with Microsoft to make 6,000 Microsoft Copilot licenses available to all House Member offices and committees that opt in to use the technology. License distribution began in 2026; before using Copilot, each office must establish an internal use policy outlining its approved use cases.

 

Note (as of March 3, 2026)

While Copilot deployment is in its early days, we have heard from House offices (and observed ourselves) that the system appears to include a system prompt that refuses requests to generate content that is political in nature, takes policy positions, or summarizes political stances on current events.

Offices report that this restriction limits the tool’s usefulness for assisting with constituent correspondence and Member statements, and for analyzing potential stakeholder positions.

 

Tools Currently Approved for Use in the House of Representatives

House staff may access a list of approved AI technologies on HouseNet.

The list currently includes:

  • OpenAI’s ChatGPT Pro

  • Anthropic’s Claude Pro

  • Google’s Gemini

  • Microsoft’s Copilot

Questions on the House’s AI guidance and approved tools can be directed to ai@mail.house.gov.

House Guidance and Initiatives Over Time

In April 2023, the House Digital Service (HDS), an innovation hub within the CAO's technology department, announced the launch of an institution-hosted AI working group. This pilot project provided 40 ChatGPT licenses to a bipartisan group of staff to create an information stream of use-case examples and user experience feedback to aid the House's understanding of how GenAI could be adapted to Hill workflows.

Following the launch, in June 2023, the CAO's House Information Resources (HIR) provided House-wide guidance regarding authorized use of GenAI on House-issued devices. This policy:

  • Authorized the use of ChatGPT Plus, the paid version, as the only approved AI Large Language Model (LLM) for use on official devices due to its advanced privacy features

  • Limited authorized use of ChatGPT to research and evaluation tasks

  • Prohibited staff from fully integrating the LLM into regular operations

  • Required staff to use the LLM with privacy settings enabled and only with non-sensitive data


Current Senate Guidance

Senate AI Policy (March 2026)

On March 9, 2026, the Senate Sergeant at Arms' Office of the Chief Information Officer announced that three AI tools are authorized for use with Senate data. These include:

  • Microsoft Copilot Chat

  • Google Workspace with Gemini Chat

  • OpenAI’s ChatGPT Enterprise

These tools are the first to be authorized for Tier 2 use under the Senate AI governing framework established in October 2025.

Screen capture of the Senate Sergeant at Arms’ Chief Information Officer’s March 6, 2026 announcement on the authorization of Tier 2 approval of AI tools for use with Senate data.

The Senate AI Policy Governing Framework (October 2025)

On October 27, 2025, the Senate significantly expanded and formalized its approach to AI governance with the release of a comprehensive Artificial Intelligence Policy (SAA-CIO-CYB-040 v1.00), signed by the Assistant Sergeant at Arms and Chief Information Officer (CIO) and by the Sergeant at Arms and Doorkeeper (SAA).

The policy applies to all Senate offices and committees, support offices, contractors, suppliers, and vendors using Senate IT networks. It references the NIST AI Risk Management Framework, NIST SP 800-53 (security and privacy controls), and other NIST standards.

AI Governance Board

The policy establishes an AI Governance Board responsible for setting the overarching AI policy and strategy for the Senate, overseeing AI initiatives, and assessing how the Senate can use AI internally while minimizing cybersecurity risks.

The Board consists of the SAA Executive Office, Acquisitions, Office of General Counsel, and CIO. The Board provides updates to the Senate Committee on Rules and Administration and Senate Leadership, which reserve the right to join the Board as required.

Two-Tier Approval Framework

A formal two-tier categorization was put in place to structure the risk assessment of tools:

  • Tier 1 — Approved for non-Senate data: Tools in this category have passed review and may be used for research and evaluation purposes, but only with non-sensitive and non-official Senate data. Data entered into these products should not be considered private, and generated content should always be human-reviewed for accuracy.

  • Tier 2 — Approved for Senate Official Information: Tools in this category have met higher security and contractual requirements and may be used for official Senate business. Senate data may be entered into these products, with the same expectation of human review.

While the policy outlines this two-tier structure, identifying which tier a given tool falls into requires consulting a separate Cybersecurity Risk Assessment Portal available on the Senate's internal network.
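
As a purely illustrative sketch (ours, not the SAA's), an office could keep a simple lookup of which tools have reached which tier; the entries below reflect only the March 2026 announcement summarized above, and the Cybersecurity Risk Assessment Portal remains the authoritative source.

```python
# Hypothetical illustration only; the SAA's Cybersecurity Risk Assessment Portal
# is the authoritative record of tool tiers.
from enum import Enum


class SenateTier(Enum):
    TIER_1 = "approved for non-Senate data only"
    TIER_2 = "approved for official Senate data"


# Entries reflect the March 2026 Tier 2 announcement described above.
TOOL_TIERS: dict[str, SenateTier] = {
    "Microsoft Copilot Chat": SenateTier.TIER_2,
    "Google Workspace with Gemini Chat": SenateTier.TIER_2,
    "OpenAI ChatGPT Enterprise": SenateTier.TIER_2,
}


def may_use_with_senate_data(tool: str) -> bool:
    """True only if the SAA has cleared the tool for Tier 2 use."""
    return TOOL_TIERS.get(tool) is SenateTier.TIER_2


print(may_use_with_senate_data("Microsoft Copilot Chat"))  # True
print(may_use_with_senate_data("Some Unreviewed Tool"))    # False
```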

Office-Specific Policies

Similar to the House, the Senate encourages each Member office to develop its own internal AI policy to outline use cases, human review of AI-generated content, and AI-use disclosure. The policy provides example use cases organized into internal uses (summarizing documents, drafting reports, brainstorming, conducting research, analyzing datasets, generating reports and visualizations) and external uses (drafting content for constituent meetings, speeches, social media posts, translating content, transcribing events, aggregating feedback, and generating images for external communications).

User Precautions

The policy outlines a detailed list of user precautions, recommending that Senate users:

  • Only use SAA CIO-approved AI products for Senate business

  • Only use approved products in ways recommended by the SAA CIO

  • Review all AI-generated text, images, and other content for accuracy before use

  • Do not present AI-generated content to the public that has not been validated by a human

  • Do not enter PII into AI tools

  • Do not enter physical security information into AI tools

  • Do not make personnel decisions based on AI

  • Do not create deepfakes

  • Ensure AI use does not violate any copyright or intellectual property rights

  • Understand that data entered into unsupported AI tools may be visible to other users or used to train AI systems

  • Do not upload Senate usernames or passwords in AI prompts

  • Refrain from using Senate network credentials or email addresses to log in to AI tools on non-Senate devices or for non-official functions

Notably Absent from the Senate Supported Software List

The Senate maintains a Supported Software list (last updated November 2025) that catalogs all software for which the SAA provides support. This list includes browsers, productivity suites, cloud services (Box, Asana, Canva, Google Workspace, Cisco WebEx), and other tools. Notably, none of the approved AI tools — ChatGPT, Copilot, or Gemini (as a standalone AI product) — appear on this list. This suggests that while these tools have passed a risk assessment for limited use, they have not been integrated into the Senate's supported technology infrastructure in the way that other approved software has.

Senate Guidance and Initiatives Over Time

In December 2023, the SAA CIO announced its first official internal AI guidance for Senators and staff, authorizing use of a selection of “conversational AI services.” That initial policy approved OpenAI's ChatGPT, Google's Bard AI, and Microsoft's Bing AI Chat.

Similar to the approach taken by the House, the Senate SAA established an institution-hosted working group to encourage and explore internal use of GenAI and share best practices. Resources created through this initiative can be found on the SAA’s Generative AI project page.


Key Observations

Having reviewed both chambers' policies in detail, our team notes several themes:

The House policy does not clearly delineate between IT security requirements and the conduct of legislative work. The CAO has clear authority over House IT systems and infrastructure. But the policy applies its requirements uniformly to all “House Users,” including Members and Committees who are independent employing authorities, without distinguishing between what is appropriately required as a condition of using House IT systems and what crosses into directing how Members conduct their representational and legislative duties. This creates confusion about what is actually required versus recommended, and by whom.

Governance frameworks are in place, but operational clarity lags behind. Both chambers have established governance structures, risk assessment processes, and principles for AI use. The practical question ("What can I actually use, and for what?") remains difficult for staff to answer without navigating internal portals that are themselves behind firewalls.

Both policies are framed entirely around risk mitigation, with no stated goals around institutional effectiveness or innovation. Neither policy articulates a vision for how AI could help Congress better serve constituents, improve the quality of legislation, enhance institutional capacity, or reduce costs. The absence of any affirmative purpose means the policies function purely as restrictions, which may help explain low adoption and awareness. Both chambers’ policies approach artificial intelligence through the lens of traditional IT risk assessment and procurement, rather than as a technology capable of integrating with, and fundamentally altering, operations across existing systems.

The enterprise Copilot deployment in the House has surfaced an important mismatch between the tool's content moderation defaults and Congress' core function: political work. This is an early illustration of a challenge that will recur as legislatures adopt commercial AI tools built for general-purpose enterprise environments.

The scope of “AI” in both policies is increasingly difficult to operationalize. The House policy covers “external websites that rely on generative AI,” a category that, as AI features are embedded into search engines, email platforms, and virtually every major web service, is rapidly becoming coextensive with “the Internet.” Neither policy grapples with the reality that AI is not a discrete tool to be approved or disapproved but an increasingly ambient capability embedded across the technology stack.

Staff awareness remains low. Both policies are posted on internal networks only. Staff in both chambers report difficulty finding or understanding current guidance. Congressional offices have high staff turnover and demanding workloads. If the policy can't be found or understood quickly (and if institutional offices are not proactively advertising resources to assist employees in complying with the policy), it effectively does not exist for many users.

Neither chamber has chosen to make its AI policy public. We believe this is a missed opportunity. Transparency about internal AI governance would strengthen public trust, support the development of similar policies at the state and local level, and position Congress as a leader in democratic AI adoption rather than a laggard.


As the House’s and Senate’s institutional approaches to internal AI adoption evolve, we will update this page to provide an across-the-Hill view of which tools are allowed and of other notable policy changes.

Do you work in the House or Senate and believe this information needs updates or corrections? If so, please contact us at info@popvox.org.
