Where the House and Senate are on Internal AI

Updated December 15, 2023

When it comes to adopting modern technology, “government innovation” has long been at risk of being an oxymoron. In the case of adapting to the emergence of artificial intelligence (AI), however, Congress is telling a different story. While debates continue across Capitol Hill on approaches to regulating these emerging technologies, institutional offices within the Senate and House of Representatives are taking steps to prepare for the impending paradigm shift. For the first time in recent memory, Congress isn’t “behind the times” but rather is proactively setting clear policy guidance to foster institutional agility in the face of AI’s exciting potential.

House Guidance and Initiatives

In April 2023 the House Digital Service (HDS), an innovation hub within the technology department of the House Chief Administrative Office (CAO), announced the launch of an institution-hosted AI working group. This pilot project provided 40 ChatGPT licenses to a bipartisan group of staff to create an information stream of use-case examples and user experience feedback to aid the House’s understanding of how GenAI could be adapted to Hill workflows.

Following the launch, in June the CAO’s House Information Resources (HIR) provided House-wide guidance regarding authorized use of GenAI on House-issued devices. This policy, which is still in place today, sets the following parameters:

Approved Tools

  • Under HISPOL17, ChatGPT Plus, the paid version, is the only approved AI Large Language Model (LLM) for use on official devices due to its advanced privacy features. It is made clear within the guidance that offices experimenting with these tools assume the associated risks.

  • All other LLMs are undergoing review and remain unauthorized in the House at this time.

Restrictions on Use

  • Use of ChatGPT is only authorized for research and evaluation tasks.

  • Use cases are to be explored to allow offices to experiment with how an LLM can aid legislative workflows, but staff are not allowed to fully integrate its use into regular operations.

  • ChatGPT can only be used with non-sensitive data.

  • ChatGPT must be used with privacy settings enabled.

Noted Concerns

HDS’s working group guidance highlights the following common pitfalls and encourages all offices to increase their awareness of AI’s innate limitations.

  • Accuracy – Offices should thoroughly check any factual claims

  • Bias – Offices should carefully watch for biased responses

  • Cybersecurity – Offices should only use established tools and be wary of third-party repackaged apps

  • Ethics – Offices should heavily edit any generated drafts to preserve the Member’s unique voice and to avoid copyright or plagiarism issues

  • Limitations – Offices should know the limitations of AI tools, including source limitations, time-based restrictions, etc.

  • Data Protection – Offices must not share access to internal or sensitive documents (e.g., drafts of legislation, speeches, or releases) with AI tools

Senate Guidance and Initiatives

Similar to the approach taken by the House, the Senate Sergeant at Arms (SAA) established an institution-hosted working group to encourage and explore internal use of GenAI and share best practices. As the working group has progressed, the SAA has released a number of resources on its Generative AI project page, including training resources.

In December 2023, the SAA Chief Information Officer (CIO) announced official internal guidance for Senators and staff, authorizing use of a selection of “conversational AI services.” The policy establishes the following guidelines:

Approved Tools

  • After thorough review, the SAA CIO approves the use of OpenAI’s ChatGPT, Google’s Bard AI, and Microsoft’s Bing AI Chat. These tools may only be used with the required compensating controls enabled, as outlined in the SAA’s risk assessment reports linked in the guidance document.

Restrictions on Use

  • The approved GenAI tools may only be used for research and evaluation tasks.

  • Only non-sensitive data can be used when interacting with these tools.

Noted Concerns and Additional Guidelines

The SAA CIO’s policy includes a collection of additional guidelines to aid Senators and staff in mitigating risks, including:

  • Encouraging individuals to approach these tools with the same caution they would take when using search engines, reemphasizing privacy and data security concerns

  • Advising that individuals should assume all information put into an AI tool could become public and remain cognizant that a model may be able to glean information from a series of prompts

  • Emphasizing that all information provided by these tools should be verified and that human review of all generated products and content remains essential

The House’s and Senate’s proactive, transparent approaches to setting institutional guardrails for use of this emerging technology are creating an environment in which innovation can safely begin. These policies also acknowledge that staff and Members who may be considering future regulatory approaches need an opportunity to work with and understand these new tools. With clear guidance authorizing the use of LLMs, staff can experiment with confidence that they are operating within the boundaries of authorized activities, and as that policy expands, they will be able to adjust accordingly.

As Congress’ institutional approach to AI evolves, updates will be made here to provide an across-the-Hill perspective of what is allowed, what isn’t, and additional policy changes of note.
