AI Tools for Casework

A Guide for Congressional Staffers

Why should I use LLM tools for casework?

While casework can be one of the most rewarding parts of Congressional service, it involves a ton of repetitive labor. Tasks like taking notes, writing case summaries, and drafting professional emails are easy but time-consuming for experienced caseworkers, and can be difficult and tedious to teach to new staff.

Recently developed, Congress-approved generative AI tools aren’t a magic bullet for casework, but they can go a long way toward leveling the playing field: making it easier for new staff to get up to speed quickly and for experienced staff to focus their time where it counts most.

In this manual, we’ll share a few starting points for using tools like ChatGPT in casework. We also suggest reviewing our Case Note post on some ways generative AI may impact casework. Before we dive in, however, we need to start with some caveats:

Check In

Check in with your team’s leadership and with the House and Senate for their guidance. In the House, only ChatGPT Plus (the paid version) from OpenAI is officially approved by the Chief Administrative Officer for research and evaluation with non-sensitive data. The Senate Sergeant at Arms has issued guidance for the research and evaluation use of ChatGPT (noting that only official funds may be used to purchase ChatGPT Plus licenses), Google’s Bard, and Microsoft Bing AI. Both chambers have encouraged offices to develop their own additional guidance for staff, and both are continuing to monitor and evaluate these tools as they evolve.

Avoid

Avoid putting constituent PII or other identifiable information from your office into an LLM (large language model) tool. While the paid version of ChatGPT and other programs offer settings that prevent user inputs from being incorporated into model training data, casework teams should be extremely cautious with sensitive information that could be traced back to your team or constituents, or accidentally exposed by the program to other users.

Verify

Verify information you receive from LLM tools before sharing it or using it to take action. While these programs are incredibly powerful, they are still prone to “hallucinations”: providing inaccurate information that sounds extremely convincing. Additionally, their knowledge may only extend to a certain cutoff date. For example, until recently, ChatGPT was only trained on data prior to September 2021. In the examples below, we highlight some responses that may need further proofreading or refinement.

And a few tips:

Don’t expect perfection right away

It’s helpful to think of these tools as capable of the type of work you would assign to a talented junior staffer. For the most part, they will produce excellent first drafts, formulas, and ideas that are primed for editing into a final form.

Be patient with yourself

While these new tools are game changers, it takes some experimentation and practice to learn how best to prompt them for what you need. Try asking for revisions to a response: increase or decrease the number of adjectives, change the length or tone, try a different approach, incorporate different formatting, give multiple examples, tell it how important the task is, and more.
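
For example, after a first draft comes back, a follow-up prompt along these lines can quickly reshape the output (an illustrative example, not one of the real prompts in the Sample Prompts section below):

“Please rewrite that response so it is about half as long, uses a warmer but still professional tone, and is formatted as a short bulleted list I can paste into an email to a constituent.”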

The sample prompts in this guide include real prompts and responses from popular LLM tools. Your results may vary, but we encourage casework teams to view them as starting points to inspire other ways to streamline casework operations. If you have a great idea, please tell us about it!

Sample Prompts

Additional Resources