AI Summer Camp: Camper Safety

Like any good adventure, using AI involves well-marked paths to follow and a few areas best avoided! Even the best explorers occasionally make a wrong turn, so stay smart and responsible by keeping these AI safety tips in mind:

Don’t veer off the trail: Know the rules and follow them.

  • DON'T use AI tools without knowing your institution's policies. Not all AI tools have been approved for official use.

  • DON'T let staff use AI without proper training and guidelines

  • DON'T use random third-party AI apps or repackaged tools

  • DON'T download suspicious AI-related software

  • DON'T forget to routinely revisit AI use guidelines set by House Information Resources or the Senate Sergeant at Arms, as they may be updated over time

Why? Institutional guidance exists for good reason: unapproved AI tools, such as third-party portals or unauthorized LLMs, may be developed and hosted by unverified, and potentially malicious, actors. Using these tools could lead to your data being read, stored, and shared in ways outside your control.

Don’t feed the bears sensitive or non-public information.

  • DON'T input confidential drafts of legislation, speeches, or press releases (if you wouldn't want it on the front page of the New York Times, don't feed it to the bears)

  • DON'T share names, addresses, or other personally identifiable information (PII)

  • DON'T paste internal documents or confidential communications (including email threads)

  • DON'T assume your data is private: treat every input as potentially public

Why? Nearly all AI tools track and store your input data in some form. Some organizations do this to train their models on your data; others do it to make sure their service is working as intended. Even when you follow institutional guidance by using LLMs through a paid subscription with all privacy settings activated, you should still assume that someone else can see your inputs and outputs, just as with a web search.

Don’t believe everything you hear (or read).

  • DON'T assume AI-generated facts are accurate without checking. AI confidently hallucinates, and it's up to you to prevent the spread of false information.

  • DON'T expect AI to know recent events or current legislation

  • DON'T assume AI understands your specific district or state context

  • DON'T rely on AI for specialized legal or technical analysis

  • DON'T use AI-generated links, references, or citations without verification

  • DON'T let AI choose your trail. Leave all decision-making responsibilities to humans.

Why? AI tools are trained on pre-existing data, such as news articles, books, blog posts, and more. As a result, the AI may not have seen data pertinent to your particular situation or information that is completely up to date. Moreover, AI mimics the patterns it has seen before, meaning it is susceptible to the same riddles and brain teasers we fall for... be careful!

Don’t get lost: You’re responsible for your work.

  • DON'T use AI-generated content without heavy editing

  • DON'T let AI write in your voice without reviewing for tone and style

  • DON'T submit AI drafts as final work without customization

  • DON'T forget to ensure content aligns with your Member's position

Why? AI is just one tool in your toolbox for improving your workflows and increasing your ability to perform your job well. It is not a replacement for your expertise or skill set, so you should not outsource entire tasks to it or have it speak on your behalf. AI is not a substitute for your voice or opinions.

Watch out for trail blazes (and biases).

  • DON'T ignore potential biases in AI-generated content

  • DON'T assume AI output is politically neutral

  • DON'T forget about copyright and plagiarism concerns

  • DON'T use AI to avoid your responsibility for ethical judgment

Why? AI is a human-created tool trained on human-created content, and that content carries human biases. As such, AI tools can be wrong, can repeat or reinforce biases, and can be too agreeable to your position. Always double-check what the AI says to ensure neutrality, and stay in the driver's seat when forming your opinions and conclusions.


Quick Safety Check

  1. Am I using an approved tool, and is this an approved use?

  2. Is the information or source I’m planning to input into an LLM already public?

  3. Have I verified the facts (and checked all citations) that AI has provided for me?

  4. Did I review and edit this content to make it my own?
