Recommendations

1. Initiate Early and Manage Timing Expectations

Legislatures should begin integrating AI technologies as soon as possible, focusing initially on low-risk areas to build foundational understanding and capacity with these tools. Starting with non-critical functions allows for a learning curve in which mistakes have limited consequences and insights can be gained without significant investment or risk. It is essential to communicate with the public and stakeholders about the gradual pace of adoption, prioritizing and preserving human decision-making and setting clear expectations that, while AI can bring significant improvements, change will not happen overnight. This transparency helps manage expectations and fosters a trust-based relationship with the public regarding the use of AI in legislative processes.

2. Prioritize Data as a Strategic Resource

Data integrity, clarity, and accessibility are paramount for harnessing AI’s capabilities effectively. Institutions must either begin or continue the process of mapping, cleaning, and preparing data to take advantage of new AI capabilities. This includes ensuring that procurement contracts retain institutional access to the data. For example, recent guidance from the US Office of Management and Budget included the following directive to agencies on “Maximizing the Value of Data for AI”:

In contracts for AI products and services, agencies should treat relevant data, as well as modifications to that data—such as cleaning and labeling—as a critical asset for their AI maturity. Agencies should take steps to ensure that their contracts retain for the Government sufficient rights to data and any improvements to that data so as to avoid vendor lock-in and facilitate the Government’s continued design, development, testing, and operation of AI. Additionally, agencies should consider contracting provisions that protect Federal information used by vendors in the development and operation of AI products and services for the Federal Government so that such data is protected from unauthorized disclosure and use and cannot be subsequently used to train or improve the functionality of commercial AI offerings offered by the vendor without express permission from the agency.¹

Construct a Legislature-Wide Data Map

A data map is a structured visual representation of data sources, characteristics, pathways, and stakeholders. It encapsulates data ownership, flow, integration, and formatting, offering a clear depiction of data dynamics within the legislature.

Legislatures should consider empowering a dedicated task force or working group to create a data map of available resources, to foster broad understanding of legislative data and facilitate decision-making on data use for AI applications.
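To make the idea concrete, each data map entry can be captured as a small structured record. The Python sketch below is purely illustrative: the field names and example values are assumptions about what a legislative task force might track for each source, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataSource:
    """One entry in a legislature-wide data map (fields are illustrative only)."""
    name: str                 # what the data is, e.g. committee hearing transcripts
    owner: str                # office or unit responsible for the source
    format: str               # e.g. XML, PDF, relational database
    update_frequency: str     # e.g. per hearing, daily, per session
    access_level: str         # e.g. public, internal, restricted
    downstream_systems: List[str] = field(default_factory=list)  # where the data flows

# Hypothetical entry showing how ownership, format, and flow are captured:
hearing_transcripts = DataSource(
    name="Committee hearing transcripts",
    owner="Office of the Clerk",
    format="XML",
    update_frequency="per hearing",
    access_level="public",
    downstream_systems=["public website", "internal search index"],
)
```

Even a lightweight catalog like this gives the task force a shared vocabulary for discussing which sources are ready for AI applications and which need cleaning or clearer ownership first.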

Formulate an Institutional Data Management Plan

In addition to a data map, legislatures should create or update a Data Management Plan (DMP) to clarify access and protocols for data handling throughout its lifecycle. A DMP is a vital step towards ensuring data security, accuracy, and availability, aligning with the overarching goal of informed decision-making within the legislature.
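As a hedged illustration, portions of a DMP can also be expressed as reviewable configuration so that access and lifecycle rules travel alongside the systems they govern. The categories, stewards, and retention periods below are hypothetical examples, not recommended values.

```python
# Illustrative excerpt of a Data Management Plan expressed as configuration so that
# access and lifecycle rules can be reviewed and audited alongside the systems they
# govern. Categories, stewards, and retention periods are hypothetical examples.
DATA_MANAGEMENT_PLAN = {
    "constituent_correspondence": {
        "classification": "restricted",                 # who may access the data
        "steward": "Member office",                     # role accountable for the data
        "retention_years": 6,                           # how long records are kept
        "storage": "approved constituent-management system",
        "disposal": "secure deletion at end of retention period",
    },
    "floor_proceedings": {
        "classification": "public",
        "steward": "Office of the Clerk",
        "retention_years": None,                        # permanent record
        "storage": "official repository",
        "disposal": "not applicable",
    },
}

def cleared_for_ai_pilot(category: str) -> bool:
    """Example policy check: only publicly classified data is used in early AI pilots."""
    return DATA_MANAGEMENT_PLAN[category]["classification"] == "public"
```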

Create Data Sandboxes

Data “sandboxes” allow internal technologists and trusted partners to explore potential applications in controlled environments where data management practices can be tested and refined. These are isolated environments in which institutional data can be safely accessed without risking the integrity of primary data systems. Within these sandboxes, internal teams can test new ideas, analyze data, and develop applications without fear of causing disruptions or breaches. By also granting sandbox access to vetted external partners, the legislature can benefit from outside expertise in application development and data analysis. This collaborative approach not only ensures data security but also promotes innovation by leveraging diverse skill sets. With clear guidelines and robust monitoring systems to manage access rights and track activities, sandboxes provide space for experimentation and innovation while ensuring compliance with legislative data policies and security protocols.
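A minimal sketch of what sandbox provisioning could look like in practice appears below, assuming a simple workflow in which a de-identified copy of a dataset is written to an isolated location and each provisioning action is logged. The file paths, field names, and masking rule are illustrative assumptions, not a specific institution’s procedure.

```python
# Minimal sketch of provisioning a data sandbox: a de-identified copy of a source
# dataset is written into an isolated directory, and each provisioning action is
# logged for later review. Paths, field names, and the masking rule are assumptions.
import csv
import logging
from pathlib import Path

logging.basicConfig(filename="sandbox_access.log", level=logging.INFO)

SENSITIVE_FIELDS = {"constituent_name", "email", "home_address"}

def provision_sandbox(source_csv: Path, sandbox_dir: Path) -> Path:
    """Copy a CSV dataset into the sandbox with sensitive fields masked."""
    sandbox_dir.mkdir(parents=True, exist_ok=True)
    target = sandbox_dir / source_csv.name
    with source_csv.open(newline="") as src, target.open("w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for column in SENSITIVE_FIELDS & set(row):
                row[column] = "REDACTED"   # mask identifying values before release
            writer.writerow(row)
    logging.info("Provisioned %s for sandbox use", target)
    return target
```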

3. Issue Agile and Transparent AI Guidelines

As with the examples from the US House and Senate, legislative bodies should issue initial institution-wide guidance on the responsible use of AI, with the understanding that these guidelines will be updated iteratively as technology and legislative needs evolve.

Develop Guidelines for Intelligent Experimentation and Innovation

A foundational framework for AI use, set without undue restrictions, allows individuals in various roles within the legislative body to experiment confidently with these new technologies.

In the House of Representatives’ Chief Administrative Officer (CAO) AI Working Group, for example, participating offices share information about their experiences with these new tools while following security and usage guidance issued by the CAO.

Revise Official Communications Guidelines

As AI is increasingly integrated into legislative workflows, institutions will soon need to address transparency standards for AI-generated content in official communications. For example, the US House of Representatives should consider updating its Communications Standards Manual² to address AI-generated content:

  • Transparency: Propose a disclosure requirement for specific AI-generated content types in official communications, with an institution-approved insignia indicating AI involvement.

  • Authenticity Verification: Explore digital verification tools to confirm authenticity, countering deepfake threats.

  • Data Privacy: Establish data protection standards, avoiding manipulative micro-targeting through AI.

  • Accountability: Highlight clear accountability lines for AI-assisted content, with final responsibility resting with the elected legislator and their staff.

  • Training: Pair guideline updates with comprehensive training, ensuring ethical, transparent AI usage.

  • Feedback Mechanism: Investigate feedback channels for constituents to voice concerns on AI-generated communications.

  • Updates: Ensure periodic reviews of the manual to keep AI guidelines current and effective.

These proactive adaptations can preserve trust and transparency and promote ethical, efficient AI use in Congressional communications.

4. Promote Responsible Experimentation

Innovation in AI should be approached with an experimental mindset within legislative bodies. By promoting responsible experimentation, legislatures can explore the potential of AI to solve complex problems while ensuring that such explorations are conducted within a controlled environment to mitigate risks. This approach allows for the testing of AI applications in real-world scenarios, providing valuable data on their effectiveness and areas for improvement. It also prevents the establishment of overly restrictive policies that could stifle innovation and learning, aiding the legislative body in remaining at the forefront of technological advancements.

5. Foster Inclusive Dialogue

The adoption of AI in legislative processes should be an inclusive endeavor, involving a wide range of stakeholders from different backgrounds and areas of expertise. Legislatures should establish forums and working groups that enable these diverse voices to be heard and considered in the development of AI policies and practices. Such inclusive conversations can lead to more robust, equitable, and well-rounded AI strategies that take into account the varied interests and concerns of the community, fostering stakeholder buy-in. These forums also serve as a platform for knowledge exchange, where lessons learned can be shared and best practices can be identified and adopted.

6. Adopt a Phased Integration Strategy

A phased approach to AI integration allows for gradual implementation, which is essential for managing the complexities associated with these technologies. The creation of a phased integration strategy demonstrates an institution’s deliberate, responsible, and transparent adaptation to AI. Short-term phases may focus on small-scale pilots and foundational data management, while medium-term phases could expand AI use into more significant areas of legislative work as confidence and capability grow. Long-term phases might involve the integration of AI into core legislative processes, such as policy analysis and constituent engagement. By planning for these phases, legislatures can ensure that each step is manageable and that the institution is ready for the next level of AI integration.

7. Invest in Upskilling

As AI becomes integrated into legislative processes, the need for staff to understand and work alongside AI systems becomes critical. Legislatures should invest in comprehensive training programs that provide staff with the knowledge and skills needed to leverage AI effectively and be properly informed of institutional guidelines. This upskilling initiative should include technical skills as well as an understanding of the ethical implications and best practices in AI use. Change management programs can also help staff to adapt to new workflows and processes associated with AI tools, ensuring a smooth transition and minimizing resistance to change.

In the US House of Representatives, the CAO oversees the House Staff Academy and the CAO Coaches Program, both aimed at staff professional growth and enhanced institutional efficacy at the individual office level. By developing dedicated curricula that blend practical experimentation with AI tools and instruction on institutional policies, these institution-supported programs can promote correct AI usage and disseminate best practices, guidelines, and restrictions to staff at all levels.

8. Customize AI Solutions

AI systems are most effective when they are tailored to the specific needs and workflows of the institution. Legislatures should seek out AI solutions that can be customized to their unique requirements, rather than adopting generic tools that may not align well with legislative processes. Customization ensures AI tools complement existing workflows, enhance productivity, and provide meaningful support to legislative staff and officials. It also allows for greater control over the data and outputs of AI systems, ensuring they meet the high standards required for legislative work while minimizing risks associated with staff using non-specialized tools for complex legislative functions.

9. Ensure Human Oversight

The integration of AI into legislative processes should not diminish the role of human judgment and oversight. Legislatures must establish transparent, well-documented review processes and accountability structures that ensure AI systems are used ethically and in accordance with constitutional principles. Human oversight is crucial to monitor AI decision-making, catch potential errors, and provide a safeguard against biases or unethical outcomes. By maintaining a human-in-the-loop approach, legislatures can harness AI’s potential to enhance human capabilities rather than replace them.

10. Align AI with Public Service Goals

AI strategies should be directed towards making legislative processes more responsive, accessible, and transparent to the public. By focusing on the public benefit, legislatures can ensure that AI tools are used to enhance the democratic process, providing constituents with better access to information, more efficient services, and greater opportunities for engagement.

Establish Ethical Guidelines for AI Use

Some in the broader legislative technology ecosystem are beginning to explore standards for ethical AI adoption in parliaments.

The Hellenic OCR Team, in collaboration with a European Commission working group, released a draft version of “Guidelines on the introduction and use of artificial intelligence in the parliamentary workspace” at the April 2023 “LegisTech: the Americas” conference in Brasília. The draft guidelines state that AI should primarily serve as an instrument for upholding the rights articulated in the Universal Declaration of Human Rights.

Similarly, the Biden Administration’s OMB guidance to US agencies establishes a new framework with explicit provisions for “safety-impacting” and “rights-impacting” AI. The guidance further calls for AI to “align to national values and law”:

Agencies should ensure that procured AI exhibits due respect for our Nation’s values, is consistent with the Constitution, and complies with all other applicable laws, regulations, and policies, including those addressing privacy, confidentiality, copyright, human and civil rights, and civil liberties.

These are just a few examples of the evolving conversation on principles and approaches for the safe, responsible adoption of AI in democratic institutions.

11. Engage in Global Collaboration

AI is a global phenomenon, and legislatures can benefit greatly from international collaboration and learning. By engaging with parliaments around the world that are pioneering the use of AI in legislative processes, legislatures can learn valuable lessons and avoid common pitfalls. Global collaboration provides a platform for sharing experiences, strategies, and policies, fostering a collective advancement in the responsible use of AI in governance. This collaborative approach can accelerate learning and innovation, helping legislatures to adopt AI in ways that are informed by a diverse range of experiences and insights. International resources include:

Bússola Tech LegisTech Library

The LegisTech Library by Bússola Tech is a digital platform dedicated to supporting the global legislative community through the sharing of knowledge and experiences. It aims to foster collaboration and facilitate the modernization and digital transformation of legislative institutions. This library is more than just a collection of articles; it's a carefully selected compilation of studies that highlights the journey and advancements in legislative modernization. These stories serve to inspire and guide lawmakers and institutions towards effective modernization. Bússola Tech's role is to provide a neutral space for these important conversations, supporting visionary leaders and promoting the evolution of legislative bodies for the benefit of their members, staff, and the citizens they serve.

Inter-Parliamentary Union Innovation Tracker

The Innovation Tracker is a dynamic blog that spotlights the latest advancements in parliamentary processes, offering Members of Parliament and their staff a wellspring of innovative ideas to tackle the challenges of governance. Covering a wide array of topics from digital tools to broader parliamentary improvements, this platform is a go-to source for fostering efficiency and effectiveness in parliament.

Hellenic OCR Team

Established in late 2017, the Hellenic OCR Team is a unique crowdsourcing initiative focused on the processing and analysis of parliamentary textual data. This initiative enables the conversion of parliamentary documents into formats like XML, which are suitable for computational linguistics tools and methods. The work of the Hellenic OCR Team facilitates interdisciplinary research by making parliamentary data more accessible for studies in fields such as history, political science, and linguistics.

International Legislative Modernization Working Group

A collaborative initiative focused on sharing best practices and experiences in legislative modernization from around the world. As part of POPVOX Foundation's Comparative Legislative Strengthening Project, this group aims to assist parliaments in becoming more effective, efficient, and transparent. It serves as a resource for legislators looking to enhance their work through modern techniques and technologies.


¹ OMB Director Shalanda D. Young, “Proposed Memorandum For The Heads Of Executive Departments And Agencies,” (November 2023)

² “The House of Representatives Communications Standards Manual,” House Communications Standards Commission (July 28, 2022)
