AI in Parliaments Is a Journey, Not a Switch
A conversation with Andy Williamson, Senior Researcher at the IPU Centre for Innovation in Parliament and one of the architects of the Maturity Framework for AI in Parliaments
BY BEATRIZ REY
Earlier this month, at the Inter-Parliamentary Union (IPU)’s Artificial Intelligence Conference in Malaysia, the IPU officially launched the Maturity Framework for AI in Parliaments — a practical tool designed to help legislatures adopt AI, and especially generative AI, with greater confidence, coordination, and control. Rather than prescribing a single model for all parliaments, the Framework works as a diagnostic and planning instrument. It helps parliamentary leadership, senior managers, and ICT staff assess where their institution currently stands and plan realistic next steps, regardless of size or level of digital development.
Structured around six levels of AI maturity, the Framework maps a progression from ad hoc and informal use (Level 0) to advanced, leadership-oriented practice (Level 5). Crucially, it evaluates AI adoption across four dimensions:
governance,
technical capability,
organizational capability, and
democratic impact.
In doing so, it makes a simple but often overlooked point: AI in parliaments is not just a technical issue. It is institutional, cultural, and ultimately democratic.
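To make that structure concrete, here is a minimal, purely illustrative Python sketch of how a parliament might record a self-assessment against the Framework's six levels and four dimensions. Levels 0 and 5 echo the Framework's published endpoints; the intermediate level names and the "weakest dimension" scoring rule are illustrative assumptions, not the IPU's official terminology or tooling.

```python
# Illustrative sketch only. Levels 0 and 5 echo the Framework's endpoints;
# the intermediate labels and the "weakest dimension" scoring rule are
# hypothetical simplifications, not IPU terminology or tooling.
from dataclasses import dataclass

LEVELS = {
    0: "Ad hoc and informal use",
    1: "Emerging awareness (hypothetical label)",
    2: "Defined processes (hypothetical label)",
    3: "Managed practice (hypothetical label)",
    4: "Measured and optimized (hypothetical label)",
    5: "Advanced, leadership-oriented practice",
}

@dataclass
class Assessment:
    """A parliament's self-assessed level (0-5) in each dimension."""
    governance: int
    technical: int
    organizational: int
    democratic_impact: int

    def overall(self) -> int:
        # Assume an institution is only as mature as its weakest dimension.
        return min(self.governance, self.technical,
                   self.organizational, self.democratic_impact)

example = Assessment(governance=1, technical=2,
                     organizational=1, democratic_impact=0)
print(f"Overall maturity: Level {example.overall()} "
      f"({LEVELS[example.overall()]})")
```

Read as a diagnostic, an output like "Level 0" is exactly the kind of realization Andy describes later in this conversation.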
In this conversation, Andy Williamson, senior researcher at the IPU Centre for Innovation in Parliament — one of the architects of the framework — returns repeatedly to a second, equally important insight: all parliaments, at all stages of maturity, have something to learn from one another.
As he puts it, “every parliament should share its experiences, regardless of where it is on the maturity scale.” Learning, however, works best when it is lateral and contextual. “If you’re Vanuatu, talk to Fiji,” Andy notes, emphasizing that peer learning is most effective when institutions are on a similar journey, facing comparable constraints. More broadly, he reminds us that “the best place for a parliament to learn is another parliament.”
Finally, what made this conversation especially compelling for me is that it treats parliaments not as abstract organizations, but as institutions with personalities — distinct cultures, incentives, and ways of working that shape how AI is received, resisted, or embraced. Andy and I share this instinctive way of thinking about legislatures, and it runs throughout our exchange. I genuinely enjoyed this conversation, and I hope you will too.
Beatriz: How did the preparation of the Maturity Framework for AI in Parliaments unfold?
Andy: It came out of a meeting of our Parliamentary Data Science Hub in The Hague. We initially drafted it almost as an internal document, meant to provide guidance to some parliaments. However, it began to gain traction and the more feedback we received, the clearer it became that it was genuinely useful. Taking a parliament-specific approach was particularly valuable. The framework developed through a highly iterative design process, working with several parliaments as it evolved.
Beatriz: It sounds like the development was quite organic.
Andy: Yes, it was. In this case, it made sense because we did not really know what the framework was going to become. I think it is fair to say that it has exceeded my expectations. It has turned out to be genuinely useful.
Beatriz: It definitely is. Why does AI need its own separate framework? How is it different from other digital tools?
Andy: The reason it does, at least for now, is because AI is transformational. It is new, all-encompassing, and its impact will be extremely significant. In many ways, this moment is comparable to when organizations first developed “Internet strategies.” We are facing a paradigm shift in how we function — or how we will function. It’s a technology, yes, but it’s a technology that can be transformative.
If you think back to the introduction of digital tools into parliamentary and legislative practice, those changes were also transformational. Suddenly, tasks could be done better, faster, and at lower cost. If you look at the longer history of Parliament, the printing press itself was a major turning point. In the UK Parliament’s Archives, there are shelves of documents written by hand on goat vellum. Then we moved to printing, then to computers, then to putting materials online, and now to tools that help us draft them. There is a long trajectory here.
In that sense, there is nothing entirely new about AI. It is a paradigm shift and a transformational tool, but at the end of the day, it is still a technology. The author Douglas Adams captured this beautifully when he said that a technology is something developed in your lifetime; if you grow up with it, you no longer think of it as technology. Do you think of television as technology? AI feels like “technology” precisely because it is new. Over time, it will become normative and simply part of how we operate.
As that happens, the need for AI-specific strategies will diminish, just as we no longer talk about having a separate Internet strategy. The Internet is now assumed — it is folded into IT, digital, or business strategies. AI will follow the same path. But first, it has to reach maturity. We need to treat it differently because it represents a paradigm shift. That means focusing deliberate attention on how we use it properly, so that changes in process, culture, and function can be absorbed and normalized. Ideally, at some point in the future, no one will be talking about AI strategies at all — just strategy.
Beatriz: Do you have any expectations about how long that might take?
Andy: No, because everything is changing so quickly. What I do expect, though, is that we will see this wave begin to slow. Right now, it is easy to be impressed by things like generating videos of dancing monkeys or Santa Claus or by having AI write essays. That is all essentially candy floss. It is entertaining, but it is not a serious application. When we reach genuinely meaningful use cases, innovation slows because we need to introduce human constraints and regulatory frameworks. The curve has risen very steeply, but it will level off. At that point, we will enter a more familiar technology adoption cycle, from early adopters through to laggards. Things will begin to decelerate, settle, and eventually become normative, absorbed into our broader technology and business strategies.
Beatriz: What problem were you trying to solve when you realized the Guidelines for AI in Parliaments were not enough on their own?
Andy: There are two ways into this question. The first is very practical. When we developed the AI guidelines, we could see that they contained a great deal of genuinely useful material: guidance on procurement, on AI literacy, on strategic governance, and so on. The problem was where to start. It was difficult to find a clear entry point into the guidelines.
It was a bit like walking into a sweet shop filled with jars of excellent sweets on every shelf. Everything looks appealing, but you immediately wonder: where do I begin, and why? The first step in unpacking that problem was recognizing that the answer is: it depends. What one parliament should do first, second, or third will not be the same for another. It depends on the institutional culture, the level of digital maturity, and the broader context in which a parliament operates.
If you simply drop a large, comprehensive set of AI guidelines onto someone’s desk and say, “Here you go, do AI,” the result is overwhelming. People think: where do we start? What do we prioritize? The framework emerged from that challenge. It asks: where are you on the AI journey? Where are you in terms of digital maturity? What are your cultural and constitutional constraints? Once you map those dimensions, you can say: if you are here, start with these specific elements. Do not worry yet about the more advanced issues; those will come later. Build up gradually.
The second way into this came from what began almost as an intellectual exercise — an attempt to make the guidelines more accessible. As we started testing the framework with different parliaments, it became clear that it was valuable in its own right. It gave parliaments a way to pause and take stock of where they actually were.
A good example is a workshop I ran with an African parliament last week. They had previously had a session focused on tools, and the discussion was very much, “We should use this tool, we should use that tool, we could have this, we could have that.” I asked them to take a step back: why do you need these tools? How will you manage them? What are you actually going to do with them? At that point, they realized things were becoming unclear.
I then showed them the maturity framework. They looked at it and said, “Oh, we’re at level zero.” And they were right. They were operating in a very ad hoc, tool-driven way. The question then became: what needs to come next? As they worked through the framework, they realized they needed to focus first on processes, rules, oversight, validation, and the role of humans in the loop. By the end of the session, they were much less focused on racing ahead with tools and much more focused on building the right processes.
That is exactly what the framework is designed to do. The rush to adopt AI creates risk. The framework helps introduce structure, confidence, and guardrails around how AI is adopted, rather than encouraging an uncritical race forward.
Beatriz: That is very interesting, because that moment of realization — “oh, we are at level zero” — is something I felt quite strongly in Malaysia on the first AI for Parliament day. The moderator used a very quick assessment tool to help participants see where they stood, and you could almost read it on their faces: “okay…” It made me think that, perhaps, they do not often take the time to reflect on how work actually happens inside the institution.
Andy: Yes. I actually designed the tool that Peter Reichstädter used for that assessment precisely because I wanted it to have that effect. I wanted people in the room to recognize where they were on the journey. In most cases, they were still at home, packing their suitcase, not even on the bus yet. They were at a very early stage. That realization was really important.
What we are often seeing in parliaments is a growing gap between individuals and the institution. Individual staff members — and Members themselves — are already using AI tools in their day-to-day work, while senior managers often have little or no visibility into what is happening. That leaves parliaments with a difficult choice: do you let this activity run free, or do you try to lock it down? In practice, when leaders do not know what is going on, it tends to run free by default.
As a result, most of the parliaments you encountered in Malaysia would be at level zero or level one: just beginning to think about process. And that is perfectly fine. That is exactly where we would expect them to be. When I first tested the six-level model with parliaments such as the UK, Canada, Brazil, and Ireland (institutions we typically think of as digitally mature), not one of them felt confident claiming they were beyond level two. Some were beginning to touch level three in specific areas, but overall, we are all still at an early stage.
Beatriz: In a sense, you have external validation for your experiment from me. That is exactly what I observed as well.
Andy: Yes. Simply putting the six-level diagram on the screen and saying, “Okay, let’s talk about what you are doing,” tends to trigger that realization. My CIP colleague Avinash Bikha brought back exactly the same feedback from Malaysia. There was a lot of “Oh my, where are we? What is this? This is complicated and confusing.” Then you show the framework, and suddenly it clicks: “Ah, I see. This is a journey.”
Beatriz: It really does feel like a journey, and like everyone is a little lost, trying to understand where they are and what they can realistically do.
Andy: And that is actually okay. We are working in a space that is traditionally very conservative and risk-averse, and we have suddenly introduced a technology that is radical and potentially risk-generating, or at least changes the appetite for risk.
This connects to something slightly tangential but relevant. My PhD focused on how democratic change happens, and one of the issues I examined was why some movements fail while others succeed. Often, change succeeds when someone inside the institution looks at what people on the outside are doing and says, “Wait a minute, I get this.” The person outside the barricade is labelled a rebel or a radical; the person inside, doing essentially the same thing, is called an innovator or a change agent. We assign a positive label internally and a negative one externally to very similar actions.
In a way, AI is being received by parliaments through that same lens. It depends on how they interpret the role it plays. Are they looking for agents of change who can innovate from within, or do they see AI primarily as risky and disruptive? We are still navigating where that line sits.
Beatriz: This brings me to my second question: how did you manage to develop a framework that respects the diversity of parliaments? There is a clear structure, with defined maturity levels, but there is also a great deal of flexibility built into the framework.
Andy: Within each level, there are four development areas: internal governance, technical, organizational, and democratic impact. The framework emerged organically from our thinking. The maturity levels made sense, and the development areas were quite logical. What only became clear about halfway through the process, however, was how differently parliaments were approaching AI in practice. As we looked more closely, we started to see distinct models.
Some parliaments adopt a strongly governance-led approach: they want to put governance structures in place first and only then begin experimenting with AI. Brazil is a good example of this. Others take a compliance-based approach: they are interested in using AI, but only within clearly defined guardrails. This aligns closely with the classic Gartner model of digital transformation, and the UK is the clearest example of that.
There is also a systems-oriented approach, where institutions focus on developing systems and rolling them out incrementally by application. That is closer to what we see in Chile, with a more iterative development of specific tools. Finally, there is a user-driven approach, where the starting point is what users actually want to do, and AI is then explored as a way to support those needs. Canada fits that model quite well.
Beatriz: Right, so that is why democratic impact is included as one of the pillars.
Andy: Exactly. When you are dealing with a parliament, if what you are doing does not affect democracy in some way, then the question becomes: why are you doing it at all?
Beatriz: Because the other three pillars are internal, correct?
Andy: Yes, they are — and they are important. You want strong governance, because governance is directly linked to trust. Technical capability relates to efficiency. Organizational capability is about skills, change management, and process control. All of those elements matter, and you want them to be robust.
But at the end of the day, the purpose of Parliament is democratic impact. At some point, we need to be able to assess whether AI is truly transformational. If it is merely a support tool — like a spreadsheet or a database that helps us do our jobs a bit more efficiently — that is useful, but it is not transformative. If it is a transformational tool, then we should be able to see a positive impact on the democratic system, even if that impact only becomes visible over time or in indirect ways.
And in many cases, you can see it. Take a select committee that runs a public consultation and actively promotes it. Suppose it receives 3,000 qualitative responses. How is a committee clerk expected to analyze 3,000 submissions in a week and produce something meaningful? With AI, that analysis can be done in half an hour, providing a clear initial steer. It will still take a day or two for a human to review, interpret, and refine the results, but qualitatively, you are already far ahead. That is a clear example of positive democratic impact: the institution becomes better able to absorb and respond to public sentiment. That is a real gain.
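As a rough illustration of the workflow Andy describes, the sketch below batches consultation submissions through a large language model to produce a first-pass thematic summary. It assumes the OpenAI Python client; the model name and prompt are placeholders, and the output is only an initial steer that a human must review, interpret, and refine, inside whatever governance and validation controls the parliament has set.

```python
# Rough sketch of a first-pass thematic analysis of consultation submissions.
# Assumes the OpenAI Python client (pip install openai) with an API key in the
# environment; the model name and prompt are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_batches(submissions: list[str], batch_size: int = 100) -> list[str]:
    """Return one thematic summary per batch of submissions."""
    summaries = []
    for i in range(0, len(submissions), batch_size):
        batch = "\n---\n".join(submissions[i:i + batch_size])
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; choose per procurement rules
            messages=[
                {"role": "system",
                 "content": ("Identify the main themes, concerns, and points "
                             "of agreement in these public consultation "
                             "submissions. Be neutral and concise.")},
                {"role": "user", "content": batch},
            ],
        )
        summaries.append(response.choices[0].message.content)
    return summaries

# The clerk then reviews, interprets, and refines these summaries over a day
# or two -- the human stays firmly in the loop on anything substantive.
```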
For that reason, it is essential that democratic impact is something we explicitly measure. One of the realizations that emerged from working on the AI guidelines is that, as they currently stand, they work quite well for parliaments that are primarily focused on strong governance. They work less well for parliaments that approach AI through a user-journey or compliance lens. Those institutions are operating with a different mindset. That is why we needed a maturity framework that could accommodate different governance models — different institutional approaches, or, if you like, different parliamentary personalities.
Beatriz: I like that term, “personality,” because I do think parliaments have personalities.
Andy: They absolutely do. In fact, it is quite striking. Having worked in the UK Parliament, I was at a select committee meeting this week near where I live in the north of Scotland and realized that back in 2010, I had written a report for the House of Commons arguing that, if Parliament wanted to reach the public, it should take select committees out to where people actually are. And here we were, doing exactly that.
Beatriz: That is pretty remarkable.
Andy: You can see that there are ways of reflecting on the culture of a parliament. When I wrote that report, I looked at the mission statement of the House of Commons and examined how it defined its stakeholders. Those stakeholders were listed as Members, parliamentary staff, civil servants, and “others.”
Beatriz: Who were the “others”?
Andy: Essentially, 99.9% of the population fell into that category. That framing tells you a great deal about how public engagement was being understood at the time. If you compare that with where the UK Parliament is today, the shift is quite striking. It shows that if you work with the culture of an institution, you can change it — but only if you first understand it.
With AI, the situation is very similar. We cannot simply drop AI into an institution and expect it to work. We have to understand the institutional culture in order to identify both the opportunities and the barriers. That is essentially what the framework is trying to do, in a very non-controlling and non-judgmental way. It is meant to prompt institutions to ask these questions themselves.
Beatriz: In that sense, the framework does that work directly. You also emphasize the importance of keeping humans in the loop. Humans are essential. How do you conceptualize the ideal division of labor — if such a thing exists — between humans and AI across the different maturity levels?
Andy: The short answer is that I do not think there is a single ideal division of labor. That balance has to be determined by each parliament, and it will vary depending on the business area, the specific process where AI is being used, how critical that process is, and the level of transparency required.
For example, if an MP asks a researcher to prepare a speech on post-it notes, and the researcher uses ChatGPT to get a basic overview and then copies and pastes it, that does not particularly concern me. But if you are drafting legislation and using AI to generate content that will shape the substance of a bill, then I absolutely want a legislative drafting expert sitting firmly in the loop. The difference lies in the criticality of the task and the maturity of the process.
What matters most in the framework is not drawing a precise line about who does what, or prescribing specific roles. We often talk about “humans in the loop” in a purely technical sense, but what is really critical here is humans in the loop when it comes to defining governance and regulation. AI should not be allowed to default into use without critical reflection. There needs to be active engagement from parliamentary staff and Members in deciding how AI will be used.
In that sense, “humans in the loop” is not only about supervising tools as they are used. It is also about humans being central to decisions about when, where, and why those tools are deployed as part of a parliament’s broader AI maturity.
Beatriz: We were all gathered in Malaysia to learn from one another. In that context, what criteria can parliaments use to know when they are ready to share their experiences, depending on where they sit along the maturity spectrum?
Andy: I think every parliament should share its experiences, regardless of where it is on the maturity scale. All experiences are useful, and all are different. One reason for this is that learning is most effective when it happens between peers who are relatively close to one another.
If a parliament is at level zero — take, for example, a small parliament in Africa, Asia, the Pacific, or the Caribbean, such as Bermuda, Vanuatu, Laos, or Guinea-Bissau — asking Canada, the UK, or Italy for guidance can be overwhelming. There may be a great deal of impressive work to look at, but the gap is often so large that it becomes difficult to translate those experiences into practical next steps.
In those cases, it makes much more sense to look laterally. If you are Vanuatu, talk to Fiji. If you are a small parliament just beginning to figure this out, find another parliament that is one step ahead of you and move forward together. The more we share honestly about where we actually are — both in terms of AI and in terms of broader digital maturity — the more useful those exchanges become. AI is not operating in isolation; it is embedded in a wider parliamentary culture around digital capacity. AI may be transformational, but it is still a technology, and all those surrounding conditions matter.
That is why it is often most helpful to find a parliament that is on a similar journey, or just slightly further along. I can support parliaments with advice and frameworks, but the people who really “get it” are those who are experiencing similar challenges at the same point in the process.
We see this not only at the institutional level, but also with individual members. The most effective way to help a Member of Parliament learn how to do something is often to have another MP show them.
Beatriz: That is a very interesting way of thinking about it. I had never thought about it that way. Why do you think that is?
Andy: Because people learn best from those they recognize as peers. They see someone who looks like them, sounds like them, understands their role, and faces similar pressures. When that person says, “Here is how I do it,” the advice is far more readily accepted than when someone like us comes in and says, “This is how you should do it.”
Beatriz: In this sense, a country like Paraguay, for example, should look to another parliament at a similar level to work things out together, rather than focusing on the UK or another country that is much further ahead.
Andy: Yes, exactly. And in practice, who is Paraguay talking to? Chile. Chile is quite a bit further along in terms of digital maturity, but it shares a similar political culture and way of thinking.
Beatriz: Just to make sure I understand correctly, the key is to look for partners with a similar culture, not necessarily the same level of maturity.
Andy: It can be both. Cultural similarity is very useful, but parliaments can also learn across differences in maturity. I have seen extremely effective inter-parliamentary exchanges between institutions you might not initially expect to work well together, but they do because there is a shared agenda. Parliaments are unique institutions; there really is nothing quite like them. For that reason, the best place for a parliament to learn is from another parliament. In many cases, that shared institutional culture alone is enough to make the exchange meaningful.
Beatriz: Coming out of the Malaysia conference and based on the feedback you have heard so far, are you already thinking about next steps, particularly in terms of bringing people together again to continue these discussions? What can we expect next year?
Andy: Now that the maturity framework is out, the immediate priority is to ensure it is properly disseminated. Releasing something just before Christmas is never ideal, so early next year we will focus on raising its profile and encouraging wider engagement.
We are also planning a meeting of the Parliamentary Data Science Hub, likely in May. This hub brings together what we refer to as Tier 1 parliaments. One thing we do not explicitly address in the maturity framework is digital maturity more broadly, which is where we categorize parliaments into Tier 1, 2, and 3. Tier 1 includes the more digitally advanced parliaments — the “big beasts,” if you like — although we are not prescriptive about membership. The hub is currently very focused on AI, but it also covers wider digital transformation issues. The plan is to hold that meeting in Rome in May.
In addition, we would like to run a series of webinars in the period leading up to that meeting.
Beatriz: To help disseminate the framework?
Andy: Yes, among other things. We are still in the process of planning our webinar series for next year, so the themes have not yet been finalized. I am sure the maturity framework will feature prominently, but we have not had a chance to map that out in detail yet because the focus has been on getting the framework finalized and published.
Beatriz: Is there anything I did not ask that you think is important to highlight either about the framework itself or about the Malaysia event?
Andy: Two things come to mind. First, the framework was not developed in isolation. It exists and works because of sustained input from parliaments themselves. Our model is very much “for parliaments, by parliaments,” and I think that came through clearly in Malaysia as well. The real value lies in having parliaments in the room, sharing their experiences, learning from one another, and collectively addressing challenges.
The second point is something we are seeing across much of our current work: AI cannot be treated as the responsibility of a single person or unit within a parliament. It cannot sit in an internal silo. This requires institution-wide change. That means leadership at the very top, on both the administrative and political sides. You need the Speaker involved, you need parliamentary staff engaged, and you need the Secretary General on board. AI adoption is an institutional issue, and it has to cut across the entire organization.
Beatriz: When you look at parliaments that are most advanced in their use of AI, do you see that kind of institutional embrace?
Andy: Very much so. The UK is a good example. There is an AI steering committee at the parliamentary level, and each House has its own AI committee chaired by the Speaker. Within the parliamentary administration, there are additional committees and working groups. The result is a clear central structure with coordinated branches, which makes the whole system coherent.
It is also interesting to see how priorities differ between the two Houses. The House of Lords is more focused on process and on integrating AI safely into parliamentary practice. The House of Commons, by contrast, is much more concerned with issues such as deepfakes, disinformation, and the misuse of AI.
Beatriz: Which is not a coincidence, given that those were the two tracks of the Malaysia event: policy and AI in Parliament.
Andy: Exactly. In the UK context, that difference reflects institutional roles. The House of Lords is appointed and therefore tends to focus on how it conducts its work, while the House of Commons is elected and is understandably more concerned with electoral risks.
Beatriz: Yes. Thank you very much. This was extremely helpful.
Modern Parliament (“ModParl”) is a newsletter from POPVOX Foundation that provides insights into the evolution of legislative institutions worldwide. Learn more and subscribe at modparl.substack.com.
