Meaningful Knowledge: Constituent Engagement as Qualitative Research

My interview with Chelsea Mauldin, Executive Director of The Public Policy Lab

When I’m talking to people without district or state office experience, I’ll often describe the constituent-facing work that happens in these offices as “submerged” within Congress: the image in my head is DC on a hill (literally a Hill), surrounded by water, with all of the local work of casework, outreach, convening, coordinating, recognizing, witnessing, representing, and so on happening underwater. Folks in DC know that there is a vast and complex ecosystem down there, but they can often only see what rises to the surface, or a distorted, shadowy view of the depths.

I was thinking of this image again in my conversation this week with Chelsea Mauldin, Executive Director of the Public Policy Lab. Chelsea is a qualitative researcher and design expert whose work focuses on government innovation and program design. Our conversation began with the power of qualitative data and research: the non-numerical information that captures the richness, context, and meaning behind people's experiences, opinions, and behaviors. It then ranged across the policymaking cycle, AI, institutional knowledge transfer, and much more. One thing that really struck me was how she contrasted this rich data with some of our traditional quantitative metrics around democracy, which flatten the constituent experience in ways that can rob it of meaning.

Many constituent-facing staff are already in the thick of this type of work, instinctively. You can’t help but learn through interacting with constituents in a really profound way, and those shower thoughts we talk about in this conversation — the “aha!” moments where you see connections between conversations — happen whether you’re looking for them or not.

But it’s surprisingly rare that we step back and think of constituent engagement as qualitative research in its own right, and rarer still that the full depth of this insight makes it out of the water and up the Hill to DC.

I think for Congressional teams, this is a potentially powerful reframing of some elements of constituent engagement. What would it look like to orient engagement around soliciting qualitative data, and learning from constituents? How would practices of engagement look different if that was the primary goal? (What is the primary goal today?)

That reframing also elevates the expertise needed to do a really, really good job of constituent engagement: taking in all of this incredibly rich data and distilling it into insights that can actually inform policy. In that way, it also flattens the hierarchy of Congressional offices by elevating the work of district- and state-based staff.

So we’re unflattening data and flattening Congressional office hierarchies. Cool? Cool.

Thanks so much to Chelsea for joining me!

Chelsea Mauldin is a social scientist and designer with a focus on government innovation. In addition to directing the Public Policy Lab, she is an adjunct professor at Columbia University’s School of International & Public Affairs and a frequent keynote speaker and panelist.

Prior to co-founding PPL, Chelsea consulted to municipal and federal agencies, directed a community-development organization, led government partnerships at a public-space advocacy nonprofit, and served as an editor for publishing, arts, and digital media organizations. She is a graduate of the University of California at Berkeley and the London School of Economics.


Transcript

This transcript has been edited for clarity and may differ slightly from the audio recording.

Anne Meeker: Chelsea, thank you so much for joining us for Voice/Mail. I know you and I were connected a few months ago and just instantly had that kindred spirit around qualitative data for legislation and policy and all of those things. So I have been so excited to talk to you about all of this. Could you start by just introducing yourself and give us a little background on you?

Chelsea Mauldin: Thanks so much for inviting me to come and speak with you. I'm excited to talk, and I'm excited particularly to talk with the hopes that a bunch of people who are working with constituents are going to hear about these ideas. So I'm Chelsea Mauldin, I'm the executive director of the Public Policy Lab. We are a not-for-profit organization which is based in New York City, but we do ethnographic research, human-centered design, policy development and implementation support, and sort of operational scale support for government partners across the country, both at the federal, state and local level. We've been doing this for about 15 years.

Anne Meeker: And what was your path to the Policy Lab? How did you get here?

Chelsea Mauldin: I have a social science and design background, and I had been working in and around the sort of strategic operations of government for a number of years. And then some colleagues in the design world here in New York City began to get together in 2008, 2009 to say, "You know what? It seems like designers know a lot of useful things that perhaps the federal government doesn't know that we know. We know how to come up with new ideas and test them in a way that seems like it would be useful, particularly in the provision of social services."

We observed that there is often a process of engagement and prototyping and testing when it comes to the development of the built environment — you would never build a new building without first having blueprints. But so often when major social service delivery programs are created, there's no meaningful engagement with the people who are going to use those programs or deliver those programs, nor the kind of prototyping and testing of those programs before they roll out at scale. So we created the Public Policy Lab as a way to support government partners with the kind of human-centered research and design skills that we thought they could benefit from.

Anne Meeker: Beautiful. And I know that is just the holy grail for us in thinking about how Congress can legislate and loop the constituent input that comes in through casework into this big, beautiful feedback loop. And then all of our problems will be solved.

Chelsea Mauldin: I think obviously people who work closely with members of the public, whether they're frontline service providers in a social service agency or constituent services staff at a legislator's office, they're going to hear stuff from members of the public all the time about what's working and what's not working. I think one of the things that the design professions can bring, and where we can be valuable, is that we know how to take that feedback and then prototype solutions based on it — solutions which can then be tested and evaluated so that you can actually build multiple potential ways of responding to the public's needs and try them out before committing to them on a city, state, or national scale.

Anne Meeker: Incredible. And in just a second, I want to really, really dive in on that question about qualitative input: how it's helpful, why it's helpful, and how we help legislative staff make sense of all of it. But before we get there, I would love to get a little bit deeper on the work that you do. In particular, the People Speak platform, which I think we've linked in a Voice/Mail newsletter before because I thought it was so cool.

Chelsea Mauldin: It's actually called the People Say.

Anne Meeker: The People Say, excuse me!

Chelsea Mauldin: It's at thepeoplesay.org. And this is a platform that emerges out of work that we've done over the past decade or more, where we have observed that our organization and many smart and committed folks who are working inside of public agencies are actually doing meaningful qualitative research with the public to understand their needs and requirements, but that that data ends up siloed and locked away. It lives on some SharePoint drive, and no one ever sees it other than the team who conducted the research. So we began to feel like it was really important that all of the research that we conduct in the public interest to benefit public programs and public policy delivery, we should try as much as possible to make it public and open.

In addition, we began to observe that a lot of the research that got delivered to other people was delivered in the form of some excerpted quotes — some flat quoting of something that somebody had said. But when you're a researcher and you go and you sit in somebody's kitchen, you meet with them in a church basement, you hear them describe what's going on in their lives, there's so much rich contextual information that comes from doing in-person research. People's demeanor, their tone of voice, their environment, the context in which they're operating — that gets lost when all that you convey to people who are absorbing your research output is some extracted quote. So we also began to feel like it was really important that we film the research that we're doing and capture it in real time, so that you can actually see and hear people talking about their lived experiences.

So, The People Say is a platform that tries to do both of those things. It tries to both collect multimedia qualitative research — members of the public talking about their policy experiences and policy needs — and then it puts it on this public platform with all of the data tagged and searchable. If you're, for example, curious about what New Yorkers or other Americans are saying about their experiences with Medicaid, you can go to our platform and there is a Medicaid tag. And then you can see a bunch of excerpts from policy design research of people talking about their Medicaid experiences and Medicaid needs.

We have focused so far particularly on older adults. So those Medicaid users are “dual eligibles”: people who are eligible for both Medicare and Medicaid. So we've been focusing on an older-than-65 population. Our wonderful partner in this work is a foundation out of California called the SCAN Foundation, which is focused particularly on older adults' health and wellbeing. But we are looking to expand the data which is on the platform to other populations as well.

Anne Meeker: So for any legislative staffer on Aging, on Social Security, on older adults’ issues — the ability to just log on and say, "Hey, my boss told me to come up with five new legislative ideas by the end of the week. Let me look for some of those tags" — that would be an incredible resource.

Chelsea Mauldin: And what we've done is, in addition to providing the entire public tagged database, which now has almost 3,000 different data units in it that you can search through, we also have ten sort of summarized policy insights. And that's really us, as qualitative researchers having conducted all of this research, asking — what big themes kept coming up over and over again? What things did people describe to us recurrently as issues or needs? And we've summarized those into ten areas where we think there are opportunities for policy improvement.

That's stuff that will not come as a surprise, probably, to most people watching this. It's transportation; it's health care; it's finances for basic needs; it's housing; other topics like that. And for each one of those policy insight areas, we've also collected playlists of video clips that substantiate those particular policy needs.

Anne Meeker: Amazing. So it's never just you're relying on the analysis of the research team, however wonderful that may be, but you can ground truth it and get back to — and this is exactly, as you said — the context in which this person said this thing.

Chelsea Mauldin: We also really wanted to make sure that people saw the relationship between qualitative and quantitative data sources. So on many of the data units, you'll see that there are these things that we call "out links," which are essentially connections to major quantitative survey studies funded by the federal government about the experiences of older adults. Because a lot of times we think that people will say, "Oh, well, this is one person talking about their experience. To what degree is this a representative experience?" Well, that's not the kind of research we do, but there are people who are doing these large, statistically representative surveys of older adults. And so wherever possible, when there's a correlation between the qualitative and quantitative data, we've provided links so that folks can sort of immediately click over and see, "Oh, here's survey data which is also speaking to the same issue or same point."

Anne Meeker: You just anticipated what I wanted to get into next, which is perfect. I remember this from my baby-caseworker days — when you said "data," I thought numbers, right? So when I was asked to pull insights out of the casework that I had in front of me, from our team's backlog of casework, my first instinct was, "Okay, I have to count things, and I have to come up with statistics and percentages" — which is also hysterical because I am an anthropology major and I should have known better than that.

But I think that is kind of the bias in government: there is this natural bias for quantitative metrics. They're somehow seen as more authoritative. They're seen as kind of giving you the bigger picture for what's going on. And you have to start to make the case for qualitative data and why that should get equal billing, at least, for folks who are thinking about policy and thinking about how to improve things.

So if you were talking to a congressional staffer — a caseworker, or a policy analyst, legislative aide, whatever — how do you go about making that case for qualitative data?

Chelsea Mauldin: Well, I think that what we would always argue for is what is often called in the field "thick data" — data where one combines what you can learn from quantitative studies or tracking of administrative data with the kind of rich and contextual insights that are available when you do qualitative data collection.

So what a lot of, say, survey data will tell you is what people think about a given thing. Like, "Do I want to continue to receive Social Security benefits? Yes or no?" But it won't tell you why people are saying that, or how Social Security benefits are affecting someone's wellbeing and their ability to live a safe and secure older age.

If you actually want to learn about what people's motivations are, what the kind of contextual environment of their opinion-making is, or even the sort of implicit cultural beliefs or unspoken values that underlie their opinion statement, then you need to do qualitative research because you actually need to dig into "Well, why did you answer that? Or what does that mean to you?"

A lot of times people will tell you things about themselves and what's important to them without them coming out and saying, "This is what's important to me." They will describe a life which is rich in community connections, or they will describe a level of loneliness in their lives. And I'm thinking about a lot of this older adult research we've done — most of those folks don't come right out and say, "I'm lonely and isolated." But they describe a life in which they make reference to the fact that they don't have people to talk to very often, for example. That suggests something about a set of needs that you can derive from that information, which they would not have told you in a survey, for example.

I'm going to stake out a controversial position here, which is that I actually think that a lot of what passes for sort of sober evaluative metrics is, in fact, not meaningful. I'm going to be polite — I think that a lot of what gets measured is what is easy to measure, and a lot of what is most meaningful in existence as human beings... We're still human beings. We're not AI-enabled robots yet. These are exactly the kinds of things that are almost impossible to quantify.

So when you think about what so many policy-enabled programs do, they create safety and security for our society. We don't want to live in a society where people are living in unsafe living situations, where people don't have enough food, where people don't have enough heat. We want our population to be educated and to flourish. All of these characteristics of human experience are ones which are difficult to meaningfully quantify, I would argue, because they relate to my subjective experience as a human being, which is going to be different from your subjective experience as a human being.

And there are — to go back to people's implicit cultural values — things that are going to feel good for me. I live in New York City. I'm very comfortable riding a subway car full of dozens or hundreds of other New Yorkers. That feels safe and good to me. I know that I have rural relatives who would not be comfortable in an environment which is so densely populated; it would not feel good to them.

So there's something about the ways in which we flatten the specificity of human experience down to aggregated numerical frameworks that I'm deeply suspicious of. And I actually think it's part of the reason why our policymaking is not as effective as it should be. This idea that we will somehow be able to measure our way toward goodness seems... I'm suspicious of that, given my professional life talking to people about their policy experiences.

Anne Meeker: That suspicion sounds very well-founded and very much shared. But I think, coming from the Congressional side of things, there is something to that — in a capacity-starved institution, it is much easier to look at metrics that, as you say, flatten a lot of things into one thing. And I know that really plays out, as we've mentioned, for those constituent-facing staff who do have the extraordinary privilege of access to a lot of that really rich, thick description: interactions with constituents about those kinds of hard-to-quantify, hard-to-flatten elements of their lived experience.

But: when you are 17 people serving a district of 750,000, I think it's tough for offices to figure out how to take the time to do that qualitative research. So I'd love it if you could walk through — if you were talking to a caseworker, or to a district director or chief of staff, about "Hey, qualitative insight is important. Do you have access to it? Where do you start?" — how to operationalize that.

Chelsea Mauldin: Sure. So I think people are familiar with the idea of statistically representative surveys. And there's a sort of relationship between what is the size of the population that you're trying to collect information about — okay, if it's going to be a population of 100,000 or 500,000, I need to talk to 1,000 or 3,000 people in order to get to a reasonable sort of margin of error around what they're telling me.

The same sort of benchmarks exist in doing qualitative research. Many decades of qualitative research, and post facto analysis of that research, suggest that if you have a relatively narrowly scoped research question and a relatively specific population whose experience of that question you're trying to understand, you can begin to get to what is called "code saturation" at about nine people. And code saturation means you have at that point heard most of the major themes, the sort of big topics that are going to come up.

So let's say you're curious about Medicaid work requirements, for example. If you talk to nine people who are currently working-age adults who are receiving Medicaid and ask them about their Medicaid access and how their work life responds to that, after you've talked to about nine people within some geographic catchment area or age band or income group, you'll begin to hear most of the major themes that are going to emerge by doing research with even a much larger version of that population.

Typically, we think that we will get to what's called "meaning saturation," as differentiated from code saturation, after about 24 respondents. After you've talked to about 24 people, you will have heard not only the major themes, but also a subset of nuances around those themes. It's not to say that if you talk to more than 24 people, you won't hear some new things. But after you get to about 24 people with a pretty tightly scoped set of research questions and a pretty tightly scoped population, you will have heard most of what you are going to hear.

So a lot of times people will say to us, "Well, why won't you talk to 150 people?" I mean, we can talk to 150 people, but a lot of times after we've talked to about 24, we will have heard a lot of what we are going to hear and enough that we can derive insights from it to help inform policy or program development.

So one thing I would say — this is the long-winded way of saying — when you and your watchers think about doing this kind of work, you don't have to go out and talk to 150 people or 500,000.
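
(A quick aside on the survey math referenced above, not from the conversation itself: for a simple random sample, the margin of error depends almost entirely on the sample size, not on the size of the population being studied. With n = 1,000 respondents and the worst-case proportion p = 0.5, the 95% margin of error is

$$\mathrm{MOE} \approx z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 3.1\%,$$

which is why roughly a thousand survey respondents can represent a district of 750,000 about as well as an entire state.)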

Anne Meeker: That's so reassuring!

Chelsea Mauldin: You can — and this is a perfectly appropriate and valid method in the human research field — talk to about 25 people, and if you do that well and professionally, learn quite a lot of things.

One thing to say about this kind of research is that, as I mentioned before, conducting a semi-structured interview with someone over an hour or an hour and a half is going to deliver so much more information. If you think of a survey and you ask someone ten questions, you are essentially getting ten points of data back from them — what their answers were to those ten questions. If you talk to someone for an hour or an hour and a half digging into a set of questions, you're going to get tens, hundreds of data points. You can think of it as every sentence out of their mouth is a new piece of information for you, plus also all of the nonverbal, contextual cues that they are conveying to you through the way that they're talking about a thing — what they're comfortable with, what they're not comfortable with, what makes them sad, what makes them happy, how they're interacting with their environment while they're describing whatever it is to you.

So you get so much more and richer information than you would if you put a ten-point survey in front of that same person. Therefore, you don't have to talk to as many.

Anne Meeker: Which is wild. And first of all, I think it's going to be so reassuring for offices to hear that this kind of structured research is very doable within the bounds of the ways offices are already out talking to constituents. But if you were operationalizing this — if you did three casework office hours at Councils on Aging across your district, with a little bit of nudging around to try to get a representative sample — you would easily be at 25.

Chelsea Mauldin: Totally. Here's where the sort of professional skill set comes in: it's in crafting research plans and guiding a research engagement which is non-leading. You want people to surprise you. So on some level, if you go in with a set of questions that you have predetermined and you just ask those questions, you will leave so much of the value of doing this kind of research behind, because what you actually want is for people to begin to tell you things that you didn't even know to ask — because you're not living their life. You don't have their lived experience. You actually want to get new content from them to inform your decision-making process.

So again, sometimes partners will ask us, "Oh, do you ask all the same questions to everyone who you speak to?" And we always say we would be under-delivering so much to you if we did that, because our job is to create a sort of research framework with somebody who we're talking to and then invite them to tell us all kinds of things that are germane to that topic, that are not necessarily what we would have known to ask them — so that we can come back with a much broader set of perspectives on whatever the problem space is.

The other thing is that after the fact, there's a job of tagging that data. There's a job of looking at everything that you have collected from people and then trying to identify: what are those things that are coming up again and again, and where are there interesting divergences? We will often specifically recruit research participants for what we think of as edge cases. If you're trying to design an effective policy delivery process or an effective program delivery, you actually don't want to only talk to the people who are sort of the mass middle of that group of users. You want to talk to people at the edges — people who live far out of town, people who have vision impairments, people who are very low income, people who are very old. You want to talk to people at the edges of the user population, and at divergent edges, so that you can understand: if we can design a policy to be effective for people on the edges, we will almost certainly meet the needs of people in the middle, right?
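
To make the tagging step concrete, here is a minimal sketch of what a tagged set of interview excerpts and a theme-recurrence count might look like. It is purely illustrative: the excerpts, tags, and field names are hypothetical, not the actual schema of The People Say or the Public Policy Lab's tooling.

```python
from collections import defaultdict

# Hypothetical tagged data units: each pairs an excerpt with
# researcher-applied theme tags and a respondent ID.
excerpts = [
    {"respondent": "R01", "tags": ["transportation", "rural access"],
     "quote": "The bus only comes out here twice a day."},
    {"respondent": "R02", "tags": ["health care", "finances"],
     "quote": "I skip refills when the copay goes up."},
    {"respondent": "R03", "tags": ["transportation", "caregiving"],
     "quote": "My daughter drives me, but she works weekdays."},
    {"respondent": "R01", "tags": ["finances"],
     "quote": "The heating bill is the one I worry about."},
]

# Count distinct respondents per theme -- a rough proxy for the
# "coming up again and again" test described above.
respondents_by_theme = defaultdict(set)
for unit in excerpts:
    for tag in unit["tags"]:
        respondents_by_theme[tag].add(unit["respondent"])

total_respondents = len({u["respondent"] for u in excerpts})
for theme, people in sorted(respondents_by_theme.items(),
                            key=lambda kv: (-len(kv[1]), kv[0])):
    print(f"{theme}: raised by {len(people)} of {total_respondents} respondents")
```

The same tagged structure also makes the edge-case check easy: filter units by a tag like "rural access" to see whether the people at the margins of the user population are actually represented in what you've collected.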

Anne Meeker: And can we go back to that idea of scoping? So again, breaking this down to the skills that a congressional office would need to do this well — you talked about tagging, which sounds like a skill set in its own right. But even just how to scope that research question: do you have any examples of something tightly scoped that would work really well in this context?

Chelsea Mauldin: So we typically go through — I mean, we have a sort of ten-step model that we use to actually get from the point of an initial problem to something new implemented in the world. But let's just talk about the first four steps of that.

First, there's: okay, what are we trying to solve for? What is the intention of talking to people? What's it for? What do we actually — what do we think we want to make? How do we think we want to alter something in the world? And getting really clear on that, as opposed to just doing some kind of open-ended listening session. We as designers are actually attempting to make a thing. What is that thing that we are going to try to make? So getting clear on that.

Once you have some clarity on what you think you're trying to make, then you can say, "All right, who do we need to talk to to understand how to make a better thing for that?" And that's typically not only members of the public — it's the frontline staff who are going to deliver that policy or that service. It's people who are in operational roles who never interact with the public, but actually run the systems that ultimately serve the public. It's people in leadership and executive roles who are actually going to be the ones making decisions about how those programs run. We need to actually hear from all of the human beings who are involved in some kind of process or system.

And then we think, "Well, what kinds of things do we want to know from them?" And based on that, we create first what we call inquiry areas, which are basically just stuff that we need more information about. And then we'll say, "Okay, what kinds of questions might we ask in order to probe into those inquiry areas?"

I always tell my teams: I don't want you walking into an interview with a long script, because then you're just going to follow your script. I want you to write that script so that you can internalize it, and then I want you to just do really good listening in the research engagement, because the really good listening is going to allow you to go off on diverging pathways with different people.

Once you have the data, then you're going to collect it and tag it, and you're going to say what kinds of themes are emerging from it. And then it moves into the most delightful — and also in some ways mystical — component of the research experience, which is synthesis, where you sort of sit with all of the information that you've collected through research, and you ask yourself: what unexpected relationships are emerging out of this data? What intuitive senses do I have about, "Oh, this person told me this thing, and this other person told me this thing. I think those two things are actually related somehow, even though they didn't tell me that."

It's where you allow your mind — your human mind — to sift through everything that you've learned and begin to make intuitive connections between different pieces of data. And that's where the kind of creative component of the research work comes into play. That then leads you to a place where you have a sense of, "Oh, we think that here are three or five different ways that we could develop a new program, a new policy, a new product, a new tool that would respond to some of these insights that we've developed out of the data that we've collected."

Anne Meeker: It does sound like the fun part. And for the congressional offices here — it's not necessarily that you need to go to the top of a mountain and lay out all of your data and look at it. Sometimes you've done all these interactions, and then there's the shower thought that's like, "Oh!"

Chelsea Mauldin: The shower thought is the best thought, you know. Yeah. Or sometimes it's just literally you lock yourself in a room with your colleagues who also did this research with you, and you just sit around staring at each other until something comes out of somebody's head that someone is shocked and amazed by. It really requires non-linear thinking. It requires letting your brain relax enough that you can, in fact, find your way to unexpected relationships and conclusions.

Anne Meeker: Absolutely. That is the fun part — I love this. I love this so much. So again, thinking about how this fits into constituent engagement — and I'd actually love your thoughts on this — there is probably a point in the policymaking process where this type of input is going to be more useful than at others. And maybe that's the initial, "Hey, we're aware of this issue in our district, in our state. What are we going to do about it?" — the open-ended portion? Or maybe — where is it?

Chelsea Mauldin: You're making me want to pull up my little graphic of Lasswell's policy cycle and stare at it. I mean, we have long argued that at any moment in the policy cycle, you can benefit from human-led insight. There is going to be a moment where you are trying to say, "What is our policy agenda? What do we actually think we want to work on and what's important?"

I think that there are moments when you're actually generating responses to that. So if you're first doing classical agenda-setting — deciding what's on the agenda — that's a place where you can learn from members of the public about what their needs are.

Then you can say, "Okay, now we're actually going to create some kind of policy," and you can validate that against what people are saying to you. You can explore with them what the effects or outputs of a policy like that would be in their lives.

You can use data from the public to generate authorization for change. If you need to be able to make the case, "Hey, the people need this," it's helpful to be able to have a bunch of the people's voices saying, "Hey, this thing is needed."

You can use qualitative data and insight in actual policy implementation in terms of saying, "Okay, we have this legislative content. Now, this is actually going to turn into something that turns into a program. How do we make that work?"

You can use human research to help you do evaluation — to that question of, "So we passed this legislation, what did it do?" One way of knowing what it did is go talk to people about how it altered their lives that this thing happened or didn't happen.

And then I think there's actually even this really important part of policymaking that I feel like often gets neglected, which is about decommissioning, which is about saying that “At some point, we made a decision that there should be policy X, but now we all know it's not serving anyone anymore or not serving anyone well enough. How do we kill it?” And then I think that there's an opportunity to engage the public in questions around: how do we decommission whatever this thing is and replace it with something better, in a way which is not disruptive? And again, can you get authorization for that change from the public?

Anne Meeker: I feel like I can just hear — in the future, when this is live and people are listening to it — the wheels turning among outreach folks for creative public engagement on decommissioning. I love that.

Chelsea Mauldin: Well, I mean, this is really — you know, this idea of the policy cycle is contested. Obviously, this is a mid-20th century concept, and it often is — it's never as tidy in reality as it is in theory. But there's definitely some kind of a flow that occurs from initial problem identification and agenda-setting through to authorization and implementation, through ideally to decommissioning. But one would hope that the public would be involved in all of those.

Anne Meeker: Right, and part of that issue is the tools, the training, and the cultural value placed on public input within Congress — and thinking about how to get that input into all of these different stages.

Chelsea Mauldin: Let me just say one more thing here. There's a sort of canonical design framework which is called the “Double Diamond.” And it is this idea that when you are initially seeking to design a thing, you need to sort of open yourself up to a whole set of potential possibilities around what the thing you design could be, and then you have to sort of narrow in and say, "Okay, of all those potential possibilities, we're going to try to design this thing."

And then when you go to design that thing, again, you have to sort of open up to possibility and say, "Oh, what are all of the different ways we could design that thing?" And then you have to say, "Okay, we're going to actually focus on this version of that," and then you have to close back in on to that particular idea.

So what we find in doing this kind of work in policy environments is that there's this kind of continuous sequence of opening up and narrowing in, opening up and narrowing in. Members of the public are not going to come and write the legislation with you. So there's going to be a moment where you collect a bunch of information from people, and then it's the job of professionals to translate that into something which is the actual work product that you need to make.

But then there's another opportunity for opening up to say, "Okay, now that we have made this product, what do people think about it? What kind of feedback can we get on it?" And then again, a closing down to say, "Okay, now we're going to onboard all of that feedback and alter what we originally made." And that opening and closing is often what in a design context is also referred to as iteration — that you don't just make a thing once; you make a thing over and over in an effort to get it to its most functional version.

So I've long wondered if there were ways to bring in a sort of process of iteration into policy creation in a way which is more meaningful.

Anne Meeker: Yeah. And I think we are in this interesting moment for thinking about the future of policymaking and legislation, where there is an increasing awareness of some of these design techniques and best practices within Congress. But that comes back to questions about operational capacity — whether the institution can sustain carrying these practices out.

And a huge piece of this, I think, comes back to our imaginary caseworker who's trying to make this case and use this stuff — it really comes back to institutional memory. And I'd love your thoughts on this, because no matter how beautiful a job you do of tagging and analyzing all of this work and creating the final product, so much of this expertise still lives in the personal memory of the people who have done the research. So there's a huge possibility of loss when, again, caseworkers last four years on the job on average. If we're investing in qualitative data and we want it to play this big role in how we legislate, how can offices think about capturing and preserving it?

Chelsea Mauldin: Sure. Well, I have two almost diametrically opposed ways of thinking about this problem. One is that we build things like The People Say — we build repositories of qualitative data in which the data is tagged, the data is searchable, and the data can accumulate. So one of our intentions with this work is to continue to reengage the public: not just to expand who we have talked to, but also to return to people who we have talked to and say, "Hey, when we first spoke to you, you were 65 and you were having these experiences accessing Medicare. Now you're 70. How has that experience changed for you?" And really be able to learn from people over time.

So one way to deal with the institutional memory problem is to have better repositories of knowledge. I mean, this is the transformative thing about being humans, right? That we are able to actually write down the things that we know or collect the things that we know and be able to reference them at some later point. We don't have to solely rely on an oral tradition or just even the knowledge that I, as one living entity, can hold in my head. So I would push for trying to make the kind of qualitative data that's going to inform public policymaking, you know, public and open and reusable, and longitudinally building.

Then I think there's a different way of thinking about this, which is to say: despite all of our advanced knowledge systems, we remain human beings and like to learn and know things from other human beings. And that's how we get most of our professional training and professional capacity building — by being in jobs with other people who know how to do things. Most of the most useful things I have learned as a professional person have been from other professionals with whom I have worked. And so we should also just accept and embrace that — we all know stuff and we're going to learn stuff from our colleagues, and sometimes our colleagues are going to leave and they're going to go on to other roles. But if I have collected a bunch of knowledge and wisdom from them, I can use it. And then I can also pass it on to my new colleagues. And when I go on to some new role, they will have that knowledge and wisdom. So I think thinking quite consciously about how to design systems of knowledge transfer amongst people in institutions is actually super critical.

And one of the things we have learned is that insofar as we design system innovations, they're useless if we can't actually embed them in the brains of the people who are going to be operating those systems so that they actually internalize that knowledge and can use it, and also can transfer it to other humans.

Anne Meeker: Interesting. And that ties so much into some really big-picture questions that I know are starting to buzz around Capitol Hill right now with AI. Where is AI going, and what will its impact be on the legislative workforce? And as we're talking about qualitative data and frontline staff here — there genuinely is a hierarchy within congressional offices, where the people who do constituent-facing work are at the bottom of the totem pole, and the higher up you go, the less you talk to people.

So I think my worry — you know, our position at POPVOX Foundation is always that we'd love to automate the stuff that keeps people from talking to constituents — but there is a lot of interest out there in automating a lot of the front-facing work. And understandably, in some cases — that stuff is hard! When you are getting death threats on the phone, you are not really thinking about the qualitative data you are getting out of that interaction.

So I think this is just more of a cri de coeur than a question, but I would love to hear: how are you thinking about all of this?

Chelsea Mauldin: You know, we are thinking about whether we can use AI to be smarter humans. So, we have a focus particularly on working on projects that relate to various social safety net programs. Honestly, we don't need to ask anyone ever again, "Is it hard to apply for public benefits?" We have heard stories about that being hard enough times that that is no longer a question that anyone on my team needs to ask anyone.

So to take that idea to the next level, I think one of the things we are asking is: can we use AI-enabled tools to help us know what we already know? Can we train AI-enabled tools on corpuses of data that we have already collected, to say what is already known to us about this particular problem space? Now, can we spend our precious time in the field talking to people, trying to understand the things that we don't currently understand, trying to dig into aspects of people's lives or of a policy-enabled service that are not yet clearly defined?

So we are really looking at AI as a kind of copilot — as a tool that helps us be smarter humans as opposed to a tool that replaces what we do. So I guess that's the question for everyone who is doing constituent services: what can you use AI for that — to your point — takes other busywork off your plate or addresses things where the problem is already deeply known, so that you can spend your time on the more complex things? Spend your time on the things where the answer is not known, to be able to deliver value to constituents.
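
One concrete way to read that "know what we already know" idea: index the excerpts you have already collected, then check whether a candidate research question is already heavily covered in the corpus before spending field time on it. Below is a minimal sketch along those lines, assuming scikit-learn is installed; the toy corpus, the question, and the notion of "strong coverage" are hypothetical, not a description of the Public Policy Lab's actual tools.

```python
# Sketch: check whether a new research question is already well covered
# by previously collected excerpts, so field time goes to open questions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative stand-in for a corpus of prior interview excerpts.
corpus = [
    "Applying for benefits took months and the forms kept getting lost.",
    "The online portal logged me out before I could finish my application.",
    "I did not know which office to call about my Medicaid renewal.",
]

vectorizer = TfidfVectorizer(stop_words="english")
corpus_matrix = vectorizer.fit_transform(corpus)

question = "Is it hard to apply for public benefits?"
scores = cosine_similarity(vectorizer.transform([question]), corpus_matrix)[0]

# A high best-match score suggests the theme is already documented in
# the existing corpus; a low one flags a genuinely open question.
best = scores.argmax()
print(f"best match ({scores[best]:.2f}): {corpus[best]}")
```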

Anne Meeker: Right. And then, in an ideal world, that also has the impact of preserving a little bit of that institutional knowledge.

Chelsea Mauldin: Right.

Anne Meeker: It could, it could. Yeah. And all of this obviously raises design questions around how it's implemented and deployed in a congressional context.

Chelsea Mauldin: Well, again, let me just come back to this: we're all human beings. We enjoy interacting with other human beings. Sure, there are moments where if you can just type some information into a chat box and get exactly what you want out of it, fine. Super. That's efficient. But if you're actually trying to dig into stuff around the meaning and experience of your life, many people would much rather do that with other human beings.

And I don't think we should try to aim for a future in which I never talk to you, I just talk...

Anne Meeker: Such a sad future. This has been so delightful!

Chelsea Mauldin: ...and you don't ever talk to me. You just talk to a robot, you know? Don't we actually want to use our technological tools so that we get to be more human, not less human?

Anne Meeker: And the counterpart to that, I think, gets overlooked for congressional offices — we've definitely talked about this in Voice/Mail: as so many agencies and public services have become increasingly automated, I really want congressional offices to understand and take full account of how unique it is that they are one of the last places where constituents can go talk to a person in government — a connected person. With all the different roles Members play — ambassador to the national government, local dignitary when there are so few of those roles left — that human quality of a congressional office and of constituent engagement is truly unique. And from everything we've talked about today, it sounds like that humanness really does open up access to so much of this insight in a way that is not replicable anywhere else.

Chelsea Mauldin: Totally.

Anne Meeker: I'm always here to just cheerlead for congressional offices and how much the work that they do is really incredible.

Chelsea Mauldin: It's incredibly valuable. And I feel like — you know, I don't want to wade too deeply into political matters — but I think we can say sort of broadly that we see our political sphere becoming ever more polarized, and it feels as if there is less opportunity for bipartisan collaboration. I think that when you actually talk to people, you find that they're often quite sensible and reasonable and not ideologues.

And I expect that the folks who are in constituent services have stories to tell about that — can speak to the ways in which most people are just trying to live a decent life and be okay with their family, you know? So insofar as folks who are having that kind of interaction with the public can really speak to that and can really elevate the ways in which most Americans just want to live a good life and have everybody else live a good life, then that's a really powerful thing that they can do for our democracy.

Anne Meeker: So more qualitative interaction fixes democracy?

Chelsea Mauldin: Yeah, exactly. That's it.

Anne Meeker: You know, it seems like a recipe that's easy enough to follow. Chelsea, is there anything I haven't asked you that we should cover before we wrap up?

Chelsea Mauldin: I mean, here's the question that I would love to hear from everyone who watches this. And please just reach out to me and tell me the answer to this question.

My team — we have been doing this work for many, many years now, particularly with executive agencies, with federal, state and municipal agencies of all kinds. What we have not done is actually brought these kinds of human-centered research and design skills directly into legislatures to help legislative staff and electeds do policy creation. And I'd like to know how to do that. How can we help you with these sets of skills and capacities that we have to be able to make better policy for the public?

I literally am interested in: who do I call and how do we show up for you to be able to deliver this kind of meaning-making to your teams?

Anne Meeker: I cannot wait to hear about the responses you get to that question. Also, Chelsea, this has been genuinely so wonderful. I really appreciate this conversation.

Chelsea Mauldin: Absolutely.
