Rethinking Nonprofit Program Evaluation with Kayla Meyers

10/21/2025

People will say, ‘we don’t have an evaluation strategy’. I say, ‘I’m sure you do. You’re thinking all the time’. I ask, what data already exists… what are you measuring and what are the strengths of the data you are already collecting? Then how are we going to close some gaps?
— Kayla Meyers

In episode 134 of Nonprofit Mission: Impact, host Carol Hamilton welcomes guest Kayla Meyers, founder of Bridgepoint Evaluation, for a conversation about reimagining program evaluation in nonprofits. 

They discuss:

  • Why evaluation is not an audit or judgment, as it has sometimes been in the past, but a collaborative, curiosity-driven process that opens communication, supports learning, and improves impact.

  • How to create right-sized, useful evaluation practices rooted in strategy and driven by purpose. 

  • How evaluation can be a force for good—helping organizations tell fuller, more meaningful stories about their work and the communities they serve.

Episode highlights:

🚩 Evaluation’s Troubled History

[00:08:10]
The historical baggage of evaluation, rooted in U.S. government mandates and data extraction models. How traditional practices centered judgment and “expert objectivity.”

❌ From Punitive to Collaborative

[00:13:10]
How evaluations can feel like audits or fault-finding missions, especially when external evaluators are brought in. Instead, evaluation should be a tool for adapting protocols—not blaming people—when things don’t go to plan.

💡 Why Smaller Orgs Should Evaluate

[00:16:10]
Evaluation doesn’t need to be expensive or externally driven. Small organizations can benefit from thoughtful, strategic evaluation—especially when they’re preparing to expand or communicate impact to funders and stakeholders.

🛠️ Start with Strategy

[00:19:10]
Before collecting more data, nonprofits should start with an “evaluation strategy”: clarify what you want to learn, why it matters, and how you’ll use the insights. Focus only on data that supports decision-making to avoid overwhelm.

🔁 Formalizing Feedback Loops

[00:20:40]
How evaluation can formalize what many organizations are already doing informally—gathering stories, asking for feedback, and making improvements. Ways to turn those moments into consistent learning loops that inform programming.

🍽️ Make It Make Sense

[00:22:10]
How nonprofits can waste energy with isolated data collection efforts. With strategic evaluation questions, tools like surveys, focus groups, and interviews can be integrated into a meaningful, usable plan.

🗣️ Turning Data Into Stories

[00:26:10]
Kayla illustrates how she helped an organization show how telehealth improved access to mental health care in rural areas—through both quantitative reach and qualitative stories. Compelling, composite vignettes brought the human impact of the work to life.

❓The One Question Nonprofit Leaders Should Ask

[00:31:50]
The key question nonprofit leaders should ask when it comes to evaluation: “So what?” Every piece of data should connect to a decision, a story, or a strategy. That simple question invites deeper thinking, more strategic communication, and greater impact.

Guest Bio:

Kayla (Mueller) Meyers is a seasoned program evaluator with over a decade of experience in assessing and improving programs for nonprofits and public organizations. She specializes in evaluation capacity building and mixed-methods program evaluation, bringing a wealth of knowledge and expertise to her clients.

Important Links and Resources:

Kayla Meyers

Bridgepoint Evaluation

Bridgepoint Evaluation Newsletter

Humphrey School of Public Affairs

Related Episodes:

E108: Nonprofit Success: Using Logic Models to Showcase Impact and Improve Organizational Alignment

E76: Getting clear on your theory of change

E17: Program evaluation for nonprofit organizations

  • Carol Hamilton: My guest today on Nonprofit Mission: Impact is Kayla Meyers. Kayla and I talk about how we might reimagine program evaluation for nonprofits. I appreciated Kayla's acknowledgement of the problematic history of evaluation, a history that turns off a lot of folks in the nonprofit sector from a very valuable tool.

    The role of the outside evaluator, whose job it is, or was, to prove or disprove the validity of a program supported by government funds, has had a lot of negative impact. Because of the possibility of punitive outcomes from such an evaluation, nonprofit leaders have feared it with good cause.

    Certainly being mindful of how our tax dollars are spent is good in principle, yet the power dynamics between expert evaluators and the staff, and even more so between evaluators and those in the community who were served, are hard to get past. Yet an organization can make good use of evaluation to demonstrate the impact of its work with a much more bottom-up or collaborative approach.

    The goal is ongoing learning and improvement, not judging whether something is good or bad. And as Kayla points out, many folks have more of an evaluation strategy than they realize. Even if your organization does not currently have a formal evaluation process, you are thinking about your mission, your goals, and those you support all the time.

    You're paying attention to signs of whether your current programs are doing what you set out to do. You probably already have more information than you realize, and you're probably tracking a lot of things. And you have a concept or a hypothesis that if you deliver X, Y, Z programs, certain things will happen as a result.

    Oftentimes, working with a consultant on program design like Kayla or myself, the first step is documenting what you already know and have in progress, looking for the strengths and gaps to be addressed, and then mapping out a shared understanding of how your programs work and how they all contribute to your mission.

    A key piece that Kayla emphasizes is the importance of having an overall strategy guiding your evaluation efforts. Why are you gathering the information you're gathering? What are you learning from it? What is the bigger question you want to answer that will drive a more intentional effort?

    During my conversation with Kayla, we talked about why evaluation should not be an audit or judgment as it has been in the past; how it should be a collaborative and curiosity-driven process that opens communication, supports learning, and improves impact; how to create right-sized, useful evaluation practices rooted in strategy and driven by purpose; and how evaluation can be a force for good,

    helping organizations tell fuller, more meaningful stories about their work and the communities they serve. If improving your program evaluation processes is part of this year's or next year's work plan, this conversation will help you get started.

    Well, welcome Kayla. Welcome to Nonprofit Mission Impact.

    Kayla Meyers: Yeah. Thank you so much for having me. I'm so excited for this conversation.

    Carol: Yes, I've been looking forward to it as well. So I like to start every conversation with people's why: what motivates you to do the work? How did you come to doing the work that you do?

    Kayla: Yeah, absolutely. When I think about this question, what first comes to mind is like, oh, I have kind of just always been a bit of a pain in the butt with asking too many questions. Which is...

    Carol: it.

    Kayla: Right, which is the practice of evaluation. But truly, when I think about what brought me to this work, I have always been really curious. And it really kind of blew my mind when I was a teenager and found out that different people and communities and cultures have different ways of understanding the world and their place in it.

    I just dove into that so deeply, and really found evaluation, which is the practice of inquiry, like a really formalized inquiry, and was like, oh, wouldn't it be great for me to just lead a life of asking questions and helping people ask good questions. So yeah, I think really deeply understanding the ways in which people and programs interact and engage with one another, and thinking about how to formally open that communication channel, brought me to evaluation and where I am today.

    Carol: That makes sense. I love it. I don't know that my grandson is in this stage anymore, but for sure while he was three, "why" was the ongoing question. Why this, why that. And many times we didn't have the answers. The questions have changed, but the curiosity remains, and that's just awesome.

    Yeah.

    Kayla: Yeah.

    Carol: You talked about how you do program evaluation for nonprofits. What would you say are some of the big misconceptions about evaluation?

    Kayla: Yeah, I would say, and they're not even misconceptions, they're conceptions based in evaluation's history, which is a really troublesome one. And so I think all evaluators today have to kind of climb this hill to prove themselves as, you know, not being an audit, not being data extraction, not being this ivory tower judgment moment.

    And so I work with my clients a lot to help them and their teams, more so their teams and their communities, get on board with: let's do this really collaboratively, starting at the beginning. What do we want to know, why do we wanna know it, and how are we going to use that information? And then how can we be really transparent about some of our values and how they show up in our work?

    Like ongoing consent. So if someone says, I'll participate in your interview, you can really clearly tell them: at any moment you can call me and I will pull your information out of this report. You know, kind of righting some of those historical wrongs, truly. I think most often people see it as an audit.

    They think someone's gonna come in and judge their work externally, which I don't want to do. That's not part of being curious and learning. That's not my orientation. Or people just see it as a pain in the butt, which I totally understand. Right? They're like, Ugh, that's the reason I have 19 spreadsheets on my desktop.

    I don't wanna engage in this work. And so, being able to kind of flip the script on that is really important to me.

    Carol: I would love for you to flesh that out a little bit, just say a little bit more about the history and what's been problematic about evaluation. It's interesting because I've been in the nonprofit sector for almost 40 years now, and I've only been involved in one instance where we had an external evaluator. Every other time we were trying to build it from the inside. So those were two things that caught my ear, but I'd love to hear a little bit more about what that hill is and why.

    Kayla: Yeah, absolutely. And I'll say, the history that I'm going to give is about government-mandated evaluation in the United States,

    Carol: Okay.

    Kayla: because, you know, all cultures have these beautiful ways of ongoing learning and curiosity and inquiry, and so I'm not talking about those. But the United States really started to pursue evaluation in the education sector, to understand how and under what circumstances educational programs were working, how US government dollars were being invested, and what that was resulting in.

    And now we're seeing a different type of script around that exact same conversation. So it was truly rooted in this kind of justify-these-dollars motivation. And there have been various acts since then around, you know, when US dollars are funded, how are we going to measure their progress and their merit?

    So on one hand it's the motivation for the evaluation; on the other hand, it's how the data's been used. And I think there's this concept, and some may still debate this, of the evaluator as this objective observer, this expert with these expert methods.

    And so it's kind of been this data extraction of: you, user of the program, have an experience, but it's not validated until I validate it. So give me information, let me use my statistical tests, which only I have access to. You know, they're very expensive, or they're very convoluted and not shared.

    You know, that power isn't shared at all. And then I'm going to make a judgment call. That's not at all how I do this work; that's not at all what I wanna contribute to. So being able to acknowledge when evaluation practice or traditional evaluation design is going down that route and saying, nope, we're not going down that route.

    We're discontinuing that practice, at least at Bridgepoint we are. And so let's instead talk about how this is gonna show up differently in this world.

    Carol: Yeah, that makes so much sense. And I always bristle a little bit when people say, oh yes, I'm being objective. And I was like, nobody is. We all have our biases, we all have our blind spots. We can only see the world the way that we see it. It doesn't mean that our way of seeing the world doesn't have validity.

    It's just that that is all that we have. And there is no one above everybody, no person who magically has some other view; everybody has a point of view. And interestingly, I'm thinking about that instance when I did experience an organization, a system, being evaluated from the outside.

    I don't think their intention was to be punitive, but I definitely saw them seeing their role as highlighting what they thought was going well, but also definitely being critical as well. And it's not that people can't be critical, but because this was foundation funded, the whole report got deep-sixed.

    Kayla: Yeah.

    Carol: It wasn't all, you know, rosy wonderfulness. And I'm like, well, that was a lot of money that didn't have a lot of impact.

    Kayla: Right.

    Carol: I was like, wow, okay. This is really fascinating to watch happening.

    Kayla: Exactly. And I've been on a project right now where the whole time we've been saying, we have to be careful because this could be seen as being punitive, this could be seen as being an audit. And so instead, we know what's actually happening is leadership, these folks at really high levels, are kind of like, things aren't going according to plan.

    Can you come in and figure out what's happening? We're gonna explore that, but never with an eye to, alright, who's doing it wrong?

    Carol: Right.

    Kayla: We're not calling anyone out. Instead, even as we're reporting back, we're gathering information, we're understanding people's perspectives on the way this program's being implemented in the actual real world, not in the design stage, and we're saying: this is how it's being implemented.

    This is what we're hearing. It takes a really good evaluator, an evaluator who's trained in facilitation, to be able to work with the people in the room and say: they're not doing it wrong. For whatever reason, they're changing the way you think it needs to get done because they're applying it in context.

    And so it actually means the protocol's not working. It doesn't mean the people aren't working; the protocol's not working. How do we use evaluation to change the protocol to actually fit the real world? Something's not relevant or feasible, so let's change it.

    Carol: Right. I mean, when you're cooking up a program, it all sounds great on paper or on the whiteboard or the flip chart. And then it meets reality, and you've gotta be willing to make those shifts. And as you're saying, it could be that there's a disconnect between the people who were in the room when it all got mapped out on the whiteboard and the people who are actually implementing the program and dealing with the daily realities.

    Kayla: Exactly. That's exactly it.

    Carol: So on the flip side: smaller organizations don't have a lot of resources, but they have programs. They have anecdotal information that says folks love what we're doing; when folks come back to us, we see the impact we're having. For those kinds of organizations, what are some of the things they can do to get started, and why would they even want to get started?

    Without someone from the outside, a foundation or funder or whatever, telling them you need to do this, what would be the motivation to try to dig into this work?

    Kayla: Yeah, I love that. And this is why I created Bridgepoint Evaluation. Almost my entire career has been in external program evaluation consulting. I was internal to a nonprofit for a couple years, but even that was in a program evaluation management role. And I've seen this gap in traditional evaluation services of: okay, we just need this six-figure budget and this 18-month to three-year timeline.

    And then we'll do this evaluation. And every small organization's like, cool, I'll call you when I get that. So how do we...

    Carol: Which happens to be bigger than my entire organizational budget.

    Kayla: Right, exactly. And I'm like, yeah, that's not gonna work. So I really see evaluation as a learning opportunity, but also as a way of opening the communication lines between a program or an organization and the people it intends to serve, the users. When you create a nonprofit or you create a program, you're saying: I intentionally want to interact with people.

    My goal is that it has this output, or my goal is that it has this outcome. But I don't know how the people who are using it are experiencing that interaction. That's what evaluation is. It's opening this really formal communication line, and doing that with the intention of saying: here's the best way to hear back from folks.

    Here's the best way to hear back from a lot of folks, or a few folks, and be able to use that as an open line of communication, as a learning and engagement opportunity. And oftentimes the reason folks wanna do that is because they're looking to expand. So if you're doing something fantastic, great.

    Usually what you end up hearing is more users saying: can my friend access this? I know they don't meet the selection criteria, but how do we open up those doors to meet that? And with expansion comes things like talking to your donors and talking to the funders.

    And evaluation becomes really important there too, to say: we know our program's working, we know it's working for this really select group of people that we're funded to serve. With this contribution, we could open those doors even wider. And so when I work with small organizations, or just generally a smaller-resourced project,

    I always start off with a guiding strategy. What's going to guide this work? What kind of questions do you need to answer to be able to make decisions with the information? And being able to create that evaluation strategy really intentionally, which oftentimes is about bringing together all the things you're already doing, doing a couple of add-ons, and being really formal about how you're analyzing the data and reporting it.

    And then also being really intentional about the learning moments, right? Most organizations have a ton of data collection that they're doing, and then they find themselves trying to drink from a fire hose when they pull it together.

    So instead, being able to say: nope, we have a formal strategy. Now we know that data sources like these are going to answer this broader question, and we're gonna explore them on a quarterly basis. These others, our focus groups or interviews, are gonna be looked at every year. We're gonna bring those up in our annual report, but also at our routine annual retreats.

    And so just really getting intentional and getting formal is where I start with smaller organizations.

    Carol: So there were a couple things in there. One, from before, you were talking about opening lines of communication. But I also wanna have you talk a little bit more about this notion of creating a strategy before you dive in. When you say opening the lines of communication, can you say a little bit more about what you mean by that?

    Kayla: Yeah, absolutely. I think of it as: you're already doing it. You're already hearing back. Most organizations are doing things like pre- and post-tests or surveys, and it's your way of saying, is this program achieving what we hope it achieves? Are you satisfied with it? Are you gonna come back?

    Being able to get really intentional about what you want that communication to have. You know, it's not just about, are you satisfied; it's about, what's changed as a result of having experienced this, and how do we do that even better next time? So that's the communication line that we're trying to open up, to be able to get that feedback.

    And then also hear other people's goals and wishes for that program. If they think, you know what, this would be amazing if it could just take place at my child's school. Like, if this could be housed within the school, I would bring all my kids here, if it's a community health workshop or something.

    And so being able to get those ideas, and being able to see those opportunities that your community sees for you and your programs.

    Carol: And you talked about kind of setting a bigger strategy before you dive in. Can you say a little bit more about what that looks like and kind of again, the why behind it?

    Kayla: Yeah, absolutely. And this really comes out of my experience seeing what the mandates are for funding requirements: okay, we want the number of people reached, we want the number of times they were reached, we want all of these numbers, and we're gathering them all the time. Or I'll work with a nonprofit and say, oh, do you have any data about this?

    And they say, yeah, yeah, yeah, we do all these pre-post tests, let me send them to you. All of those are great strategies, but without an overarching guiding plan, they're just individual strategies. Right? It's like trying to make dinner with no recipe. You'll make it...

    Carol: You'd have to be an expert to do that. You can't be a beginner.

    Kayla: Yeah. It'll be something. It'll be something on the plate. But is it going to be, right? Is it gonna be, you know, satisfying and full and fulfilling? So instead saying: alright, when do we wanna do pre and post tests? How do we wanna do pre and post tests? When do we wanna do surveys? What do we wanna ask on those surveys? Or, when are you just interacting with the people that you're working with?

    And they're telling you these incredible stories, and you have a way to record that story and capture it and be able to use it when it comes to the annual report. Being able to first start off by saying: what are our evaluation questions? There are broadly six types of evaluation questions, and these are just the buckets, right?

    Like, this is to say: this type of question is going to get at the process by which your program's being implemented and whether it's working; this type of question is gonna get at outcomes. Are you achieving those outcomes? For whom are you achieving those outcomes, and under what circumstances?

    And then you have a whole other list: is this an appropriate program? Is this a relevant program? Is it cost-beneficial, like doing some cost-benefit analysis around there? So there are different types of evaluation questions. Being able to find the ones that provide you with the information that's going to be useful.

    I'm so focused on using it. How are you going to make a decision? How are you gonna guide your strategy? Who cares about this information? If you cannot answer those questions, you probably don't need to be asking it; it's just a fun fact. And so, being able to paint that picture of: when we know X, we will do Y.

    Here's how we're gonna know X, and here's how we're gonna go about asking that. And then being really intentional about, okay, it does make sense to do pre and post tests. Once a year, or twice a year, or before and after every workshop? I don't know; it depends on the context. But getting really intentional about: yep, we are going to answer X.

    That's all we're gonna focus on; all efforts are gonna point towards X. And then, when are we gonna have that conversation? Is it once a year? Are you gonna have quarterly check-ins? But again, coming back to the use of the information.

    Carol: Mm-hmm. Yeah, that makes sense. One of the things that you talked about when we've talked before was making use of that information. And I've worked with clients on this kind of project where, when we dig into it, they're actually collecting a lot of information already, but it's kind of going into some black hole that no one knows where it is. They're collecting it all. Maybe they do manage to do whatever's required, right? We have to report on this, this, and this, and so they scramble to pull that together. But for the most part, they're not using it for what you're talking about, for making decisions, for learning. And one of the things you also talk about is how to take all that information, all that data, and really build it into stories, telling the story of your organization.

    Kayla: Yeah,

    Carol: And I'm wondering if there are some examples you might be able to share of how you've helped organizations work through those kinds of issues?

    Kayla: Yeah, absolutely. And the thing is, people will say, oh, we don't have an evaluation strategy. I'm like, I'm sure you do. I'm sure there's some way.

    Carol: somewhere.

    Kayla: It is in there. You're thinking all the time. I bet you're thinking super hard. And so being able to say: what data already exists?

    So oftentimes I start off with just a general strategy assessment. I used to call it an audit and I was like, this is not helping my cause. We're really figuring out what are you measuring and what are the strengths of the data you're already collecting? And then where are we gonna close some gaps?

    And it's really a matter of answering the question. So an example of this: I worked with an organization that, as some of this COVID funding was sunsetting, knew that the funding for telehealth, specifically for mental health services, was under examination as part of that sunsetting initiative.

    And so the question was: how, or in what ways, does telehealth increase access to mental health resources in rural communities? There were a lot of ways to pull numbers together to tell that story, right? We looked at reach in terms of different geographic regions, and we could overlay that on a map of actual mental health providers who were brick and mortar in the community.

    We looked at, you know, the longevity with which people were accessing those services, so, were they able to keep coming back. All of those were fantastic, and we were able to report on, like, this number of people, for this long, in these communities. But it was really being able to do the qualitative, so a mixed-methods evaluation, pairing that quantitative with the qualitative to say: how does it actually show up?

    And in doing that, we were able to gather these scenarios and then paint this overarching story that isn't representative of any one person, but that many people probably see themselves inside. So one of the stories that we were able to put together is that of a teenage boy who is a member of a family that's new to the United States.

    They're new Americans, and he is the translator of the family, getting everything from Quechua or Spanish into English. Maybe you can find a Spanish-speaking therapist in some of these rural communities; probably not. But the fact of the matter is Spanish isn't even his family's first language; it's Quechua.

    And so even trying to do Spanish is a language barrier. So in this circumstance, we were able to take a teenage boy, who sure could get services in English, and instead match him with a therapist who is fluent in Quechua, who can talk to his parents and explain: here's what mental health therapy is.

    Maybe they haven't accessed mental health therapy before. Here's what it is, here's how we're gonna go about it, here's how I'm gonna make sure I can communicate with you, parents, and make sure that your son is safe and getting the care that he needs. And really just creating this relationship from the onset in everyone's first language, and then being able to fulfill that mission of increasing access to mental health care.

    Another one of the vignettes that we were able to paint with this story was for folks who were experiencing really intense mental health symptoms, for whom getting into a brick-and-mortar mental health office was a barrier, one that was not going to work for them. And so instead, being able to go off video and just call someone from your bed,

    and kind of try to get to a place where you can fulfill those activities of daily living, and then get into that brick-and-mortar office, is a scenario that is otherwise impossible without telehealth. So for me, it's really about being able to put together a story of breadth and depth, right?

    The quantitative tells this story of breadth: here's how many people, across which geographies, for how long. But then depth: here's what it actually looks like, here's what each of those numbers might represent. Which is really impactful, and a beautiful story.

    Carol, I lost your audio.

    Carol: Oh, I cut off because there's a train going by and it was blowing its horn, so I had muted myself. See, I told you it would happen.

    I love how you were talking about combining what you might be required to fulfill through all of that funding, how many people have you reached, where have you reached them, but then being able to humanize it some more with those stories. And maybe those stories were, I would guess, some of them, kind of a synthesis or an avatar or a compilation of different experiences that you can then weave into one vignette, a for-instance of how this makes a difference in the real lives of people.

    Kayla: Right, right. 'Cause there's so much more than a data point. It's a beautiful exchange and a beautiful relationship between a person and the programs they're accessing. And so how do we paint the full picture of that?

    Carol: Right. And you know, I think it's always helpful to have both, right?

    Kayla: Yeah.

    Carol: Each tells a different story, gives you different information, and gives you different context to make decisions or to convince others to continue something. As we close out, what's one question that you wish more nonprofit leaders would ask themselves to think more strategically about evaluation?

    Kayla: It's really the "so what?" It's truly it. I feel like most of my job is me saying, so what? And so you can cut me out and just ask that yourself. But really explore why you wanna know something, or why something's being asked of you, so that you can really respond to it.

    Right? So being brave and bold in those conversations where someone says, I would love to give X amount of dollars so that you can expand services to a thousand people, and kind of saying: why? Being able to put those pieces together, and not necessarily in a defensive way, but just saying, what are you excited about?

    What change do you believe that we can make? And why a thousand? What are you looking for? What do those thousand people represent to you? And being able to bring that back into your learning strategy: what do we want to know, and for what reason? What's going to guide our decision-making or guide the way we communicate about our work?

    We're not just handing out meals after school so that, you know, families have fresh food when they get home. We're creating shared meals in a family structure, and we know that shared meals are linked to increased mental health outcomes, increased academic resources.

    Being able to really get to that "so what" of your work in those conversations with funders, in your conversations with your staff, and in your conversations with the community, and being able to then say: okay, how are we gonna measure that? How are we gonna really try and pull that thread apart so that we can fully understand where we are being really strategically impactful?

    Carol: Absolutely. Well, thank you so much. I definitely learned a lot today, and I always appreciate when I get to learn through a conversation; it's my favorite. So thanks so much for coming on and sharing your expertise and wisdom.

    Kayla: Yeah. Thank you so much for having me, Carol. I love listening to this podcast.

    Carol: All right.
