Documenting the impact of human rights advocacy work can be difficult. Traditional strategies for tracking and evaluating impact are often ill-suited to a field where change can be hard to measure. Still, better understanding how and why organizations succeed can help inform and strengthen strategy.
Traditional programmatic evaluation falls short of addressing the unique challenges of assessing advocacy initiatives; the field of advocacy evaluation emerged in response to this gap. Despite meaningful growth in the field, few organizations have the capacity or an appropriate strategy to evaluate their work: one survey of more than 200 advocacy organizations in North America revealed that only a quarter had undertaken evaluation. Most organizations lack written plans for their advocacy work, much less evaluation plans.
The importance of monitoring and evaluation (M&E) in human rights work cannot be overstated. Through evaluation, organizations are better able to sustain or grow their work, and can more readily attract support from those willing to invest in human rights advocacy.
This month’s online conversation features nine human rights advocates and experts in advocacy evaluation. A summary of the interviews, along with an audio recording of the conversation, appears below.
Our conversation leaders:
Kirsten Anderson
Program Evaluation Advisor, Center for Victims of Torture

Hardy Merriman
President, International Center on Nonviolent Conflict (ICNC)
Washington, D.C., U.S.A.

Kathy Bonnifield
Program Officer, Piper Fund at Proteus Fund

Zehra Mirza
Senior Manager of Impact and Learning, Amnesty International USA

Darine El Hage
Human Rights Lawyer, Freelance Consultant
Riyadh, Saudi Arabia

Rhonda Schlangen
Independent Evaluation Consultant

Jennifer Esala
Research Associate, Center for Victims of Torture

Andrew Wells-Dang
Advocacy Monitoring, Evaluation and Learning Manager, CARE USA
Washington, D.C., U.S.A.

George Ghalis
Former Executive Director, ALEF
Why is M&E important for advocacy work?
“Through participating in monitoring and evaluation, activists gain valuable information about what tactics and approaches are most effective. They also learn what things to look out for in their context when planning and implementing a campaign,” explains Kirsten Anderson, Program Evaluation Advisor at the Center for Victims of Torture. “Without this valuable information, advocacy campaigns run the risk of repeating ineffective actions, or getting stuck in the planning phase of a campaign.”
Andrew Wells-Dang of CARE USA feels that sharing the impact of advocacy work is important for organizations. “We need to be able to tell the story of how the advocacy we are doing is benefitting people… There’s a growing body of evidence that shows that the strategic choices of people within [nonviolent] movements have a significant impact on their outcomes.”
Seeking knowledge and assessing impact are imperative to building and sustaining movements. Hardy Merriman, President of the International Center on Nonviolent Conflict (ICNC), notes that advocates don’t have much infrastructure to tap into when developing these practices.
Why is it so hard to monitor and evaluate work in this field?
Rhonda Schlangen, an Independent Evaluation Consultant, reflects on a key hurdle for advocacy M&E:
“One of the sessions I helped organize at the recent American Evaluation Association meeting was focused on collecting ideas from evaluators about the recommendations that we keep making in our evaluations over and over again… those recommendations are in response to some of the challenges that programs and projects keep coming up against… some of these must be preventable. One of the universal mistakes that we noted, or things that programs keep stumbling into, is a lack of planning.”
Another barrier that human rights advocates face in conducting M&E is that the impact of advocacy work is rarely seen overnight; meaningful change takes time. George Ghalis, former Executive Director of ALEF, spoke of how effecting policy change took nearly a decade:
“The advocacy started in 2008… but the law was actually issued in parliament in 2018. That’s almost 10 years of advocacy with multiple staff changes, multiple focus, different resources allocated to this advocacy; it was always difficult to really see how to continue the evaluation.”
Andrew Wells-Dang notes that impact doesn’t often align with imposed reporting cycles:
“It’s a challenge for the way that we and other development organizations work, which is in project cycles. A typical project [at CARE USA] is 3 to 5 years… sometimes [the outcome] takes longer than that. There are unlikely to be impacts that we can measure.”
Additionally, advocacy groups often work in coalitions, or in tandem with other human rights actors, which makes it difficult to measure any single organization’s impact. As George Ghalis notes: “I have to recognize that in the beginning… we had a lot of challenges putting indicators in place, because it was always difficult for us to really measure what was our contribution, or the contribution of our coalition, in creating the change that was needed.” Andrew Wells-Dang echoes this problem in conducting M&E within CARE USA: “[We aren’t] solely responsible for any of this… it’s what we’re doing together with partners.”
Complicating M&E even further is the fact that human rights work rarely happens in a vacuum. “The complexity to which we try to adapt and develop best practices… is also related to the political environment,” explains Darine El Hage, Human Rights Lawyer and Freelance Consultant based in Saudi Arabia. “When we work on sensitive topics like torture prevention or arbitrary detention… the security situation affects the way we monitor… the way we access the information and the way we report on it.” The mechanisms and indicators through which advocates conduct M&E can be difficult to develop.
Advocacy work often lacks concrete indicators for measurement. Kathy Bonnifield, Program Officer for the Piper Fund at Proteus Fund, stresses that “program evaluation [for advocacy work] is different than social service work. It isn’t about the number of people that are served, but it’s often about how people’s mindsets are changing or if there are advancements in bills that are proposed.” There aren’t always clear ways to measure impact.
What indicators do these advocates use to most effectively measure and evaluate their work?
Conversation leaders agree that M&E indicators vary based on the type of work a team is engaged in. At CARE USA, Andrew Wells-Dang focuses organizational evaluation on the output, outcome, and impact levels. In addition to a robust set of indicators, assessment of human rights advocacy work must remain flexible. Zehra Mirza, Senior Manager of Impact and Learning at Amnesty International USA, notes that M&E indicators “will change on a constant basis… for each quarter we have [staff] decide their actors, and we have them decide their outcomes. I think the outcomes are placed in a way that gives them a lot of flexibility.”
Balancing quantitative and qualitative measures is also an important part of advocacy M&E. “Although the quantitative information has its own value, for certain, I think it really needs to be paired well with a qualitative context,” Zehra Mirza says. “Quantitative indicators only tell us so much… numbers can be very limiting if we don’t know ‘why.’” Andrew Wells-Dang utilizes both in his work at CARE USA: “The outcome level is often qualitative… but the impact numbers we’re really looking for are quantitative.” Hardy Merriman also recognizes the importance of blending data: “We quantify some responses, and we look at people’s comments as well. Both are very important.”
In many cases, advocacy groups develop M&E strategies and indicators around externally imposed project deliverables. The problem with this approach, George Ghalis explains, comes “after the project cycle ends but the campaign persists, where there’s no more framework to evaluate what’s happening.” Often, grant reporting is the basis on which advocacy groups first monitor their work; strategies for internal M&E, and the resources to develop them, are frequently scarce. “Even when we became aware of the importance of measuring our work,” says Darine El Hage, “it was rarely prioritized [by] the donor, grant, or resources to allocate money for M&E.”
Rhonda Schlangen suggests interrogating the purpose of indicators before tackling M&E:
“When posed with a question about what indicators should we use, I might ask ‘why?’ What’s the purpose of the indicator? We have lots of different types of indicators; when you unpack it, are you looking at indicators to tell whether or not you’ve delivered on the terms of a grant? Are you looking for key performance indicators? … Are you looking for indicators that tell you whether or not you’ve reached results and outcomes? … Are you looking for some kind of dashboards to signal progress?”
So, perhaps the most important question to ask ourselves as advocates is: how can we do better?
“A fundamental starting point for me for thinking about evaluation for any program, but particularly evaluation for advocacy, is ‘how can the evaluation be best positioned and designed so that it is in service to more effective advocacy and social change work?’” Rhonda Schlangen shares. Jennifer Esala, Research Associate at the Center for Victims of Torture, agrees: “Advocacy evaluation should directly inform, support, and advance the work of activists.”
Andrew Wells-Dang suggests that organizations prioritize M&E work, regardless of project cycles or funding requirements. “There’s an idea which perhaps comes out of this project and donor system that monitoring and evaluation is something separate from programming, it’s something that we do because the donor comes in and requires it or because there’s an evaluation happening at the end of the project, and then some other group of people—evaluators or specialist staff—need to come in and count things… I think some of the change in approach that we have been spreading is to see monitoring and evaluation as much more of an ongoing part of everyone’s work. Reflecting and learning is part of a cycle of action that starts with planning, then implementation, then reflection and learning.”
Zehra Mirza agrees that it is helpful to engage staff in the M&E process. She is working to equip and encourage Amnesty’s staff to conduct data collection on their own, and has helped them develop focus group and interview guides to support independent qualitative analysis within programming. She also stresses the importance of frequent and strategic reporting:
“I'm trying to change [reporting] so that it happens on a quarterly basis, [including] 'after action reviews,' and we're evaluating what our progress has been in terms of outcome mapping framework… I think the reason I chose it to be quarterly board reports was because staff right now are not incentivized to be doing a lot of this MEL work. So, I embedded it into current reporting practices, and because they already report to the board on a quarterly basis it was better for me to integrate it into an existing practice.”
Kathy Bonnifield also reflects on the importance of finding appropriate times to engage with the groups that Proteus Fund supports. “Over time, the biggest thing I’ve learned is that having these regular points of conversation [is important], but also understanding that there are certain times of year that I should take a step back and let them work.”
Darine El Hage suggests drawing on measurement strategies set by external evaluators in the early stages of developing internal M&E. “Within ALEF, we tried to develop sustainable tools and capacity building cycles within the team to actually build this function in a more sustainable way,” she shares. “The M&E [tools] developed by external evaluators were kept internally.”
Still, indicators must be routinely reassessed to better serve advocacy work. Rhonda Schlangen feels that M&E should focus on “the questions that are really of interest… what are the outcomes, what changes are happening as a result of our advocacy work?” This line of thinking is becoming more common among M&E specialists in the field, and has led to a shift from logic models to theories of change.
Advocates must ask themselves the tough questions, Rhonda Schlangen suggests. “Where are the leaps of logic we’re making here, in terms of thinking that if we do X, Y is going to happen? Are we driving to remove some of the barriers to the problem that we’re trying to address? Are we doing this in the most effective way? Who’s not in the room, who’s not at the table, whose voices need to be heard? … Having a thoughtful process from the start is more likely going to give you a more robust basis from which to then assess your progress and assumptions.”
CVT-New Tactics has developed a Human Rights Advocacy Evaluation toolkit with additional tips for conducting effective M&E. Jennifer Esala and Kirsten Anderson were key CVT staff members engaged in developing these materials: “Included are tools to decide on outcomes and success markers for a campaign, to document movement of an advocacy target’s position on your issue, and to set up evaluation looking at the principles of your campaign. It also includes a way to assess your team’s advocacy evaluation capacity, with suggestions of where to go next to build your capacity.”
More information can be found at: