What's Different About Evaluating Advocacy and Policy Change?

Affiliation

Harvard Family Research Project

Summary

In this article from The Evaluation Exchange, author Julia Coffman offers four recommendations for advocacy and policy evaluators, reflecting the ways in which advocacy work differs from programmes and direct services. The differences she highlights relate to time frames, the pace of the work, and organisational size. Advocacy strategy evolves over time, and activities and desired outcomes can shift quickly because advocacy efforts are subject to unpredictable external variables. In addition, advocacy organisations are usually small in both size and capacity.


Her four recommendations are:

  1. Use “real time” feedback, meaning report regularly rather than only at the evaluation's conclusion. The purpose of real-time reporting is to position the evaluation to inform ongoing decisions and strategy. The reasoning is that advocates need timely answers to the strategic questions they regularly face, so evaluators may need to reserve part of their evaluation design for “rapid-response research.”
  2. Give “interim” outcomes the respect they deserve by assessing advocacy on more than just its impact on policy change. For example, advocates might build coalitions with other organisations or develop a network of community-based advocates who become active spokespersons; the author points out that these interim outcomes can be as important as the policy change itself. "Assessing a range of outcomes ensures that the evaluation does not unfairly conclude that the whole advocacy effort was a failure if the policy [or policy change] was not achieved."
  3. Design evaluations that advocates actually want to use and that produce user-friendly results. The author emphasises a "clean and simple interface" offering users "what they want when they want it..." This can be facilitated by helping advocates focus on what they want to evaluate (as opposed to their whole organisation or programme) and on how to track those data internally to inform their work.
  4. Be creative and forward looking. First, advocacy tactics are constantly changing and evolving. For example, advocates are becoming increasingly sophisticated in their use of electronic advocacy, so evaluators need to stay current on such techniques in order to evaluate them. Second, as advocacy tactics evolve, evaluators must make sure that the measures used to assess them remain meaningful. Coffman questions, for example, whether tallying the number of emails sent to a policy maker on an issue is a significant indicator when those emails may never get through or be read and counted by the policy maker.


In summary, the author reiterates that advocacy differs from direct services and other programmes and that evaluators may therefore need to make adjustments for evaluations to be relevant and useful within the advocacy and policy context.

Source

Email from Marcella Michaud of the Harvard Family Research Project to The Communication Initiative on March 20 2007; and The Evaluation Exchange, Volume XIII, No. 1.