Making a Difference: M&E Of Policy Research
Overseas Development Institute (ODI)
This 66-page paper aims to advance understanding of how to monitor and evaluate policy research - that is, research undertaken to inform and influence public policy. The author defines policy broadly, encompassing both policy decisions and policy processes, including implementation. Each monitoring and evaluation (M&E) approach is represented diagrammatically.
From the Executive Summary:
"Conventional academic research is usually evaluated using two approaches: academic peer review and number of citations in peer-reviewed publications. For policy research programmes, these evaluation tools have proven too limited. They are not well suited to capture some of the broader aims of policy research, such as policy impact, changes in behaviour, or building of relationships. In short, policy research programmes need new monitoring and evaluation (M&E) approaches in order to know whether they are making a difference, not only in the academic world but also in the world outside academia.
The paper is written with research programmes and institutions in mind, rather than individual researchers. It presents examples and approaches on how to do M&E of policy research from the current experience of a range of research institutes, think tanks and funding bodies. The approaches have been divided into the following five key performance areas: (i) Strategy and direction; (ii) Management; (iii) Outputs; (iv) Uptake; and (v) Outcomes and impacts. Research programmes or institutes may wish to focus on only one of these areas, or may combine approaches across the areas to form a more comprehensive M&E plan.
This paper has five sections. Section 1 is a short introduction. Section 2 provides a survey of a range of possible new M&E approaches taken from the current experience of policy research projects, programmes and institutions. These are:
Performance Area I – Evaluating strategy and direction: Logframes; Social Network Analysis; Impact Pathways; Modular Matrices.
Performance Area II – Evaluating management: ‘Fit for Purpose’ Reviews; ‘Lighter Touch’ Quality Audits; Horizontal Evaluation; Appreciative Inquiry.
Performance Area III – Evaluating outputs: Evaluating academic articles and research reports; Evaluating policy and briefing papers; Evaluating websites; Evaluating networks; After Action Reviews.
Performance Area IV – Evaluating uptake: Impact Logs; New Areas for Citation Analysis; User Surveys.
Performance Area V – Evaluating outcomes and impacts: Outcome Mapping; RAPID Outcome Assessment; Most Significant Change; Innovation Histories; Episode Studies.
Section 2 also presents notes on institutions that have begun developing new models in the area of M&E of policy research. ... Further details on institutional evaluations are given in Appendix 1.
Section 3 then highlights a few additional concerns to bear in mind when evaluating entire institutions (rather than individual projects or programmes), and Section 4 concludes by presenting best practice checklists on how to design an M&E approach for a policy research project, programme, or institution."
The conclusions section offers best practice guidance, including suggestions for combining approaches and a five-step strategy for choosing among them:
- Choose a decentralised or centralised focus.
- Decide on a mixture of self-assessment and external evaluation.
- Note whether quality and uptake of outputs are monitored regularly.
- Capture impacts.
- Use the evaluation to draw up a revised strategy for the next phase.
It also includes a best practice checklist for evaluating someone else's project, programme, or institution.
This paper is available in PDF format through the link below, or hard copies may be ordered for a UK£10.00 fee through the RAPID programme website.
Email from Ingie Hovland to The Communication Initiative on July 26, 2007.