Are We Nearly There Yet? Leading a Journey Powered by Evidence
When I was a kid my parents would load my four siblings and me in the back of a truck and drive us all over southern Africa on camping holidays I really didn’t appreciate. On those long, dusty journeys we would press our faces against the window to the cab and moan pitifully “are we nearly there yet…?”
During my 30-odd year career as a Monitoring and Evaluation (M&E) professional, I have often asked myself the same question.
We know that evaluation and monitoring should be at the heart of any debate on what works. Robust M&E systems are essential for governments and institutions. They help them design effective policies, support decision-making, and create a clear sense of what progress is being achieved. Yet, many countries still lack effective M&E systems, making evidence-informed decision-making difficult.
M&E isn’t new. After decades of effort, we need to ask: WHY aren’t we nearly there yet?
A Challenging Road Ahead
There are a variety of reasons why many country governments lack effective M&E systems. One of the core reasons is the history of M&E and its relationship to country contexts. I remember many instances when the monitoring and evaluation work I was doing served mainly to satisfy donor requirements. Donors want to make sure that their financial contributions are being well managed. This is completely appropriate, but it has had an undesirable side effect. In many countries, M&E systems do not integrate with the rhythms and cultures of public institutions in a way that makes them useful to government decision-makers. They often feel like burdensome policing functions rather than systems that provide information for learning, improvement, better policy design and better results. In addition, they are often project- or donor-specific, resulting in systems that are uncoordinated and siloed from each other.
An even more problematic issue emerges from inappropriately designed and siloed M&E systems. Important policymakers, such as those responsible for national budget allocations, often dismiss evidence produced through M&E systems as unreliable. They prefer to draw on their own personal sense of what is effective and what isn't. Although this is understandable given the lack of trust in M&E systems, it creates a culture where data and evidence are viewed as unnecessary for policymaking. It also fosters a lack of transparency and creates the potential for corruption.
Another obstacle relates to the supply of M&E skills. In many countries, there are not enough evaluators who fully understand the local context or who appreciate the culture and history of the country. In addition, many evaluators who can appreciate context may lack the skills necessary to evaluate at the level of complexity that exists in many countries. An added challenge is that even qualified and experienced evaluators often do not get fair access to M&E work opportunities, because of how many donor and government procurement systems are structured.
Mapping Our Way Forward
Addressing these challenges is the reason why the GEI has been established. And unlike my childhood camping trips, the GEI is not embarking on a single journey. We are a global partnership with many travelers and diverse collective caravans, each operating in their own contexts and with their own approaches. However, our differences are our biggest asset, since we each bring a piece of what is needed, and we all share a vision of better evidence leading to better policies and, ultimately, to better lives. The Global Evaluation Initiative was established to finally get us “there.”
By bringing together a wide range of stakeholders, the GEI will link partner countries with global sources of technical skills and financing, pursuing four key objectives:
Strengthen the enabling regulatory environment and develop the M&E systems and capabilities of governmental institutions. Through its network of partners, GEI will strengthen the public systems that need to be in place for M&E to be implemented at the national level. This effort will involve a range of strategic activities including supporting governments with diagnostics to identify existing M&E capacities and mapping out opportunities for strengthening those capacities. Our efforts will also include policy advisory services that will tailor the best solutions to local contexts. By harnessing the on-ground experience, knowledge and relationships of our network of experts, GEI will help to construct approaches that are reflective of local needs and that provide the kind of evidence that will guide and inform decision-makers.
Support the development of individual skills to create a cadre of professional evaluators, M&E specialists, and other evaluation stakeholders. The GEI partnership will facilitate the delivery of training and development programs that target a variety of key stakeholders involved in implementing effective M&E systems in public institutions. We will expand the reach of existing programs being delivered by our network partners and conduct needs assessments that will help us deliver the most necessary training support. To make these programs more accessible, GEI will offer financial support to participants, where feasible.
Generate tailored M&E knowledge relevant to local contexts. Through curating and analyzing the knowledge of our network of local and international experts, GEI will be able to compile best practices that can then be adapted to better suit individual country contexts. For instance, the Monitoring and Evaluation Systems Analysis (MESA) tool will help GEI gather, structure, and analyze information on existing M&E country systems and identify successes to be shared with other governments. In addition, our knowledge platform will serve as a clearinghouse of information on M&E programs and stakeholders around the world that are involved in influencing and supporting government institutions.
Promote the sharing of M&E knowledge locally, regionally, and globally. GEI will create and support opportunities for convening and engaging around the most important M&E topics. For example, GEI will continue to curate the annual gLOCAL Evaluation Week, in which organizations across the globe host M&E knowledge-sharing events. GEI will also promote the sharing of knowledge among its consortium of partners to amplify each partner’s voice and will facilitate efforts to extend the conversation to audiences not always targeted for M&E knowledge exchange.
Let’s Journey Together
Developing a critical mass of knowledge, resources, and technical skills to close the M&E gap will require sustained efforts, at scale, across every region of the world. Doing this as quickly as possible can only be done through collaboration and cooperation amongst a wide range of actors.
If you would like to contribute your knowledge to this blog, we would be happy to work with you - please contact us at email@example.com.
You can also sign up for our newsletter here.
Stay in touch and let’s travel together: we have far to go.
Photo: Curt Carnemark / World Bank
Dear Dugan Ian Fraser and colleagues,
Thank you for initiating this important discussion; I fully concur with the list of challenges provided. I hope this discussion will help raise greater awareness of the need to invest in M&E capacities and resources at the local level, so that these are employed to produce the essential evidence and data that inform effective policies and practices for more sustainable development initiatives. I am grateful for the opportunity to offer a few of my viewpoints in this blog for further discussion.
The 2030 Agenda and its ambitious yet vitally important goals of achieving sustainable and inclusive growth require integrated, multi-sectoral approaches that coherently multiply the expected benefits, while not jeopardizing those already achieved or creating new risks or unintended impacts. In advancing this agenda, decision-makers need comprehensive and robust measurement systems that enable them to measure the results of sustainable development initiatives and to understand what works in supporting the integrated nature of the Sustainable Development Goals and their inherent interconnections and linkages.
Robust monitoring and evaluation systems should serve as essential tools for measuring achievement of progress across multiple and interconnected sectors and for helping key actors in understanding intended and unintended impacts of various policies, strategies and programmes, inherent tradeoffs and potential for synergies and effective partnerships.
It has never been easy to create, develop and sustain monitoring and evaluation systems at the national and sub-national levels, for various reasons: stakeholders' interests and their levels of commitment to results-based decision-making, the level of local M&E-related knowledge and capacity, and the availability of resources for monitoring and evaluation.
The challenges of building and sustaining local M&E capacities at present, in the context of the 2030 Agenda, are even greater.
Lack of system-wide M&E approaches
The anticipated impact of multi-disciplinary development initiatives and approaches implies a wide range of economic, social and environmental benefits and trade-offs across multiple sectors. The interconnectedness of the Sustainable Development Goals (SDGs) requires evaluation to shift its focus from assessing progress within the boundaries of an individual UN agency toward assessing the development impact of whole-of-UN support delivered jointly with development actors. Yet the current set-up and scope of monitoring and evaluation functions are not configured for these new, more coherent approaches to measuring complex performance. In an interconnected world, customary monitoring and evaluation approaches become obsolete, because they continue to assess what works in the context of individual policies, projects, initiatives or strategies rather than measuring "system-wide" performance and coherence. Individual institutions and agencies with an M&E or research mandate naturally focus on the thematic areas of perceived "comparative advantage" that shape their mandates and missions. The quality of these analyses and measurement practices is a major challenge, as evaluations, research studies and assessments are often carried out in a traditional, ad hoc manner and are rarely based on highly participatory exercises.
Institutional and legal frameworks
Institutional or procedural factors, such as absence of well-established organizational mandate and institutional platforms for M&E activities, continue to affect the quality and availability of M&E systems.
Length of evaluation processes
The M&E function needs to be responsive to the fast-changing global development context, delivering evaluative evidence in support of innovative and durable solutions for sustainable development. Yet traditional evaluation approaches prevail, taking significant time to plan and deliver. Long evaluation processes reduce the potential for real-time assessment of achievements and challenges to inform immediate programme adjustments. By the time an evaluation is completed, the development initiatives, programmes and projects concerned have moved into their next phases, which in most cases are not informed by the lessons learned from the evaluation. At the same time, the systematic and effective use of evaluative evidence for decision-making is low. It does not match the level of effort that central evaluation units have directed at establishing evaluation mechanisms and systems to enhance the use of evaluation. Limited use of evaluation prevents improved, evidence-based decision-making and the improvement of practice, and thus limits the value of the function.
Capacities and skills
The quality and sophistication of monitoring and evaluation systems, methodological approaches and processes vary significantly across the globe. Institutions with limited budgets lack proper quality assurance mechanisms, relying merely on the chance of receiving quality evaluation products from practitioners and consultants hired on a short-term basis. The sustainability of such systems is questionable and creates an additional challenge, as comprehensive and robust M&E systems depend on a continued presence and on cumulative knowledge and evidence generated periodically and systematically.
The professionalism and skills of M&E practitioners vary across countries and institutions, as there is an apparent diversity of methods and approaches used by M&E institutions and national offices, amid the limited availability of professional certification programmes.
Senior Evaluation Officer (FAO)
For the same reasons that countries struggle with evidence-informed policymaking. There are often technical constraints, but the primary reasons are the political economy of incentives, coupled with leadership's demand for, and ability to use, performance feedback. Evaluation has not been successfully institutionalised in most countries. Instead, the evaluation function relies on champions who come and go, and attention to evaluation waxes and wanes. Can you imagine if this were the case with public sector auditing?
Thanks for sharing these insightful thoughts. I would be interested in your views on how to develop demand for data and evidence gathered from M&E activities when this information may either directly contradict the agenda of government leaders or may not support the policies those leaders propound. In my experience, uptake and advocacy of M&E activities are often strongly correlated with their ability to justify and amplify pre-existing hypotheses.