Improving the M&E Narrative and Other Opportunities for 2022: Interview with Gonzalo Hernández Licona
Gonzalo Hernández Licona is the Director of the Multidimensional Poverty Peer Network and a frequent consultant to multilateral organizations like the World Bank Group and the UN.
Hernández Licona founded the National Council for the Evaluation of Social Development Policy (CONEVAL), an independent public institution whose main objective is to evaluate social programs and measure poverty in Mexico. Prior to leading CONEVAL, Hernández Licona served as the General Director of Evaluation and Social Program Monitoring for Mexico's social development ministry (SEDESOL).
He was one of 15 experts in charge of developing the 2019 Global Sustainable Development Report. He holds a PhD in Economics from the University of Oxford.
From your perspective, has there been progress in the use of monitoring and evaluation for public policy development? And if so, how much?
Compared with 25 or 30 years ago, there is much more M&E practiced today, more interest in the subject, and more institutionalization. Thirty years ago, very few countries had M&E clearly incorporated into their planning processes. Today we see many countries conducting evaluations within their planning and budgeting processes, and even in the presidential office.
What are the most important challenges today?
The pandemic aside, we should be looking to make progress on our methodologies and approaches. In addition, there should also be more focus on combining evaluation with monitoring in the analysis of public policy. I think that sometimes evaluation and monitoring end up in separate corners and don’t speak to each other. Most importantly, I think what needs to be addressed is communication around M&E. Public policy evaluators need to improve their narratives - the “selling” of their products. Evaluators are usually not good salespeople. The more academic and detailed we are, the more complicated it is for the general public to understand our work.
So how can the narrative be improved?
Evaluators, who often have a master's or doctoral degree, sometimes think that doing good technical work will be enough for it to be considered when public policy is being made. But that doesn't work. Developing public policy is much more complicated than just reading an analysis paper and applying it to a policy or program. One of the most important roles for the evaluator is to figure out the best way to convince people of the benefits of a given evaluation process or system. Don't be pedantic – be curious and humble – that will help with the narrative. It may sometimes even be worth contracting third parties who know the best ways to advocate for a particular M&E approach to the public.
Government officials often face resistance within their own governments when implementing M&E efforts. What advice would you give to these public servants?
It’s very easy to find resistance in governments because evaluation is sometimes seen as a judgment, when, in fact, the most important point of evaluation is ongoing improvement. That’s what opens up possibilities - seeing what we are doing well and what we can improve. To overcome resistance, it’s necessary to work on convincing people, especially in government, that the evaluations will be tools for improving policies. I would also recommend starting the evaluation task with ministries and secretariats that are more open to the concept and that are going to cooperate better. If these efforts are successful, other ministries and secretariats will more willingly participate. As I mentioned before, the issue of the narrative comes up here as well – that is, being able to persuade, to show that evaluations are meant to help, and using accessible examples rather than equations or highly sophisticated models to communicate. That’s how to make progress.
So, can you talk a bit more about why the narrative is so important?
Every day we are subjected to an increasing bombardment of information coming to us via cell phone, television, radio, and newspapers. We are constantly making decisions and looking for information to help us make those decisions. The downside is that this information can be flawed, or even false. As a result, critical data and knowledge from monitoring and evaluation efforts must compete for people’s attention much more than before. Meanwhile, an evaluation expert may be saying things in complicated ways that are difficult for most people to understand. If evaluation experts focused on better narratives, it would be easier to compete with things like a top government official saying, for instance, that swallowing a disinfectant is a way to treat COVID-19. Along with making progress on the technical side, we should develop the capacity to effectively convey messages about evaluation, information, and evidence.
Are multilateral organizations and development agencies working better with governments than they were before?
I would say yes. International agencies are working better with the countries. For example, the recently launched Global Evaluation Initiative (GEI) is a coalition of organizations that seeks to unite forces to improve country-level M&E frameworks. The GEI approach recognizes that it is necessary to closely collaborate with countries to improve these processes and systems. UNICEF and the World Food Programme (WFP) have placed country-led evaluations – that is, evaluations not done by UNICEF/WFP, but rather directly by the governments – high on their agendas. The World Bank Group and other agencies are also following suit. This shows that in order to improve what international agencies are doing to support countries, it is necessary to closely listen to the countries and regard them as partners. Previously, it was a very top-down approach, with M&E efforts directed by multilateral organizations and other donors.
What do you expect in the way of future trends?
Future trends will include incorporating new methodologies based on new data and new technologies that didn't exist before or were even unthinkable, such as big data – that is, how to use big data in M&E processes. In addition, the pandemic has taught us to change the way we generate information; it no longer depends merely on face-to-face interviews. There are trends not just for improving the technical side – for example, in impact evaluations, where we always need to keep improving – but also in evaluations that look at a policy as a whole. In other words, how to generate information on causality using techniques that go beyond randomized trials.
This has been adapted from the original interview conducted in Portuguese by journalist Rodrigo Pedroso (@jor_pedroso on Twitter).
We welcome you to join the conversation! Comment on this blog or reach out to us on social media: LinkedIn and Twitter. What has been your experience? Share your stories with us.
If you would like to contribute your knowledge to this blog, we would be happy to work with you - please contact us at contactgei@globalevaluationinitiative.org.