Educating Evaluation Professionals: Core Knowledge and Competencies

Jos Vaessen, Marcia Joppert, Reinhard Stockmann
15 February 2022
While we live in a reality where multiple evaluation models are being put into practice, one should ask: what are the core skills and knowledge elements that evaluators and other evaluation stakeholders should be equipped with?

Jos Vaessen is an adviser at the Independent Evaluation Group, World Bank Group and the Global Evaluation Initiative.

Marcia Joppert is an evaluation specialist and Director, Brazilian Monitoring and Evaluation Network; Research and Operations Associate, The Evaluators’ Institute; Ph.D. Candidate, Claremont Graduate University.

Reinhard Stockmann is Professor of Sociology at Saarland University and Director, Center of Evaluation.

Editing support provided by Maria Fyodorova, Communications Consultant for GEI.



Over the last few decades, evaluation has truly become a global field of practice.1 Across continents, countries, policy fields and institutional contexts, the practice of evaluation has risen to prominence. At the same time, significant gaps in the institutionalization of evaluation remain, and ensuring the utility of evaluation, especially its ability to influence processes of accountability and learning, continues to be a challenge.

As evaluation has spread across the globe, the demand for training in evaluation has markedly increased. Traditionally, evaluation professionals have been trained in one or more of the social sciences (e.g., economics, political science, sociology, anthropology), often in combination with a more sector-specific orientation (e.g., health, education, agriculture, development cooperation).2 Although this type of training path is still relevant for the majority of evaluation professionals, we have seen a notable growth in evaluation-specific training opportunities3 that can be summarized into three broad categories:

  • Courses on evaluation as a part of other (non-evaluation focused) academic programs, mainly at the graduate level and, to a lesser extent, at undergraduate and post-graduate levels.
  • Evaluation-focused academic programs mainly at the graduate level (as well as a range of diploma, certificate and even post-graduate programs).
  • Professional trainings. This category is the most diverse in terms of topics, audiences and organizations delivering the training (e.g., formal training institutions, universities, private consulting companies, governmental and non-governmental agencies).

The Global Evaluation Initiative (GEI) is a global network of institutions and individual experts that, as one of its strategic objectives, seeks to effectively meet the evolving demand for evaluation-related knowledge and skills development. In this blog - and subsequent blogs in this series - we will discuss key concepts and challenges that we consider central to meeting this objective:

  1. Definition of evaluation and its boundaries
  2. Core knowledge and competencies needed for evaluators
  3. Nexus of evaluation and the data revolution
  4. Institutionalization of evaluation in organizational systems and society
  5. Pedagogical issues in teaching evaluation
  6. Competition and collaboration in the field of evaluation training
  7. Standards for academic and non-academic training in evaluation

In our first blog we discuss the first two topics above.


How do we define evaluation and what are the boundaries of the field?

There is no single, universally accepted definition of evaluation, although in its most common form it can be defined as the process of determining the merit, worth, or value of an evaluand.4 The evaluand is usually a policy intervention, a set of policy interventions or underlying aspects or processes relating to the design, implementation, and results of policy interventions. In addition, almost all definitions currently in use include the element of decision-making, such as: "for the purpose of reducing uncertainty in decision making".5 Carol Weiss defines evaluation as “the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy.”6 While some see evaluation as applied social science where the rules and standards of research apply, for others it is primarily an activity that involves formative aspects, or action research, or is simply viewed as an "art."7

Furthermore, there is no clear agreement on what is “in or out” when talking about evaluation. While for some, evaluation is mostly conducted during or after the implementation of a policy intervention (interim or ex post), for others it also includes the assessment practices that occur at the design or appraisal stage of an intervention (ex ante). Boundaries also become an issue when defining evaluation in the context of other knowledge and oversight functions in the public realm (e.g., appraisal, monitoring, policy research, inspection, performance or financial audit).8 While most will agree that each has a distinct purpose and scope, in practice, there are often overlaps.

Another important discussion is whether evaluation is a field of practice, a discipline or a profession. At one end of a continuum is the idea of evaluation as a divergent field of practices conducted by professionals with a variety of academic and professional backgrounds. At the other end, a growing movement considers evaluation a distinct profession in its own right.9 The latter view is buoyed by the growth of voluntary organizations for professional evaluators (VOPEs) across the globe.10 The proliferation of evaluation standards, evaluation-focused textbooks, and specialized peer-reviewed evaluation journals and knowledge events11 also supports this view.12 By contrast, the high level of heterogeneity in evaluative practices (in terms of purpose, scope, methods applied, nature of the evaluand, institutional context, etc.) and the fact that evaluations are conducted by experts with different backgrounds (e.g., members with specialized expertise coming together in one evaluation team) are more in line with the former view. An intermediate perspective is held by those who consider evaluation a trans-discipline, spanning and complementing multiple professional disciplines (e.g., evaluation in the field of education, evaluation in the field of health, evaluation in the field of crime and justice, etc.).13


What knowledge and competencies do evaluators need?

As the practice of evaluation has evolved over the last fifty-plus years - first in the United States, then in a growing number of European countries and, mainly in the last two decades, across all continents - different evaluation approaches have emerged (e.g., goal-oriented, management-oriented, learning-oriented, participatory and emancipatory approaches to evaluation).14 Some of these approaches have waxed and waned in prominence over time (e.g., the randomized controlled trials movement) while others have persisted in particular (geographical or institutional) contexts.15 This variety of evaluation models - often underpinned by a variety of ontological and epistemological foundations16 - clearly indicates the rich diversity of evaluation research. Arguably, such diversity is needed to cover the different evaluation exercises conducted during an intervention cycle, to accommodate the learning and accountability needs of assorted stakeholders, and to address the range of evaluative questions that may arise. On the other hand, it has contributed to ongoing fragmentation in evaluation training offerings and made it more difficult to establish a standardized knowledge and competency learning framework for evaluation professionals.

While we live in a reality where multiple evaluation models are being put into practice - some complementing each other, some at odds with each other - one should ask: what are the core skills and knowledge elements that evaluators and other evaluation stakeholders should be equipped with? A number of VOPEs, such as the American Evaluation Association, the European Evaluation Society and the International Development Evaluation Association, have developed useful competency frameworks. The same goes for professional networks such as the United Nations Evaluation Group and the Evaluation Cooperation Group (of the multilateral development banks). Based on a comparative analysis of several of these frameworks,17 the following main competency areas for evaluation practitioners emerge:

  • evaluation expertise,18
  • methodological expertise,
  • management expertise,
  • communication and interpersonal skills,
  • integrity and ethics,
  • institutional expertise,
  • substantive expertise, and
  • contextual expertise.


Other frameworks (often developed by VOPEs) exist and are broadly in line with the above.

Each of these competency areas merits careful consideration. However, recognizing that institution- and culture-specific circumstances always influence how evaluation is best practiced, context should guide which competencies merit more attention. In any evaluation training program - especially those targeting audiences from specific institutional or (sub-)national contexts - it is important to reflect on the following question: how can we teach widely accepted and applicable standards and models for evaluation research while at the same time instilling a thorough understanding of the context-specific organizational processes, norms and belief systems that make evaluation work in practice?


We welcome you to join the conversation! Comment on this blog or reach out to us on social media: LinkedIn and Twitter. What has been your experience? Share your stories with us.

If you would like to contribute your knowledge to this blog, we would be happy to work with you - please contact us.

You can also sign up for our newsletter here.



  • AEA (2018). AEA evaluator competencies.
  • Alkin, M., & Christie, C. A. (2004). An evaluation theory tree. Sage.
  • Barrados, M. & Lonsdale, J. (Eds.) (2020). Crossover of audit and evaluation practices. Routledge.
  • Cronbach, L. J. (1982). Designing evaluations of educational and social programs. Jossey-Bass.
  • Fitzpatrick, J., Sanders, J., & Worthen, B. (2012). Program evaluation: Alternative approaches and practical guidelines. Pearson.
  • Furubo, J. E., Rist, R. C., & Sandahl, R. (2002). International atlas of evaluation. Transaction Publishers.
  • Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Sage.
  • IEG (2019). Evaluation competency set. Independent Evaluation Group, World Bank.
  • Jacob, S., Speer, S., & Furubo, J. E. (2015). The institutionalisation of evaluation matters: Updating the international atlas of evaluation 10 years later. Evaluation, 21(1), 6–31.
  • LaVelle, J. M. (2014). An examination of evaluation education programs and evaluator skills across the world [Ph.D. Thesis]. Claremont Graduate University.
  • LaVelle, J. M., & Donaldson, S. I. (2015). The state of preparing evaluators. New Directions for Evaluation, 2015(145), 39–52.
  • Leeuw, F. L., & Furubo J. E. (2008). Evaluation systems: What are they and why study them? Evaluation, 14(2), 157-169.
  • Mertens, D. M. (1998). Research methods in education and psychology: Integrating diversity with quantitative and qualitative approaches. Sage.
  • Pawson, R., & Tilley, N. (1997). Realistic evaluation. Sage.
  • Rodríguez-Bilella, P., & Lucero, M. A. (2016). Evaluation as a global phenomenon: The development of transnational networks. In R. Stockmann & W. Meyer (Eds.), The future of evaluation (pp. 66–80). Palgrave Macmillan.
  • Rosenstein, B., & Kalugampitiya, A. (2021). Global mapping of the status of national evaluation policies. Global Parliamentarians Forum.
  • Scriven, M. (1991). Evaluation thesaurus. Sage.
  • Scriven, M. (2008). The concept of a transdiscipline: And of evaluation as a transdiscipline. Journal of MultiDisciplinary Evaluation, 5(10), 65-66.
  • Stockmann, R. (2011). Competing and complementary approaches to evaluation. In R. Stockmann (Ed.), A practitioner handbook on evaluation. Edward Elgar.
  • Stockmann, R., & Meyer, W. (2013). Functions, methods and concepts in evaluation research. Palgrave Macmillan.
  • Stockmann, R., Meyer, W., & Taube, L. (2020). The institutionalisation of evaluation in Europe. Palgrave Macmillan.
  • Stockmann, R., Meyer, W. & Szentmarjay, L. (Eds.) (2022). The institutionalization of evaluation in the Americas. Palgrave Macmillan.
  • UNEG (2016). UNEG evaluation competency framework.
  • Vaessen, J., & Leeuw, F. L. (Eds.). (2010). Mind the gap: perspectives on policy evaluation and the social sciences. Transaction Publishers.
  • Vaessen, J., & West Meiers, M. (2020). Defining evaluation. In M. Barrados & J. Lonsdale (Eds.), Crossover of audit and evaluation practices (pp. 41–53). Routledge.
  • Vedung, E. (2010). Four waves of evaluation diffusion. Evaluation, 16(3), 263–277.
  • Weiss, C. H. (1998). Evaluation: Methods for studying programs and policies. Pearson College Division.


[1] Furubo et al. 2002; Jacob et al. 2015; Stockmann et al. 2020; Vaessen and West Meiers 2020; Rosenstein and Kalugampitiya 2021; Stockmann et al. 2022.
[2] Stockmann et al. 2020: 496; Stockmann et al. 2022: 467.
[3] LaVelle 2014; LaVelle and Donaldson 2015.
[4] Scriven 1991.
[5] Mertens 1998: 219.
[6] Weiss 1998: 4-5.
[7] Cronbach 1982.
[8] For examples, see Barrados and Lonsdale 2020; Stockmann 2011: 62.
[9] Stockmann and Meyer 2013.
[10] According to the International Organization for Cooperation in Evaluation (IOCE) VOPE Directory, there are currently 152 VOPEs with about 40,000 affiliated members [accessed 18 January 2022].
[11] For example, conferences organized by VOPEs and the gLOCAL Evaluation Week.
[12] Rodríguez-Bilella and Lucero 2016.
[13] Scriven 2008; Vaessen and Leeuw 2010.
[14] The number of approaches across organizational systems and countries is highly diverse and there have been many attempts to develop comprehensive overviews (see for example Guba and Lincoln 1989; Alkin and Christie 2004; Fitzpatrick et al. 2012). See also Stockmann and Meyer 2013: 108ff.
[15] See Leeuw and Furubo, 2008; Vedung 2010.
[16] See Pawson and Tilley, 1997.
[17] AEA (2018), UNEG (2016) and IEG (2019).
[18] This includes a broad understanding of the purposes, institutionalization, processes and approaches of evaluation.