There are varied definitions of evaluation, and policymakers may be guided by national definitions, approaches or guidance. The following definition may prove useful for those new to the concept:

Glossary
evaluation

the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results.

Source

IOM, 2011b.

Monitoring may ask questions such as “what is the current status of implementation? What has been achieved so far? How has it been achieved? When was it achieved?” Evaluation additionally helps to understand “why and how well it was achieved” and passes judgement on the worth and merit of an intervention or strategy. Evaluation allows for a more rigorous analysis of the implementation of an intervention and can explain why one effort worked better than another. It enriches learning processes and improves services and decision-making capability. It also provides information not readily available from monitoring, derived from the use of evaluation criteria such as impact, relevance, efficiency, effectiveness, sustainability and coherence.

Article / Quotes

Evaluation analyses the level of achievement of both expected and unexpected results by examining the results chain, processes, contextual factors and causality using appropriate criteria such as relevance, effectiveness, efficiency, impact and sustainability. An evaluation should provide credible, useful evidence-based information that enables the timely incorporation of its findings, recommendations and lessons into the decision-making processes of organizations and stakeholders.

Source

UNEG, 2016.

Why evaluate?

At its simplest, the objective of the evaluation stage of the policy cycle is to take a close look at the policy being implemented and establish whether things are working in line with expectations, and indeed “how good [a policy] is, and whether it is good enough” (Davidson, 2004).

Evaluation can also initiate a discussion of causality. It allows for a more rigorous and comprehensive analysis of the policy and may provide insights into why one effort, programme or intervention of the policy worked better than another. Evaluation provides practitioners with the in-depth, evidence-based data required for decision-making, as it can assess whether, how, why and what type of change has occurred as a result of the policy initiative. It enriches the learning processes, which in turn improve both decision-making about the current policy implementation and future implementations of policy. Evaluation helps to identify whether a policy needs adjustments, whether it is no longer needed and should be terminated, or whether it should be completely redesigned. Importantly, evaluation can be used to underline accountability for the use of public or donor funds, as well as for compliance with relevant national laws and international standards. Evaluation provides information not readily available from monitoring, because it allows for in-depth analysis against set criteria such as relevance, coherence, efficiency, effectiveness, impact and sustainability, among others (OECD/DAC, 2020).

Despite its considerable value to the policy cycle, evaluation is often overlooked. Once the heavy lifting of implementation has been concluded, new priorities arise and policymakers can find themselves focused on the next policy challenge. One frequent reason for inadequate attention to evaluation is that it is not properly integrated into the implementation plan:

Article / Quotes

… evaluation may not be sufficiently built into policy design. Again, systemic pressures often undermine good intentions. Early in the policy process, civil servants are under pressure to deliver; evaluation can be seen as a problem for another day.

Source

Hallsworth, Parker and Rutter, 2011.

Other impediments include a lack of expertise, the cost involved, and limited institutional or governmental commitment. Failing to secure the time and resources to evaluate a policy and its implementation risks repeating mistakes and missing opportunities to adjust and improve future interventions.

Figure 3 sets out how evaluation can be used to answer questions about the value of a policy and its interventions. The approach, timing and methodologies employed will differ depending on the key evaluation questions that are of interest, including those for which the policymakers and implementers are directly or indirectly accountable.

Image / Video
Figure 3. Questions to ask during evaluation
Source

Adapted from IOM (2020). 

Note: Although these criteria are rights-neutral, it is important to consider rights for each of these objectives (see IOM's Rights-Based Approach to Programming Manual, p. 70).

An important part of evaluation is articulating the key questions for the enquiry. These will be based in part on the target or objective of the evaluation and its scope. As noted in Figure 3, evaluation can focus on a number of criteria.

Policy Approaches
Considerations for planning an evaluation

STEP 1. DEFINE THE PURPOSE AND FEASIBILITY OF EVALUATION

Purpose: What does the evaluation strive to achieve? Who will use the findings? How will the findings be used? What type of evaluation is most appropriate to answer the evaluation questions? Which evaluation criteria are most relevant to answer the evaluation questions?

The following example evaluation questions come from Davidson (2009), who underlines the importance of working with stakeholders to ensure that what is important to know and understand is brought to the surface early:

  • What was the quality of the programme’s content/design and how well was it implemented?
  • How valuable were the outcomes to participants? To the organization, the community, the economy?
  • What were the barriers and enablers that made the difference between successful and disappointing implementation and outcomes?
  • What else was learned (about how or why the effects were caused/prevented, what went right/wrong, lessons for next time)?
  • Was the programme worth implementing? Did the value of the outcomes outweigh the value of the resources used to obtain them?
  • To what extent did the programme represent the best possible use of available resources to achieve outcomes of the greatest possible value to participants and the community?
  • To what extent is the programme, or aspects of its content, design or delivery, likely to be valuable in other settings? How exportable is it?
  • How strong is the programme’s sustainability? Can it survive/grow in the future with limited additional resources?
Note: Only ask evaluation questions if you are ready to take action based on the answers. Only use the criteria that relate to the questions you want answered.

Feasibility: Are there enough funds? Are the costs and effort justified by the likely benefits of the evaluation?

STEP 2. PREPARE THE TERMS OF REFERENCE (ToR) FOR THE EVALUATION 

Ensure the following elements are included and clarified in the ToR:

  • Evaluation context: Briefly describes the political, economic and social environment, the project, its objectives and intended results.
  • Evaluation purpose: Explains the main objective of the evaluation, and identifies the intended audience and how the findings will be used.
  • Evaluation scope: Specifies the time period, geographical coverage and other relevant components.
  • Evaluation criteria (relevance, effectiveness, impact, efficiency and/or sustainability) and questions: Lists selected criteria and related questions.
  • Methodology section: Describes the suggested type of data collection and analysis methods.
  • Deliverables section: Specifies outputs such as the inception report, initial findings, and the draft and final reports.
  • Time schedule: Specifies tasks and milestones by date and responsible party.
  • Specifications on roles: Specifies roles of stakeholders involved, including the managerial role of steering committees and the participatory approach taken to engage stakeholders.
  • Budget: Takes into account internal or external evaluators and the costs of data collection (such as field visits or data analysis costs).
  • Cross-cutting themes: Lists the relevant themes.
 
Source

Davidson, 2009, slides 10–11 [text modified for clarity]; IOM, 2020.

Types of evaluation: Considerations of timing, purpose, scope and evaluators

An evaluation is often conducted to establish whether a given policy is doing what it was intended to do and how well it is doing so. However, the focus can be narrower, on specific aspects such as the process, efficiency of effort, cost effectiveness, beneficiary and community satisfaction, consistency with rights and a rights-based approach, or why the policy implementation did, or indeed did not, work. The important thing is to formulate the objective in the clearest possible terms, then frame evaluation questions that will elicit the information needed to make such judgements. The approach taken to evaluation, and the investment in it, will depend on:

  • What needs to be understood about the policy, including how effective it is;
  • How high the stakes are for the agency or government;
  • Whether the policy itself is considered to have a high social or economic impact;
  • The influence of stakeholders seeking to ensure certain evaluation criteria are included (for instance, trade unions may want to ensure that a labour entry programme protects nationals and avoids social dumping);
  • How experimental the policy has been (policy pilots for example);
  • Availability of time, money and capability to invest in the evaluation process;
  • Ethical considerations, which can be particularly relevant when vulnerable migrants are beneficiaries.

Policy Approaches
Choosing the most appropriate type of evaluation

To choose the type of evaluation, key considerations include:

a) The timing, purpose and scope of the evaluation, and who will conduct it.

TIMING OF EVALUATION: 

  • Before implementation, to assess the validity of its design.
  • At the early stages of implementation, to provide immediate feedback to managers about an ongoing operation (an approach mostly used in emergencies).
  • During implementation, to improve performance.
  • At the end of implementation, for the benefit of stakeholders not necessarily directly involved in the management of the implementation (such as parliamentary groups or civil society).
  • After the activities have concluded, to assess results and related short- and long-term changes.

PURPOSE OF EVALUATION:

  • Formative evaluations are conducted during implementation to adjust the intervention. They are intended primarily for programme managers and direct actors.
  • Summative evaluations are conducted at the end of implementation to provide insights on its effectiveness. They identify best practices and are often of interest to donors, as well as to civil society and other stakeholders with an oversight role.

SCOPE OF EVALUATION:

  • Process evaluation: Focuses on activities and outputs, to assess the systems and practices used for implementation​.
  • Outcome evaluation: Focuses on outputs and outcomes, to assess the extent to which a project successfully produced change.
  • Impact evaluation: Focuses on impact, to determine the entire range of long-term effects of the project.

EVALUATORS:

  • Internal evaluations: Conducted within the government, administration or institution by actors who were not involved in the conceptualization or implementation of the intervention.
  • External evaluations: Commissioned from external entities, such as national evaluation societies, international organizations, civil society organizations and national human rights institutions (NHRIs). These benefit from greater impartiality.
  • Joint evaluations: Developed jointly by government representatives and external evaluators.

b) Whether to opt for an internal or external mode of evaluation, weighing factors such as (Conley-Tyler, 2005):

  • Cost;
  • Availability;
  • Knowledge of programme and operations;
  • Knowledge of context;
  • Ability to collect information;
  • Flexibility;
  • Specialist skills and expertise;
  • Objectivity and perceived objectivity;
  • Accountability for use of government funds;
  • Willingness to be constructive and to bring in new perspectives;
  • Utilization of evaluation (for instance, internal evaluators may be better placed to have findings accepted and promote their use over the long term);
  • Dissemination of results;
  • Ethical issues;
  • Organizational investment.

Some State authorities and international organizations make their approach to evaluation public. Such a policy can be useful to clarify accountabilities for evaluation at the institutional level, and may set out key criteria for evaluation and how stakeholders can expect to be engaged. The value of an evaluation policy is that it provides a framework for thinking about institutional capabilities and capacity, including skills and knowledge, and about decisions regarding reliance on external evaluation or investment in in-house capacity.

Example
Evaluation approaches and policies by migration authorities

Canada

To inform ongoing policy development and programme design, Immigration, Refugees and Citizenship Canada (IRCC, formerly Citizenship and Immigration Canada, CIC) has an internal evaluation policy and function. This function plays a strategic role in providing objective, timely and evidence-based findings, conclusions and recommendations on the relevance and performance of programmes, policies and initiatives. IRCC's Evaluation Division conducts evaluations independently and makes them available to ministers, central agencies and deputy heads to support policy and programme improvement, expenditure management, cabinet decision-making and public reporting.

Source: CIC Evaluation Policy.

OECD

The Development Co-operation Directorate of the Organisation for Economic Co-operation and Development (OECD), through its Development Assistance Committee (DAC) Network on Development Evaluation, assesses development programmes across the world. The Network is composed of Member States and multilateral organizations, which have established a set of evaluation criteria and principles for their use.

Source: DAC Network on Development Evaluation.

One form of exploratory policymaking is the policy pilot, which can, if effectively conceived, serve as a policy experiment, providing a useful opportunity for gathering evidence while limiting cost and effort.

The role of evaluation is critical for pilots: it will determine next steps and can, where appropriate, be used to advocate for and/or justify full roll-out. The two evaluations below both concern pilots. Each evaluation has been made public, and each report sets out the objectives and the evaluation design.

Example
Policy pilots – evaluation objectives and design

Example 1: New Zealand – the pathway student visa pilot

Immigration New Zealand, in consultation with the international education sector, developed the Pathway student visa for international students wishing to study more than one course or programme of study, at one or more providers, in New Zealand. The objectives of the Pathway visa are to: provide efficiency gains for Immigration New Zealand and the international education sector; offer education providers an additional advantage when promoting New Zealand as a study destination; and increase the retention of high-quality international students.

The Pathway visa was piloted from 7 December 2015 to 30 November 2018 with an interim evaluation completed in August 2017. The Ministry of Business, Innovation and Employment has its own in-house evaluation capacity. The evaluation was based on findings from surveys of international students and immigration advisers, interviews with education providers, immigration officers and advocacy groups, and analysis of administrative visa data. The report articulates the evaluation methodology and its objectives.

Evaluation objectives and questions

The purpose of the evaluation was to assess the level of interest in the new visa, how well it is working and the extent to which the objectives were met. The key evaluation questions were:
  • What has been the student, and eligible provider, uptake of the visa and what are the barriers to uptake?
  • What are the characteristics and pathways of students using the visa and those not using the visa but eligible to do so?
  • How well is the policy working from a process perspective, including the establishment of pastoral care arrangements, and what could be improved?
  • To what extent are the objectives of the policy being met?
  • What are the immediate/intermediate outcomes of the visa, including any unintended consequences?

Evaluation method

This evaluation used a mixed-methods approach, including analysis of administrative data, online surveys, and interviews with immigration officers, education providers and advocacy groups (representing providers in different education sectors). Where possible, the methods were chosen to allow triangulation of data to meet evaluation objectives.

Source: New Zealand Government – Ministry of Business, Innovation and Employment (MBIE) Hīkina Whakatutuki, 2018.

 

Example 2: The evaluation of the "Blue Birds" circular migration pilot in the Netherlands

This is a similar example of how a pilot was implemented and evaluated, this time in the Netherlands, to gain a better understanding of policy potential. In this case the outcomes were poor, so the evaluation focused both on what went wrong and on the circumstances that would be needed for such a pilot to succeed. The following text is taken from the report’s introduction:

Evaluation objectives and questions

The purpose of the evaluation was to understand the process of the “Blue Birds” circular migration pilot and the challenges that arose as well as the lessons to be learned for future possible circular or temporary labour migration projects or programmes. The main question that the study sought to answer was this: Why was the HIT Foundation unable to reach its target of 160 migrants working in regular vacancies within the Netherlands in shortage sectors after one year?

In order to answer that main question, the study asked and answered these further questions:

  • To what extent did the assignment framework (choice of countries, limitations regarding length of stay, education level, exclusion of high- and low-skilled migrant workers, exclusion of health sector workers, focus on employment shortage areas) influence the failure to reach the set goals?
  • To what extent did external factors like the economic crisis and changes in parliament during 2010 influence the implementation process of the circular migration pilot?
  • To what extent did the quality of the implementation process directly contribute to the goals not being reached?
  • What lessons can be learned from the pilot?
  • Under what conditions would a new circular migration pilot have a chance to succeed?

Evaluation method

To make its assessment, the review team undertook a thorough review of literature, government documents and project documentation, as well as interviews with 50 key stakeholders over a two-month period. Interviews with representatives from other countries (such as GIZ in Germany) were conducted where necessary to understand key learnings from their circular/temporary migration projects. A mix of people from different sides of the project were interviewed to ensure triangulation and to understand the needs and perspectives of all sides. These included several members of the HIT Foundation, which implemented the project; members of the governmental steering committee, including representatives from each of the relevant ministries; the pilot project advisory board members; and the recruiters, companies and migrants involved in the project.

Source: Siegel and Van der Vorst, 2012 [text modified for clarity].

Managing an evaluation

Finally, once the planning stage is complete, the evaluation process must be put in place and managed; at the end of the process, the lessons drawn from it must be applied. Below are some useful points to bear in mind when proceeding:

Policy Approaches
Managing and using an evaluation

1. MANAGING AN EVALUATION 

  • Supervise the creation of a workplan based on a ToR:
    • Oversee the workplan while maintaining communication with key stakeholders;
    • Determine how the findings will be reported, as both senior management and project stakeholders should review the evaluation report ahead of finalization.
  • Ensure quality evaluation​: Ensure institutional norms, procedures and technical standards are being met.

2. USING AND FOLLOWING UP ON AN EVALUATION 

  • Using evaluation: Develop a matrix for senior management and stakeholders to review the findings and indicate whether they accept, partially accept or reject each recommendation; describe follow-up actions to be taken, with concrete timelines and allocation of responsibilities.
  • Following up or monitoring implementation of recommendations: Define a communications approach to disseminate findings and recommendations among stakeholders, bearing in mind potentially competing perspectives and the best timing to act on recommendations.
 
Source

IOM, 2020.

To Go Further
Related forms of enquiry, review and oversight relevant to this stage of the policy cycle

While the emphasis in this chapter has been on evaluation, there are other approaches that shed light on the value of a policy. These include research and study by academics, assessments, audits and other oversight reporting. These additional forms of feedback can be of high quality and methodologically sound, and can provide useful insights to refine policy settings. For instance, there can be collaboration between government and academia to enable scrutiny from a viewpoint that is independent of the actual policymaking process, without vested interest (further details in Data, research and analysis for policymaking).

It is useful to understand how these other approaches may be similar to evaluation in some aspects, but fundamentally differ in function and purpose. Policymakers will need to decide what form of enquiry is required and is feasible.

Audits: Audits focus on adherence to established procedures and financial accountability, whereas an evaluation looks more at the merit of an intervention based on set standards (see Types of evaluation: Considerations of timing, purpose, scope and evaluators above). Auditors may focus on performance reviews or on how budgets have been expended and whether value-for-money has been achieved. Like evaluations, audits may be conducted within government by independent audit units or agencies that have defined legal and political authority.

Research: Research and evaluation are closely related but quite distinct concepts. Evaluation is perhaps best understood as a specific type of research exercise; it focuses on assessing a specific policy with the purpose of both fulfilling necessary accountabilities and learning from the results of implementation to improve future policymaking. The research process has a much broader remit: it can investigate and seek to understand any aspect of a policy and its implementation, and may inform all stages of the policymaking cycle. One shorthand distinction is that evaluation is about both learning and accountability, while research focuses on the learning alone. Further, as regards learning, evaluation is designed to improve something, while research is designed to prove something.

Key messages
  • Evaluation provides insights into the merit of a policy initiative, which can then lead to adjustments or improvements; or, if necessary, to the redesign or abandonment of the policy.
  • Decision-making regarding evaluation includes consideration of scope, purpose, timing and who will conduct the evaluation.
  • Practical considerations when planning, commissioning, managing and using an evaluation include money, resources and expertise.
  • Monitoring and evaluation should be considered early in the policy cycle, during the formulation stage, so that sufficient resources are allocated and lessons can be drawn from them.