
Evaluation methodological approach


Focus the evaluation on key questions

WHAT IS THE PURPOSE?

Technical limitations make it impossible to provide quality answers to an excessive number of questions. This guide therefore recommends a maximum of ten questions.

How to choose the questions?

- Identify questions

A first version of the evaluation questions is proposed on the basis of:

  • The analysis of the intervention logic.
  • The analysis of the intervention rationale.
  • Issues that justified the decision to launch the evaluation.
  • Issues to be studied, as stated in the terms of reference.
  • Questions raised in the ex ante evaluation, where relevant.

In a second version, the list and wording of the evaluation questions also take into account:

  • Issues raised by key informants at the start of the evaluation.
  • Expectations of the members of the reference group.

- Assess the potential usefulness of answers

Assuming that a question will be properly answered, it is necessary to assess the potential usefulness of the answer, by considering the following points:

  • Who is to use the answer?
  • What is the expected use: knowledge, negotiation, decision-making, communication?
  • Will the answer arrive in time to be used?
  • Is the answer already known?
  • Is another study (audit, review) already underway that is likely to provide the answer?

If the choice of questions has to be discussed in a meeting, it may be useful to classify them in three categories of potential utility: higher, medium, lower.

- Check that nothing important has been overlooked

Experience has shown that leaving out the following types of questions is most harmful to the quality of the evaluation:

  • Questions on efficiency and sustainability.
  • Questions concerning negative effects, especially if those effects concern underprivileged groups.
  • Questions concerning very long-term effects.

- Assess the feasibility of questions

The feasibility (evaluability) of a question should be examined, but always after its usefulness. For this purpose the following should be consulted:

  • The service managing the intervention.
  • One or more experts in the field.
  • One or more evaluation professionals.

If the choice of questions has to be discussed in a meeting, it may be useful to classify them in three categories:

  • Strong probability of obtaining a quality answer.
  • Average probability.
  • Weak probability.

If a question is potentially very useful but difficult to answer, check whether a similar but easier question would be equally useful. For example, if a question concerns a relatively distant or global impact, its feasibility can often be improved by focusing on the immediately preceding impact in the intervention logic.
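The utility and feasibility classifications described above can be combined into a simple shortlisting exercise. The sketch below is purely illustrative: the question texts, category labels, numeric scores, and the ten-question cap all follow the guide's recommendations, but the scoring scheme itself is a hypothetical assumption, not part of the methodology.

```python
# Illustrative sketch only: shortlisting candidate evaluation questions
# by combining potential utility and feasibility categories.
# The numeric weights are hypothetical assumptions.

UTILITY = {"higher": 3, "medium": 2, "lower": 1}
FEASIBILITY = {"strong": 3, "average": 2, "weak": 1}

# Hypothetical candidate questions: (text, utility, feasibility).
questions = [
    ("To what extent has the intervention improved access to "
     "primary schooling?", "higher", "strong"),
    ("What are the very long-term effects on underprivileged "
     "groups?", "higher", "weak"),
    ("Has the sectoral financial support been managed "
     "efficiently?", "medium", "average"),
]

def shortlist(candidates, max_questions=10):
    """Rank candidates by utility x feasibility, keep at most max_questions."""
    ranked = sorted(
        candidates,
        key=lambda q: UTILITY[q[1]] * FEASIBILITY[q[2]],
        reverse=True,
    )
    return ranked[:max_questions]

for text, utility, feasibility in shortlist(questions):
    print(f"[{utility}/{feasibility}] {text}")
```

Note that the ranking puts a very useful but hard-to-answer question below a moderately useful one with average feasibility, which mirrors the guide's advice to examine feasibility only after usefulness and, where needed, to reformulate difficult questions rather than discard them.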

- Discuss the choice of questions

The choice of questions is discussed at the inception meeting. 

The selection is more likely to be successful if potential users have been consulted and have agreed on the selected questions, and if no legitimate point of view has been censored.

REASONS FOR SELECTING A QUESTION

Because someone raised it

Someone who proposes a question tends to cooperate in answering it and in accepting the conclusions. 

It is therefore preferable to select questions clearly requested by the actors concerned, for example:

  • Authorities or services of the Commission, especially those who participate in the reference group.
  • Key informants consulted by the evaluation manager or the evaluation team.

An actor may ask a question primarily with the intention of influencing or even obstructing the action of another actor. The potential usefulness of this type of question has to be examined carefully.

Because it is useful

A question is particularly useful if:

  • The intervention or one of its aspects is innovative and several actors expect it to be validated.
  • A decision is going to be taken and the conclusions may arrive in time to help in taking that decision.
  • A public debate is planned and the conclusions may be ready in time to feed into the debate.

Because the answer is not known

A question is useless if:

  • Another evaluation, an audit or a study has already answered it.
  • It has already been asked in many other evaluations and has always had the same answer.

It may nevertheless be useful to ask the question again if the answer requires verification.

Assessing the overall intervention through a limited number of questions

Focusing on questions does not prevent one from drawing conclusions on the intervention as a whole. On the contrary, it makes it possible to formulate an overall assessment which builds upon professional data collection and analysis, and avoids the risk of being superficial and impressionistic. 

This can be explained with the analogy of oil exploration. One cannot discover oil by just looking at the surface of the earth. Oil seekers need to explore the underground. The same applies to an intervention being evaluated. The surface of things is visible through reporting, monitoring information, and changes in indicators, but what needs to be discovered remains invisible, e.g. the EC's contribution to changes, sustainability, etc.

Evaluation questions can be compared with the oil seekers' exploratory wells. Each evaluation question provides a narrow but in-depth view into what is usually invisible. By synthesising what has been learnt by answering the questions, it becomes possible to provide an overall assessment of the intervention. The process can be compared to that of mapping oil fields after an exploratory drilling campaign.

QUESTIONS AND COMPLEXITY OF EVALUATION

Why work with a limited number of questions?

Focusing an evaluation on a few key questions is all the more necessary when the intervention concerned is multidimensional and when the evaluation itself is multidimensional. In that case, if one wanted to evaluate all the dimensions of the aid and all the dimensions of the evaluation, the work would be extremely costly or very superficial. It is therefore necessary to make choices.

Multidimensional interventions

An intervention is multidimensional if it concerns several sectors, applies several instruments, and targets several objectives, population groups and/or regions.

Example of a multidimensional intervention (EC aid at country level):

  • Sectors: education, transport, water, agriculture, health, trade.
  • Instruments: global financial support, sectoral financial support, projects, etc.
  • Objectives: reduction of poverty, …, fair access to primary schooling, …, development of public management capacities, etc.
  • Target groups: pupils, firms, farmers, communities, women, etc.
  • Regions: the entire country, poor regions, underprivileged urban areas, etc.

Multidimensional evaluations

An evaluation is multidimensional if it refers to several families of evaluation criteria and covers several cross-cutting issues and/or neighbouring policies.

Example of a multidimensional evaluation (EC aid at country level):

  • Criteria: relevance, effectiveness, efficiency, sustainability, impact, Community value added, coherence/complementarity.
  • Cross-cutting issues: gender, environment, good governance, etc.
  • Related policies: other EC policies (refugees, trade, agriculture, fishing, etc.); policies of the partner country and other sponsors in the sectors concerned.