Judgement

This section is structured as follows:

  • Conclusions and lessons
  • Recommendations
  • Ethical principles

CONCLUSIONS AND LESSONS

What does this mean?

Conclusions provide clear answers to the evaluation questions and incorporate value judgements. Lessons are conclusions that can be transferred to subsequent cycles of the same intervention or to other interventions.

What is the purpose?

  • To ensure that the report genuinely answers the questions asked.
  • To identify clearly the points at which the evaluation involves value judgements, and to ensure that those judgements are neither implicit nor poorly justified, either of which would be a weakness.

How should they be formulated?

Conclusions and lessons stem from the preceding steps as follows:

Conclusions answer the questions

The questions asked at the beginning of the evaluation are answered through the conclusions. A conclusion may answer several questions, and several conclusions may answer a single question.

Provided that all the questions asked have been answered, the evaluation team may present additional conclusions to take account of important unexpected information and results.

The conclusions follow from data and findings

When writing a conclusion, the evaluation team judges one aspect of the intervention, for example: a strategic guideline (is it relevant?), a practice (is it efficient?), an expected effect (was it obtained?) or an unexpected one (is it positive?).

Thus, conclusions stem from the data and evidence collected, the analyses and interpretations performed, and the findings and new knowledge generated.

Conclusions are based on judgement criteria

To formulate its conclusions, the evaluation team applies the judgement criteria (also called "reasoned assessment criteria") agreed upon in the first (desk) phase of the evaluation. Data collection and analysis are structured according to these criteria. Wherever possible, the findings are compared against targets.

  • Example: The support has contributed to a 30% increase in the number of qualified and experienced teachers in the areas where ethnic minority X is concentrated (finding), which has allowed those areas to catch up with the country average (target).

At the draft final report stage, the evaluation team may have to refine its judgement criteria and targets. In such a case, the issue is discussed with the reference group.

A lesson is a transferable conclusion

A lesson is a conclusion that can be transferred to subsequent cycles of the same intervention or to other interventions.

  • Example: By linking its disbursements to specifically designed performance indicators, the EC can successfully contribute to improving the capacity to enrol pupils from disadvantaged groups.

How should they be presented?

One chapter of the report presents the conclusions relating to each question, as well as the conclusions that emerge from points not raised by the questions. Within the chapter, the conclusions are organised in clusters in order to provide an overview of the assessed subject.

The chapter does not follow the order of the questions or that of the evaluation criteria (effectiveness, efficiency, coherence, etc.).

It includes references to the sections of the report, or to the annexes, showing how the conclusions derive from the data, interpretations, analyses and judgement criteria.

The report includes a self-assessment of the methodological limitations that may restrict the scope or use of certain conclusions.

A paragraph or sub-chapter summarises the three or four major conclusions, organised in order of importance, while avoiding repetition. This practice makes it easier to communicate the evaluation messages addressed to policy-makers within the Commission.

The conclusion chapter presents not only the successes observed but also the issues that call for further thought on modifications or a different course of action.

Suggestions

  • Drafting a good conclusion requires attention to clarifying and justifying value judgements. It is therefore preferable to focus on a few key conclusions rather than on a large number of poorly justified ones. The evaluation team and the people in charge of quality assurance are accordingly advised to reread the final report carefully and to eliminate any inessential or unintended value judgements.
  • It is difficult to discuss the judgement criteria and targets calmly with the reference group at the end of the evaluation, because everyone can immediately see how the discussion will influence the conclusions. It is therefore preferable to explain the criteria in as much detail as possible at the inception stage.
  • Whenever possible, the evaluation report states the findings (which follow only from facts and analysis) separately from the conclusions (which involve a value judgement). This approach demands explicit judgement criteria and enhances quality.
  • The evaluation team presents its conclusions in a balanced way, without systematically favouring the negative or the positive conclusions.
  • If possible, the evaluation report identifies one or more transferable lessons, which are highlighted in the executive summary and presented in appropriate seminars or meetings so that they can be capitalised on and transferred.
RECOMMENDATIONS

What is this?

The recommendations are derived from the conclusions. They are intended to improve or reform the intervention within the current cycle, or to prepare the design of a new intervention for the next cycle.

What is the purpose?

  • To optimise the use of the evaluation in the form of feedback.
  • To foster a constructive approach and easier take-up when the evaluation reveals problems.

How should they be drafted and presented?

The recommendations must be related to the conclusions without replicating them. A recommendation derives directly from one or more conclusions. 

The recommendations must be clustered and prioritised. The report identifies the addressees of each recommendation, e.g. the EC Delegation or the services in charge of designing the next intervention.

The recommendations are useful, operational and feasible, and the conditions of their implementation are specified. Wherever possible and relevant, the main recommendations are presented as options, with the conditions attached to each option and the foreseeable consequences of implementing it.

The recommendations are presented in a specific chapter. This chapter highlights the recommendations derived from the three or four main conclusions. 

How should they be promoted?

The recommendations are valuable only insofar as they are considered and, where possible, taken up by their addressees.

To promote their take-up, the evaluation manager drafts a fiche contradictoire in order to:

  • List the recommendations in a shortened form.
  • Collect the addressees' responses.
  • Report on the actual follow-up to the recommendations, if any.

Advice

  • If a recommendation does not clearly derive from the conclusions, it probably reflects preconceived ideas or the tactical interests of one of the stakeholders, and its presence in the report could discredit the evaluation. The evaluation manager must therefore be careful to refuse any recommendation that is not based on the data collected and analysed, and on the conclusions.
ETHICAL PRINCIPLES

What is this?

The conclusions include value judgements on the merits and worth of the intervention. This dimension of the evaluation exercise is particularly sensitive, and the evaluation team must therefore respect specific ethical principles.

What is the purpose?

  • To guarantee an impartial and credible judgement (also called "reasoned assessment").
  • To ensure that the judgement does not harm anyone.

What are the main principles?

- Responsibility for the judgement

The conclusions are primarily a response to the questions. Members of the reference group are partially responsible for the judgement insofar as they orient it through the evaluation questions they validate.

The external evaluation team also contributes to the preparation of the judgement by making proposals to define the questions, clarify the judgement criteria and set the targets.

In the synthesis phase, the evaluation team applies the agreed judgement criteria as faithfully as possible and produces its own conclusions. The conclusions are discussed within the reference group but remain the sole responsibility of the evaluation team.

As part of the quality assurance process, the evaluation manager can require sounder justification of a judgement or better application of an agreed judgement criterion. However, he or she cannot require the removal or amendment of a conclusion that is methodologically sound.

- Legitimacy of the judgement

The questions and criteria take into account the needs and point of view of the public institution that initiated the evaluation. 

The members of the reference group contribute different points of view, which reinforces the legitimacy of the evaluation. 

During the desk phase, the evaluation team holds interviews, which may enable it to identify points of view that were not expressed by the reference group members. It raises these points at reference group meetings and may take them into account in the judgement criteria.

More generally, the evaluation team has a responsibility to bring to light important findings and judgement criteria that have arisen during the evaluation process, even if they are not covered by the evaluation questions, provided that such points are legitimate.

A point of view is legitimate if:

  • It is expressed by stakeholders or in their name.
  • It expresses an aspect of the public interest and not the individual interest of one person or the private interest of an organisation.
  • It is compatible with basic human rights.

- Impartiality of the judgement

The impartiality of the judgement concerns the entire evaluation, that is, the choice of questions and judgement criteria, the determination of targets and the formulation of conclusions.

The entire process is exposed to risks of partiality, for example:

  • The evaluation team favours its own preconceptions.
  • The evaluation team implicitly favours the point of view of one of the stakeholders.
  • The evaluation team does not hear, understand or take into account the point of view of one of the stakeholders.
  • The evaluation team systematically focuses on the negative or positive conclusions.

Where there are differences in ways of judging, in the judgement criteria or in the target levels, impartiality consists in:

  • Making sure that evaluation team members are familiar with, and respectful of, the beliefs, manners and customs of the groups concerned.
  • Respecting all cultures and standpoints, whilst conforming to universal values as regards minorities and particular groups, such as women. In such matters, the United Nations Universal Declaration of Human Rights (1948) is the operative guide.
  • Being aware of asymmetrical power relations, and correcting the biases arising from them.
  • Making sure that all the opinions are heard, even if they are not conveyed by the loudest voices or the majority.
  • Reporting on differences in the reports and at reference group meetings.
  • Explaining the choices transparently (Who made the choice? Why? What were the alternatives?).

In the event of divergence, one solution is to judge against several criteria and/or to formulate several conclusions corresponding to different points of view. This solution has the drawback of diluting the conclusions and thus making the evaluation less conclusive.

It is often preferable to make choices and to explain them transparently.

- Protection of people

The conclusions concern the merits of the evaluated intervention, not the people who implement it or benefit from it. 

Individuals' names are cited only when this enhances the credibility of the evaluation. The evaluation team must respect people's right to provide information in confidence and ensure that sensitive data cannot be traced to its source. Before citing a person or organisation, the evaluation team or any other evaluation actor anticipates and avoids the risks involved for that person or organisation. 

Evaluations sometimes uncover evidence of wrongdoing. Poor professional practices are never reported in a traceable way. However, an evaluation team member who encounters illegal or criminal acts deals with them as any other citizen should. In the latter case, the issue should be discussed with the evaluation manager.