
Evaluation methodological approach


Utilisation


This section is structured as follows:

  • Users of an evaluation
  • Types of use
  • Evaluation and decision making
  • Evaluation and knowledge transfer
  • Dissemination of the evaluation
  • Documents presenting the evaluation
  • Presenting an evaluation in a seminar

USERS OF AN EVALUATION


Who do we mean?

Evaluation is intended for a variety of users:

  • policy-makers and the designers of the intervention
  • managers, partners and operators involved in the implementation
  • institutions that granted funds and to which the managers of the intervention are accountable
  • public authorities that conduct related or similar interventions
  • actors in civil society
  • experts.

What is the purpose?

To optimise the usefulness of the evaluation for the various partners, in particular:

  • To ensure that the evaluation meets the expectations of the targeted users, in a way and at a time that meet their needs.
  • To ensure that the evaluation has the required credibility vis-à-vis the targeted users.

Policy-makers and designers

Policy-makers and designers use the evaluation to prepare the launch of new interventions, the reform of existing interventions, the choice of strategic orientations, and decisions on the allocation of budgetary, human and other resources.

They need information that:

  • is directly exploitable in the decision-making process
  • arrives on time
  • answers their questions clearly, concisely and reliably.

They are interested in strategic issues, external coherence and global impacts, which constitute the ultimate goal of the intervention.

Managers, partners and operators

The managers are responsible for the implementation and monitoring of the intervention, from headquarters to the field. The actors closest to the public are the operators. Field-level operators may belong either to the EC or to partner organisations that share responsibility for the implementation.

They use evaluation findings about the results of their action as feedback.

They need information that arrives as early as possible to remedy problems or validate changes. They are able to interpret complex and technical messages. 

They are interested in the direct results of the intervention, in the needs and behaviour of the targeted group, and in interactions between partners.

Other actors

The institutions that funded an intervention expect accountability. This applies to Parliament or the Council of Ministers, but also to all co-funders. Taxpayers and citizens are also addressees of an evaluation.

The public authorities that conduct related or similar interventions are potential users of the evaluation, especially in the form of transfer of lessons learned. The same applies to the expert networks concerned by the intervention. 

Finally, an evaluation is likely to be used by the actors in civil society, especially those representing the interests of the beneficiary groups.

Recommendations

  • From the launch phase, draw up an inventory of the potential users and classify them in the above categories.
  • Question key informants to understand the users' expectations.
  • Choose the expectations to focus on, especially in relation to the challenges and responsibilities of the institution that initiated the evaluation.
  • Draw up a communication and dissemination plan suited to the main users targeted.
  • Take into account the different levels of information in relation to the users: strategic for the decision-makers, technical and operational for the managers, general for outside actors.

TYPES OF USE

What does this mean?

An evaluation can be used:

  • As an aid in decision-making.
  • As an aid in making judgements and informing public opinion.
  • To promote knowledge and understanding.

These three types of use are not mutually exclusive. It is necessary to understand in order to judge, and to judge in order to decide.

What is the purpose?

To optimise the usefulness of an evaluation, that is:

  • To choose and draft the questions in relation to the expected use.
  • To adjust the schedule and dissemination in relation to the expected use.

Assisting decision making

The evaluation may be undertaken for the benefit of those who have to decide or to negotiate an adjustment or reform of the evaluated intervention. In that case it is used to adjust the implementation, to design the next intervention cycle or to redefine political orientations. 

To facilitate this type of use, also called feedback, the evaluation questions must be asked in relation to the decision-makers' expectations and to their planned decision-making agenda at the time the report is submitted. 

The evaluation can aid decision-making in two different ways:

  • By formulating conclusions independently, and then proposing recommendations. This type of evaluation is referred to as "summative".
  • By favouring the involvement of the decision-makers concerned, or at least their close collaborators, with a view to creating a phenomenon of take-up or direct feedback. This type of evaluation is referred to as "formative".

Evaluations may assist decision-making in different ways, depending on the context of the decision:

  • Recommendations may be made to the managers and/or designers of the intervention. Evaluations that favour this type of use are referred to as "managerial".
  • Recommendations may be made to all the partners and co-funders of the intervention. Evaluations put to this type of use are referred to as "partnership".
  • Finally, an evaluation may be conceived as an aid to negotiation and problem-solving between a wider range of stakeholders, including interest groups and actors in civil society. Evaluations used for this purpose are referred to as "pluralistic".

Assisting the formulation of judgements

The evaluation may help users to shape their opinion on the merits of the intervention. 

The formulation of an overall assessment is particularly useful for accountability purposes. In this case, the evaluation examines the merits of the intervention in relation to the different points of view (summative evaluation). It answers questions that are important for the funding institutions. The report is accessible to the general public. The independence of the evaluation and the transparency of the judgement are highlighted. 

In this instance, particular attention is paid to the definition of judgement criteria (also called "reasoned assessment criteria"). Yet the judgement itself becomes final only when the final report is submitted and its conclusions have been discussed. Using the evaluation for accountability purposes therefore means waiting until the end of the process.

Knowing and understanding

Apart from assisting in making decisions and formulating judgements, which are the two main forms of use, the evaluation may also enable users to learn from the intervention, to better understand what works and what does not, and to accumulate knowledge. Indirectly, it contributes to transferring the knowledge thus acquired to professional networks that may have no direct link with the evaluated intervention.

Unlike feedback, which directly concerns those responsible for the evaluated intervention, the transfer of lessons is an indirect process that takes place through networks of experts both within and outside the European Commission. 

Capitalising on knowledge often starts during the evaluation process, through the experts who belong to the evaluation team or the reference group. However, the transfer of lessons learned may only occur after the final report has been delivered. A key step in this respect is the presentation of the evaluation to specialised networks, in the form of seminars or technical articles.

Recommendations

From the outset, the evaluation manager should prioritise one or more types of use, and then optimise the evaluation process to make it as "user friendly" as possible, for instance by adjusting:

  • Membership of the reference group.
  • Evaluation questions.
  • The dissemination strategy.

EVALUATION AND DECISION MAKING

What is this about?

Evaluation provides feedback and thus facilitates decision-making, for instance by adjusting the implementation of the intervention, designing the next cycle, or helping to redefine political priorities. In this context, the formulation and follow-up of recommendations are key steps in the process.

What is the purpose?

Facilitating future decision-making is generally the main type of use of evaluation. Decision-makers' needs have, therefore, to be taken into account throughout the evaluation process to increase the chances of the evaluation being useful to them. 

What is the link between evaluation and decision-making?

Some evaluations are designed primarily to provide information for management decisions or for reform of the evaluated intervention. They are intended for operational actors in the field, management services, and the authorities responsible for the intervention or their partners. In this perspective, mid-term evaluation is to be preferred and careful attention needs to be paid to the formulation and follow-up of recommendations. These evaluations are referred to as formative. 

Other evaluations are designed primarily to learn lessons from the experience and to serve decision-making in other contexts. In this perspective, ex post evaluation is the most appropriate and careful attention must be paid to the formulation and transferability of the lessons learned. These evaluations are referred to as summative. 

Yet one has to be realistic: decision-makers will not necessarily follow the recommendations and lessons. The decision-making process almost always involves many actors and multiple factors, evaluation being only one of them.

Advice for performing a decision-making oriented evaluation

  • Hold targeted interviews, from the outset, in order to establish exactly what decision-makers expect and to anticipate the decision-making agenda.
  • Target the questions and determine the evaluation schedule in relation to the decision-making process.
  • Involve individuals in the reference group whose point of view is close to that of the decision-makers.
  • Organise quality assessment, the drafting of documents and their dissemination with the decision-makers' needs in mind.
  • Emphasise the quality of recommendations and verify their operational feasibility from the decision-makers' point of view.
  • Stick to the schedule to allow timely feedback in the decision-making process.
  • Follow up recommendations by asking the decision-makers concerned to respond to them rapidly and, after one year, to report on the decisions made accordingly.

EVALUATION AND KNOWLEDGE TRANSFER

What is this about?

Evaluation is a learning-intensive exercise in so far as lessons learned from experience can be capitalised on, and knowledge acquired can be transferred and reused by actors who have no direct link with the evaluated intervention. In this context the identification of good practices and transferable lessons is a key step.

What is the purpose of knowledge transfer?

  • Without an effort to capitalise on and manage knowledge, organisations tend to forget the lessons learned and to repeat costly efforts.
  • Evaluation reveals and validates certain knowledge of the actors involved in the interventions, which may be of use to people working in other countries, institutions or sectors.

Draft the report to promote the accumulation of knowledge

The only pages of a voluminous evaluation report that are oriented towards knowledge transfer are those concerning good practices and lessons learned. It is therefore important to identify them and to draft them with that in mind. 

Identifying a good practice means judging that it produced the expected effects in a particularly effective, efficient or sustainable way. 

Identifying a transferable lesson means judging that a given practice generally succeeds (or fails) to produce the expected effects in a given context. 

The summary of the evaluation report highlights the main lessons learned with a view to facilitating their accumulation and transfer. For each lesson, references to the body of the report serve to specify:

  • The exact content of the lesson learned.
  • The soundness of the conclusions on which it is based.
  • The context in which the lesson was learned and the contextual factors that make it transferable or not.

Mobilise available knowledge

Certain public institutions or professional networks accumulate knowledge in Intranet or Internet databases. Knowledge may also be accumulated informally through expert networks that build on lessons learned in a particular sector or on a particular topic. In the latter case, the members of the network try to validate the knowledge before capitalising on it, through meta-evaluations or expert panels. 

Not only does evaluation contribute towards the accumulation of knowledge, it also facilitates the circulation of the knowledge acquired. The evaluation team mobilises available expertise and documentation to provide an initial, partial answer to the questions asked. If transferable lessons have been learned through other evaluations and properly capitalised on, the evaluation team identifies and reuses them.

DISSEMINATION OF THE EVALUATION

What does this mean?

Dissemination concerns the final evaluation report, as well as all other means of publicising the conclusions, the lessons learned and the recommendations. Dissemination activities target the services of the Commission, European Institutions, external partners, networks of experts, the media and the wider public.

What is the purpose?

If it is done well, the dissemination process promotes the use of the evaluation. It serves to:

  • Transmit the key messages of the evaluation to decision-makers, designers, managers, operators and partners concerned by the evaluated intervention (feedback).
  • Report to the authorities and institutions concerned (accountability).
  • Further knowledge within professional networks.
  • Influence opinions throughout society at large.

Which measures need to be taken?

  • Plan dissemination when drafting the terms of reference, especially by specifying how the report will be published and what the evaluation team's role will be in that phase.
  • Throughout the process, keep in mind the quality of the evaluation and its products (report, summary, annexes), and formally assess the quality at the end of the process.
  • In the last evaluation reference group meeting, identify the main messages delivered by the evaluation and the targeted audiences.
  • After approval of the report, finalise the communication plan by choosing the messages to be highlighted and the most suitable information channels for each audience.
  • Ensure the necessary cooperation for the implementation of the dissemination plan and divide the work and responsibilities between the evaluation manager, the team (check that this is part of its mission and is specifically mentioned in the terms of reference) and the members of the reference group.

Which channels for dissemination?

The evaluation report is disseminated on the Internet and is thus accessible to all audiences. More active dissemination is also undertaken for specific audiences:

  • The report and/or its summary are sent to the services concerned and to the partners.
  • A one-page summary is written specifically for the hierarchy of the service that managed the evaluation. It highlights the main conclusions and recommendations.
  • A summary is also published on the relevant Intranet sites, with a link to the report.
  • A two- to three-page abstract is sent to the OECD for publication on its website. This summary is intended for the international development aid community. It highlights the lessons learned, where they are transferable.
  • One or more articles may be written for the general public or specialised networks.
  • Finally, the report may be presented in meetings, workshops or seminars.

Recommendations

  • Draw up the rules for quoting names in the report in relation to the intended dissemination.
  • Draw up a budget for the communication plan at the launch stage of the evaluation (mention whether the external evaluation team has to include it in its proposal).

DOCUMENTS PRESENTING THE EVALUATION

What does this mean?

Since full-length evaluation reports are available on the Internet, it is advisable to disseminate one or more shorter documents that are suited to the different audiences and present the evaluation in an attractive and accessible way.

What is the purpose?

These documents are intended to:

  • Inform decision-makers of the main conclusions and recommendations.
  • Inform the wider public, experts and the media of the existence and content of the evaluation report.
  • Facilitate knowledge transfer, particularly best practice and lessons learned.
  • Promote accountability and democratic use of evaluation.
  • Encourage people to consult the report on the Internet.

Formats and uses of the different documents

The executive summary is one of the starting points for drafting presentation documents. It is usually three pages long and does not target any particular audience. Beyond the executive summary, various documents are or may be produced:

  • A one-page note focused on conclusions and recommendations at a strategic level and intended for policy-makers.
  • A two-page abstract published by the OECD (EvInfo) to inform the development aid community. As far as possible, this abstract mentions transferable lessons learned.
  • A paragraph presenting the evaluation in synthesis documents.
  • A short article intended for a particular audience.

Advice for drafting a presentation document

- Choose the orientation of the document

  • Define the target: is the document for the public at large or for specialists and, if the latter, which ones? 
  • Select a few messages to highlight in the title and sub-titles, in relation to the targeted audience. 
  • Adjust the length of the document and the style to suit the targeted audience.

- Describe the evaluated intervention

  • Who initiated and financed the intervention? When?
  • Budget and typical outputs.
  • Expected effects and rationale.

- Describe the evaluation

  • Who initiated the evaluation? Why?
  • Evaluation scope.
  • Main evaluation questions asked.

- Main messages

  • Findings and new knowledge acquired.
  • Conclusions.
  • Transferable lessons and replicable good practices.
  • Recommendations.

- Method

Describe the methodological design in a non-technical way and make a statement about the soundness of the main messages. 

If one aspect of the method is of particular interest to the targeted audience, add a more technical section presented in a box.

Recommendations

  • Avoid the technical jargon of evaluation, the specialised vocabulary of the sector of intervention, and administrative acronyms, especially if the document is intended for the general public.
  • Focus on a few key messages and propose hyperlinks towards the executive summary and the report available on the Commission's website.

PRESENTING AN EVALUATION IN A SEMINAR

What is the purpose?

In an environment with an over-abundance of written information, an effective way of reaching potential users is to present the evaluation results in meetings or seminars.

An oral presentation of the evaluation helps to:

  • Facilitate knowledge transfer, especially best practice and lessons learned in professional networks
  • Inform people who will relay messages to decision-makers or to a broader audience
  • Draw the attention of the institutions to which one is accountable and reinforce the democratic use of evaluations
  • Discuss the conclusions.

Practical advice

A short presentation (10 to 20 minutes) is enough for the main points but more time needs to be left for questions (20 to 40 minutes). 

The presentation covers the following points:

- The evaluated intervention

  • Who conducted the intervention, when, with which resources? What were the outputs, objectives and rationale?
  • Main lines of the intervention logic

- The evaluation

  • Who decided on the evaluation and why?
  • Who carried out the evaluation and who is responsible for the conclusions?
  • Which part of the intervention was evaluated and which questions were asked?

- Messages resulting from the evaluation

Present a few particularly important messages, chosen from the point of view of the people participating in the meeting or seminar (key data, findings, conclusions, transferable lessons and/or recommendations).

- Strengths and weaknesses of the messages

Explain the methodology employed and the reasons why a particular message is sound (valid) or fragile. Give recommendations on how to use messages that are fragile.

Recommendations

  • Adapt the level of language to the audience, e.g. without jargon for a mixed audience, more technical for an audience of sector experts.
  • Indicate how to access the final report and the summary, especially via the Internet.