Evaluation methodological approach

Toolbox
This section is structured as follows:

  • What type of tools?
  • When to use which tools?
  • Which combination of tools?
  • What specific constraints and requirements?
  • Check list for the tool's implementation

What type of tools?

The evaluation's four tasks

Evaluations are usually divided into four tasks, which are not strictly sequential: the organisation of the evaluation (based on the wording of the evaluation questions, which lead to the overall assessment), the collection of quantitative and qualitative information, the analysis of the information collected, and the assessment of the evaluation questions, leading to the formulation of conclusions and recommendations.
Objectives of these four tasks:

  • The organisation of the evaluation contributes to the selection (or definition) of evaluation questions, judgement criteria and indicators for these criteria, so as to determine the evaluation's methodology.
  • The information collection gathers all the available primary and secondary data (quantitative and qualitative) needed to answer the evaluation questions.
  • The analysis of the information is used to assess the contribution of the assistance policy to the observed changes in the indicators.
  • The assessment of each evaluation question leads to the formulation of an answer and of conclusions for the evaluation.

The elaboration of an evaluation methodology based on the evaluation questions (which are designed to support the development of an overall assessment) is crucial to the selection of the tools used in each of the evaluation's four stages.

 

Importance of the documentary stage

Although no tool presented in this methodology is specifically dedicated to this purpose, the collection of information from the European Commission services and on-site (particularly the collection of information from the CRIS database) is a key component of the evaluation process.
The following indicative list sets out the main documentary sources (secondary data) to be consulted for a country evaluation:

  • Overall agreements, bilateral/multilateral and sector-based/thematic agreements (trade, investment, science and technology, etc.), agreements of co-operation, partnership, association, and conclusions of bilateral and multilateral conferences
  • Country/Regional Strategy Papers (CSPs and RSPs) and National/Regional Indicative Programmes (NIPs and RIPs)
  • The available annual reports and mid-term reviews
  • The European Court of Auditors' reports
  • Governmental official papers (such as the Poverty Reduction Strategy Papers (PRSP) and available sector-based strategies) and papers produced by multilateral and bilateral co-operation agencies (strategy papers, project papers, etc.)
  • The available thematic or sector-based project and programme evaluations
  • Results-Oriented Monitoring (ROM) reports on projects and programmes in progress
  • The European Commission's general regulations and all regulation documentation and political agreements covering the evaluation's period
 

Presentation of the tools

Thirteen tools have been developed. They will usually be familiar to evaluators; their specific features are described here.

Objectives diagram and impact diagram

The objectives diagram displays the hierarchy of objectives to be achieved by the strategy's implementation, from the European Union's global objective down to the activities carried out under operational programmes. The impact diagram displays the corresponding hierarchy of activities, outcomes and expected impacts; the expected impacts are the objectives expressed in terms of results.
The tool reconstructs the intervention rationale and the expected impacts; as such, it plays a crucial role in the organisation stage of evaluations of complex interventions, through the wording of the evaluation questions.

Problem diagram

Projects and programmes in development assistance aim to satisfy priority needs through the resolution of a range of problems. It is theoretically possible to construct a diagram in the shape of a tree, with a trunk (the core problem), roots (the causes) and branches (the consequences and impacts).
The problem diagram, used together with the impact diagram, validates the relevance of a project, programme or strategy by relating the expected impacts to the problems they are meant to contribute to solving.
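
To make the tree structure concrete, the sketch below represents a problem diagram as a simple data structure; the problem labels are invented for illustration and do not come from any actual evaluation.

```python
# Minimal sketch of a problem diagram ("problem tree") as a data structure.
# All problem labels are hypothetical examples, not taken from a real evaluation.
from dataclasses import dataclass, field


@dataclass
class ProblemTree:
    core_problem: str                                       # the trunk
    causes: list[str] = field(default_factory=list)         # the roots
    consequences: list[str] = field(default_factory=list)   # the branches


tree = ProblemTree(
    core_problem="Limited access to primary health care",
    causes=["High cost of consultations", "Shortage of trained staff"],
    consequences=["High morbidity among the poorest households"],
)

print(f"Core problem: {tree.core_problem}")
print("Causes (roots):", ", ".join(tree.causes))
print("Consequences (branches):", ", ".join(tree.consequences))
```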

Decision diagram

The decision diagram displays the process through which the strategic objectives and the overall co-operation policies with developing countries, as defined by the European Union's assistance agreements, are converted into short-term and medium-term bilateral co-operation decisions.
Complementing the objectives diagram, the decision diagram facilitates the analysis of the strategy in terms of internal coherence (logical succession of the choices) and external relevance (contextual elements and position of stakeholders).

Interview

The interview collects information from stakeholders and beneficiaries throughout the evaluation stages: facts and verification of facts, opinions and points of view, stakeholders' analyses and suggestions.

Focus group

The focus group is a means of discussing information, opinions and judgements that have already been collected. The tool brings out why opinions have been expressed (and the analyses supporting them) and checks their consistency. Focus groups are frequently used to collect beneficiaries' opinions about their participation in a programme and what they have drawn from it (positive and negative aspects). They are an alternative to interviews. Whatever their use, the focus group's specificity is that it collects opinions that have been moderated by an in-depth discussion, rather than spontaneous opinions.

Survey

The survey collects comparable answers from a sample of the population. When the sample is representative, the survey yields statistical measures which can be used for quantified indicators.
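
As an illustration of the statistical measures a representative sample can support, the sketch below turns hypothetical yes/no answers into an estimated proportion with an approximate 95% confidence interval (the sample size and counts are invented).

```python
# Sketch: turning survey answers into a quantified indicator with an
# approximate 95% confidence interval (normal approximation).
# The sample size and counts below are invented for illustration only.
import math

n_respondents = 400          # hypothetical representative sample
n_satisfied = 252            # hypothetical number of positive answers

p = n_satisfied / n_respondents                    # estimated proportion
std_err = math.sqrt(p * (1 - p) / n_respondents)   # standard error of the proportion
margin = 1.96 * std_err                            # ~95% margin of error

print(f"Estimated proportion: {p:.1%} +/- {margin:.1%}")
```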

Case study

Case studies are the preferred evaluation tool when "how" and "why" questions are being posed, because they allow a detailed examination of actual cases in line with the evaluation's goals. In contexts allowing or requiring it, the case(s) can be selected to yield general conclusions for the overall evaluation.

Expert panel

The expert panel is a group of independent specialists, recognised in at least one of the fields addressed by the programme under evaluation. The panel yields a collective assessment which is nuanced, argued and supported by the knowledge and experience of the experts.

SWOT

SWOT analysis combines the study of the strengths and weaknesses of an organisation, a geographical area, or a sector with the study of the opportunities and threats in their environment. Frequently used in ex ante evaluations, it can also be used in ex post evaluations to assess the orientations taken.

Context indicators

The tool situates a country through the comparison of its context indicators with those of other countries. A context indicator is a datum which provides simple and reliable information describing a variable of the context. The tool assesses development dynamics through the comparison of the level and evolution of a country's main indicators with those of other countries with similar contexts.
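
The sketch below illustrates, with invented country names and figures, the kind of comparison the tool relies on: the level and the evolution of one indicator for the country under evaluation set against comparator countries with similar contexts.

```python
# Sketch: comparing the level and evolution of a context indicator
# (e.g. a primary school enrolment rate) across comparable countries.
# All country names and figures are hypothetical.
indicator = {
    # country: (value at start of the period, value at end of the period)
    "Country under evaluation": (62.0, 71.0),
    "Comparator A": (60.0, 66.0),
    "Comparator B": (65.0, 69.0),
}

for country, (start, end) in indicator.items():
    change = end - start
    print(f"{country:<25} level: {end:5.1f}   evolution over the period: {change:+.1f}")
```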

Multicriteria analysis

In ex ante situations, multicriteria analysis is a decision-making aid. In ex post evaluations, it usefully contributes to the formulation of a judgement based on a range of heterogeneous criteria.
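
As a minimal sketch of how heterogeneous criteria can be aggregated into a single judgement aid, the snippet below computes a weighted multicriteria score; the criteria, weights and grades are hypothetical.

```python
# Sketch: weighted multicriteria scoring of two options.
# Criteria, weights and grades are hypothetical.
criteria_weights = {"relevance": 0.4, "effectiveness": 0.35, "sustainability": 0.25}

options = {
    "Programme A": {"relevance": 4, "effectiveness": 3, "sustainability": 2},
    "Programme B": {"relevance": 3, "effectiveness": 4, "sustainability": 4},
}

for name, grades in options.items():
    score = sum(criteria_weights[c] * g for c, g in grades.items())
    print(f"{name}: weighted score = {score:.2f}")
```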

Cost-effectiveness analysis

The tool identifies the economically most efficient way to fulfil an objective. It compares the efficiency of projects or programmes with comparable impacts. It usefully contributes to the formulation or validation of a judgement on the selection of the most efficient projects and programmes.
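
The sketch below shows the basic arithmetic behind the tool, using invented figures: projects with comparable impacts are ranked by cost per unit of outcome.

```python
# Sketch: cost-effectiveness comparison of projects with comparable impacts.
# Costs and outcome figures are hypothetical.
projects = {
    # project: (total cost in EUR, outcome, e.g. number of children vaccinated)
    "Project A": (500_000, 40_000),
    "Project B": (300_000, 20_000),
}

ratios = {name: cost / outcome for name, (cost, outcome) in projects.items()}
for name, ratio in sorted(ratios.items(), key=lambda item: item[1]):
    print(f"{name}: {ratio:.2f} EUR per unit of outcome")
```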

Cultural and social analysis

In country evaluations, the cultural and social analysis identifies the constitutive components of social, ethnic, religious and interest groups. It also highlights all the values shared in a society as well as its internal divisions.

 

Absence of statistical tools

In the context of assistance to developing countries, statistical data are often difficult to collect, and their relevance is limited by delays in publication and weak reliability. The available data are general and descriptive and only allow for straightforward analyses. It is therefore common practice to rely on tools based on easily available data.

Rules for the methodology's application

Generally speaking, the feasibility of field work, or the limitations of such a task, should be checked: specific conditions in the country where the study is to be carried out, such as logistical constraints and implementation costs, can constrain the choice of tools.
Before presenting the criteria for selecting the most appropriate tools, the rules for the methodology's application should be recalled:

  • No tool can, by itself, answer a question or carry out one of the four stages of the evaluation; evaluations therefore need a combination of tools.
  • Each tool is adapted to a specific stage, and sometimes to several.
  • Several tools with different approaches are used concurrently in the same stage, in order to facilitate the triangulation of information.
  • The selection of evaluation tools depends on the tasks to be carried out and on the context: the degree of difficulty of the intervention, the quality of the available expertise, the nature of the information to be collected, the multiplicity of interlocutors, etc.
  • Except for specific reasons, the tool selection must be guided by the homogeneity of the level of detail and degree of precision of the information required for the analysis.

In short, the evaluation team should use several tools and choose the most efficient combination of tools for the specific context of the evaluation.
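
As a sketch of the combination and triangulation rules above, the snippet below checks that every evaluation question is covered by at least two tools; the question labels and tool assignments are hypothetical.

```python
# Sketch: checking that every evaluation question is addressed by at least
# two tools, so that information can be triangulated.
# The question labels and tool assignments below are hypothetical.
question_tools = {
    "Q1 - relevance of the strategy": ["objectives diagram", "interview"],
    "Q2 - access of the poor to services": ["survey", "focus group", "case study"],
    "Q3 - efficiency of the aid delivery": ["cost-effectiveness analysis"],
}

for question, tools in question_tools.items():
    status = "OK" if len(tools) >= 2 else "needs an additional tool"
    print(f"{question}: {len(tools)} tool(s) -> {status}")
```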

 
When to use which tools?

The tools and the evaluation's four tasks

A first selection can be made by classifying the tools according to the four tasks of the evaluation.
The table below sets out the task for which each tool is normally used and the other tasks into which it could usefully be incorporated. Most tools have a main function and one or more secondary functions. This list is indicative only.

[Table: the tools classified by the evaluation's four tasks]

Organisation tools

The organisation of the evaluation can usefully be supported by a series of tools called organisation tools. In homogeneous project or programme evaluations, the standard organisation tool is the logical framework, which describes the overall and specific objectives of the intervention, the issues the evaluation should address and the expected outcomes. A problem diagram can usefully complement the logical framework.

Evaluations whose scope includes a range of heterogeneous objectives and activities (such as geographic evaluations) theoretically require three tools, the objectives diagram and the impact diagram being the core organisation tools. In complex programmes or strategies, the problem diagram clarifies the relevance of the objectives, identifies the goals and issues of the interventions, as well as the problems not addressed by the intervention's objectives. The decision diagram complements the objectives diagram with information on the reasons for the programme's orientations, and especially on the basis for the rejection and negotiation of options. These tools are also useful in the analysis and judgement stages (polyvalent nature of the tools).
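
As an illustration of the logical framework mentioned above, here is a minimal sketch of such a record; the field values are hypothetical placeholders, not an official template.

```python
# Sketch: a logical framework as a simple record.
# All field values are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class LogicalFramework:
    overall_objective: str
    specific_objectives: list[str]
    expected_outcomes: list[str]
    activities: list[str]


logframe = LogicalFramework(
    overall_objective="Improved health status of the population",
    specific_objectives=["Better access to primary health care"],
    expected_outcomes=["Health centres operational in the targeted districts"],
    activities=["Rehabilitation of health centres", "Training of health staff"],
)
print(logframe.overall_objective)
```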

Collection, analysis and judgement tools

During these three evaluation stages, a large range of tools is available, complementary to each other and/or polyvalent. Numerous factors must therefore be taken into account to make an efficient choice. Although no rule guarantees an optimal choice, a logical process can facilitate the development of a homogeneous methodology which will provide well-grounded answers to the evaluation questions.

Polyvalent nature of the tools

The resources allocated to an evaluation are not sufficient for the implementation of all the tools mentioned above. Choices must be made with respect to the evaluation's priorities and to making the best use of resources. The wider the scope and the questioning, the greater the risk of dispersion; the evaluation team must therefore ensure that its observations and analyses provide answers to the evaluation's most essential issues. The evaluation team should remember that several tools are usually relevant to the same evaluation stage and can be used to confirm information, and that the tools are often polyvalent and can answer several questions at the same time.

Selection criteria for the tools

Besides the tools' specific functions and their suitability for one of the four stages of the evaluation, other selection criteria should be examined.
The selection criteria set out below should guide the evaluator through the series of choices needed to develop the methodology.

  • Knowledge of the techniques. Although the majority of the tools presented are easily implemented, some may require prior experience. This is particularly the case for tools requiring specific group moderating skills, and for complex tools such as multicriteria analysis, cost-effectiveness analysis or surveys.
  • Need for specific data. The implementation of some tools requires the collection of specific data without which their conclusions would be ill-founded. For example, cost-effectiveness analysis cannot be implemented without effectiveness indicators measuring comparable projects; their availability and reliability must therefore be checked before the tool is used.
  • The prerequisites for the tool's usage. This issue is particularly important for tools implemented during the field stage in the partner country. As the implementation of such tools often generates high costs, their relevance within the overall methodology, their cost-effectiveness and their efficiency (particularly for data collection) must be secured.
  • Implementation time. Some tools, such as surveys and some types of focus group investigations and expert panels, need a preparation stage before their implementation on-site.
  • The availability of qualified local experts, capable of conducting specific tools in the local language. This issue is particularly crucial for tools requiring group moderating skills (focus groups, etc.), for which available and skilful experts are sometimes hard to find.

The table below grades each tool against these five criteria and indicates those demanding particular attention. Each criterion is awarded a grade from 1 to 3: grade 1 means that the criterion does not raise a particular problem for the tool; grade 3 means that the feasibility of the tool's implementation should be checked with regard to this criterion.

The tools' specific requirements

Tool                                     Knowledge of     Specific      Costs   Delays   Qualified
                                         the techniques   information                    local experts
Objectives diagram and impact diagram          2               1           1       1          1
Problem diagram                                2               1           1       1          1
Decision diagram                               2               1           1       1          1
Interview                                      1               1           1       1          1
Focus group                                    2               1           1       2          3
Survey                                         3               1           3       3          2
Case study                                     1               2           2       3          1
SWOT                                           1               3           2       2          1
Context indicators                             2               1           1       1          1
Expert panel                                   2               1           1       2          2
Multicriteria analysis                         2               3           2       2          2
Cost-effectiveness analysis                    3               3           3       2          1

Strong requirements (3), medium (2), low (1)

A grade of 3 does not mean that the tool should not be used. Priority should be given to the tools providing the best answers to the evaluation questions; the evaluator can then check whether they can be used in the context and with the resources available to the evaluation.
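
To illustrate how the grades can be used in practice, the sketch below encodes the table above as a data structure and flags every tool with a grade of 3 on at least one criterion for a feasibility check.

```python
# Sketch: flagging tools whose implementation feasibility should be checked
# (grade 3 on at least one criterion), using the grades from the table above.
# Criteria order: knowledge of the techniques, specific information, costs,
# delays, qualified local experts.
tool_grades = {
    "Objectives diagram and impact diagram": [2, 1, 1, 1, 1],
    "Problem diagram": [2, 1, 1, 1, 1],
    "Decision diagram": [2, 1, 1, 1, 1],
    "Interview": [1, 1, 1, 1, 1],
    "Focus group": [2, 1, 1, 2, 3],
    "Survey": [3, 1, 3, 3, 2],
    "Case study": [1, 2, 2, 3, 1],
    "SWOT": [1, 3, 2, 2, 1],
    "Context indicators": [2, 1, 1, 1, 1],
    "Expert panel": [2, 1, 1, 2, 2],
    "Multicriteria analysis": [2, 3, 2, 2, 2],
    "Cost-effectiveness analysis": [3, 3, 3, 2, 1],
}

for tool, grades in tool_grades.items():
    if max(grades) == 3:
        print(f"{tool}: check feasibility (strong requirement on at least one criterion)")
```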

Development of a homogeneous methodology

  • The development of the methodology must aim at a homogeneous and efficient approach. It is useless to schedule the implementation of sophisticated tools for one stage of the evaluation if the other stages are not addressed with similar precision or rigour, or if the other tools cannot provide the required information or use the information collected.
  • The collection stage must be carefully managed, because if it leads to incomplete findings, organising a new information collection during the field stage is challenging. The same attention should be given to the analysis and judgement tools to be implemented in the country under evaluation.
  • To do so, it is useful to assess the risk of an unsuccessful implementation for each tool, in order to plan alternative solutions if need be and to limit the impact of such a failure on the evaluation as a whole.

In essence, the methodology should be constructed with a range of available tools and take into account their advantages and limitations, the conditions for their implementation in the context of development assistance evaluations, the prerequisites for their implementation and the limits of their findings due to the context.

Analysis of the tools

Objectives diagram and impact diagram

Main function: Organisation of the evaluation
Advantages:
  • Reconstructs the intervention rationale and the expected impacts.
  • Contributes to the wording of evaluation questions.
  • Plays a crucial role in the evaluation's organisation stage for complex programmes (such as country strategy evaluations).
Potential constraints:
  • Limited access to documentation.
  • Uncertain information.
Expected outcomes:
  • Wording of evaluation questions.
  • Reconstruction of the programme's intervention rationale.
Prerequisites:
  • Compulsory documentary research.
  • Expertise of the evaluator.

Problem diagram

Main function: Organisation of the evaluation
Advantages:
  • Validates the relevance of a project, programme or strategy by relating the expected impacts to the problems they contribute to solving.
Potential constraints:
  • Limited access to documentation.
  • Uncertain information.
Expected outcomes:
  • Validation of the programme's intervention rationale.
Prerequisites:
  • Compulsory documentary research.
  • Complementary to the objectives diagram.
  • Expertise of the evaluator.

Decision diagram

Main function: Organisation of the evaluation
Advantages:
  • Facilitates the analysis of the strategy in terms of internal coherence (logical succession of the choices) and external relevance (contextual elements and position of the stakeholders).
Potential constraints:
  • Limited access to documentation.
  • Uncertain information.
Expected outcomes:
  • Explanation of the decision process and determination of strategic and/or political objectives.
  • Analysis of the strategy's coherence and relevance.
Prerequisites:
  • Compulsory documentary research.
  • Complementary to the objectives and problem diagrams.
  • Expertise of the evaluator.

Interview

Main function: Observation / Collection
Advantages:
  • Collects information from actors and beneficiaries throughout the evaluation stages.
Potential constraints:
  • Limited availability and collaboration of the stakeholders.
  • Inappropriate selection of interlocutors (lack of representativeness of all the stakeholders).
  • Subjectivity and spontaneity of the opinions expressed.
  • Poor comparability between answers.
Expected outcomes:
  • Facts and verification of facts, opinions and points of view, analyses from the actors, and suggestions.
Prerequisites:
  • Availability of the stakeholders.
  • Time planning required.

Focus group

Main function: Collection / Observation
Advantages:
  • Highlights the reasons for the opinions expressed (including the analyses supporting them) and checks their consistency.
Potential constraints:
  • Limited availability and collaboration of the stakeholders.
  • Risk of inappropriate or arbitrary selection of interlocutors (lack of representativeness of all the stakeholders).
  • Subjectivity and spontaneity of the opinions expressed.
Expected outcomes:
  • Opinions moderated by an in-depth discussion, rather than spontaneous opinions.
Prerequisites:
  • Information collection prior to the focus group (the tool aims at debating information, opinions and judgements which have already been collected).
  • Local assistance to facilitate the organisation of meetings.

Survey

Main function: Collection / Observation
Advantages:
  • Collects comparable answers from a sample of the population.
Potential constraints:
  • Limited availability and collaboration of the stakeholders.
  • Processing and analysis of the information can be difficult and requires a complex organisation.
  • Detailed knowledge of the techniques required.
Expected outcomes:
  • When the sample is representative, the survey yields significant statistical information which can be useful for quantitative indicators.
Prerequisites:
  • Expertise required for data processing.
  • Time planning required.
  • Costs.

Case study

Main function: Analysis
Advantages:
  • Answers "how" and "why" questions through a detailed examination of actual cases.
Potential constraints:
  • Situations analysed may be too specific (impossible to derive a general rule).
  • Risk of unjustified extrapolation/generalisation.
  • Risk of inappropriate/arbitrary selection of case studies (with no relationship to the evaluation's objectives).
  • Access to information.
Expected outcomes:
  • Answers to questions requiring in-depth study (how and why).
  • Where possible, cases can be selected so that the conclusions yield a general rule applying to the whole evaluation.
Prerequisites:
  • Easy access to the sites and availability of the beneficiaries.
  • Time.
  • Costs.

Expert panel

Main function: Analysis
Advantages:
  • Comprises independent specialists, recognised in at least one of the fields addressed by the programme under evaluation.
  • Yields a collective point of view which has been debated and reached through consensus.
Potential constraints:
  • Limited availability of the experts.
  • Subjectivity of the judgements/analyses.
Expected outcomes:
  • Collective, nuanced and well-argued judgements, supported by the experts' knowledge and experience.
Prerequisites:
  • Accessibility of the selected experts.
  • Information collection prior to the organisation of the expert panel.
  • Knowledge of meeting moderating techniques.

SWOT

Main function: Analysis
Advantages:
  • Used frequently in ex ante evaluations; can also be used in ex post evaluations to assess the orientations taken.
Potential constraints:
  • Excessively general and sometimes incomplete information, which does not allow the evaluator to undertake an in-depth analysis.
  • Risk of partiality in the analysis of the strengths and weaknesses.
  • Absence of control over the tool.
  • Requires a consensual approach to the analysis (perception of the threats/opportunities).
Expected outcomes:
  • Quick diagnosis of an organisation, territory, programme or sector from the analysis of its strengths and weaknesses, combined with the analysis of its environment/context in terms of opportunities and threats.
Prerequisites:
  • Information collection prior to the analysis (SWOT analysis requires very precise information, otherwise it may be limited to an abstract and intellectual exercise).
  • Knowledge of the tool.

Context indicators

Main function: Analysis
Advantages:
  • Assesses development dynamics through the comparison of the level and evolution of a country's main indicators with those of countries with similar contexts.
Potential constraints:
  • Inappropriate choice of context indicators.
  • Lack of sufficient/relevant information needed to populate the indicators.
Expected outcomes:
  • Situation of a country through a comparison of its context indicators with those of other countries.
Prerequisites:
  • Requires good data collection, carried out on a regular basis so as to measure the evolution of the context indicators.
  • May require the implementation of surveys.

Cultural and social analysis

Main function: Analysis
Advantages:
  • Identifies the components of social, ethnic, religious and interest groups, as well as the converging and diverging points underlying the common values of a society.
Potential constraints:
  • Limited availability of the experts.
  • Subjectivity of the judgements/analyses.
  • Level of abstraction of the analysis; distance from reality.
Expected outcomes:
  • Approximate notion of the social capital in a given territory.
Prerequisites:
  • Accessibility of the selected experts.
  • Information collection prior to the analysis.
  • Knowledge of meeting moderating techniques.

Multicriteria analysis

Main function: Judgement
Advantages:
  • Decision-making aid which usefully contributes to the formulation of judgements supported by a range of heterogeneous criteria in ex post evaluations.
Potential constraints:
  • Insufficiency/lack of precision of the information.
  • Lack of knowledge of the techniques.
Expected outcomes:
  • Judgements on evaluation questions.
Prerequisites:
  • Required information (prior collection and analyses to be provided).
  • Knowledge of the techniques.
  • Time.
  • Costs.

Cost-effectiveness analysis

Main function: Judgement
Advantages:
  • Compares the effectiveness of projects or programmes targeting similar impacts.
  • Usefully contributes to the formulation or validation of a judgement on the selection of the most effective projects or programmes.
Potential constraints:
  • Insufficiency/lack of precision of the information.
  • Lack of knowledge of the techniques.
Expected outcomes:
  • Identification of the activity achieving a given outcome at the least cost.
  • Judgements on the selection of projects/programmes.
Prerequisites:
  • Required information (prior collection and analyses to be provided).
  • Knowledge of the techniques.
  • Time.
  • Costs.


Which combination of tools?


The combination rationale

Some tools require the implementation of other tools prior to their use. This is the case when a tool yields information which is useful for the implementation of another tool, or when, by bringing a different viewpoint to the analysis, it strengthens or nuances the conclusions reached with another tool. Before deciding to implement complex tools (such as judgement tools), the evaluator should check whether preliminary information capable of improving their performance is available, and identify the most suitable tool for yielding such information.
The most frequent combinations of tools are listed in the table below. For example, the first line shows that objectives diagrams and impact diagrams require, or may require, the implementation of interviews and focus groups.

Tools required by other tools

The table is indicative, and other combinations can be developed in particular contexts. Usually, the collection tools (interviews, focus groups and surveys) are the most frequently combined with analysis and judgement tools, because the latter require specific information for their implementation. Special care should therefore be given to the analysis and judgement tools, so as to ensure the homogeneity of the methodology and the maximum performance of the tools throughout the evaluation.

The tools' frequency of use

The table shows that the interview is the tool most frequently used by other tools, including collection tools (which is not surprising). Conversely, case studies are seldom used by other tools and belong to the category of meta-tools, requiring the support of all the collection tools and, if need be, of the analysis tools.
Although presenting an exhaustive table of tool combinations is not possible, some recommendations can be made to ensure the homogeneity of the tools' performance throughout the evaluation:

  • Some tools are frequently used by other tools. For example, the interview is the collection tool the most often used by other evaluation tools.
  • Most of the tools are multifunctional. The decision to use a specific tool should thus be based on its conditions of use and on the constraints related to the implementation of additional tools.
  • Because of the polyvalent nature of the majority of the tools, criteria of rationalisation and optimisation must guide their selection. For example, evaluation questions should be classified according to the tools selected, and vice versa.

Example of combination

Two country evaluation tools, the survey and the focus group, were tested during the country evaluation in Benin.
The test checked whether a combination of the two tools could provide elements of an answer to the following evaluation question: "To what extent are the European Commission's health strategy and interventions adapted to the fundamental needs of the population, and particularly of the poor?"
The objective of the test was to collect and analyse information and points of view from beneficiaries, in order to assess the global trends of the service provided by the health centres. The test combined three surveys and four focus groups.
Three surveys:

  • Two structured questionnaires, one addressed to households and the other to users of the health centres, which resulted in a total of 660 respondents.
  • One open-ended questionnaire addressed to the local authorities, health professionals and people in charge of insurance companies and local associations, which resulted in a total of 100 respondents.

Four focus group investigations:

  • Two focus groups of beneficiaries (rural women) in two different villages.
  • One focus group with midwives and nurses.
  • One focus group with doctors from the private and public sector.

The last two focus groups were set up to complement the outcomes of the two focus groups with beneficiaries and those of the surveys. The goal was to confront different perspectives on the same situation, in which a few dysfunctional elements could be raised.
The confrontation of the surveys' outcomes with those of the focus groups contradicted the hypothesis underlying the reorganisation of the public health system implemented by the European Commission in Benin. It also showed that access to health care in the public sector was very limited for the poor population.
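
As a rough indication of the precision a structured questionnaire of this size can offer, the sketch below computes the approximate 95% margin of error for a proportion estimated from the 660 respondents mentioned above, assuming simple random sampling (an assumption the text does not state).

```python
# Sketch: approximate 95% margin of error for a proportion estimated from
# 660 respondents (worst case p = 0.5), assuming simple random sampling.
import math

n = 660
p = 0.5  # worst-case assumption for the proportion
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Approximate margin of error: +/- {margin:.1%}")  # about +/- 3.8%
```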

What specific constraints and requirements?  

Limitations and risks depend on the constraints and requirements specific to each of the available tools, whatever their categories.

Possible constraints

Access to information and sources of information

Most of the tools require relatively straightforward access to baseline documentation (Objectives diagram and impact diagram, Problem diagram, Decision diagram, Case study, Context indicators). Some tools strongly depend on the representative nature of the interlocutors and stakeholders, on their availability and co-operation, such as the Interview, the Focus group and the Survey.

Quality of the collected information

The quality of the collected information is crucial, and can be influenced by:

  • The insufficiency and/or approximate nature of the information (SWOT, Context indicators, Cost-effectiveness analysis, Multicriteria analysis, Cultural and social analysis)
  • Bias in the opinions expressed (Survey, Focus group)
  • Partiality in the judgements or bias in the analyses (Expert panel, Cultural and social analysis)
  • Partiality in the analysis of the strengths/weaknesses (SWOT)
  • An ill-founded generalisation (Case study)
  • Answers difficult to compare (Interview)
  • An inappropriate selection of context indicators (Context indicators)
  • An excessive level of abstraction in the analysis and too great a distance from reality (Cultural and social analysis)

Knowledge of specific techniques

A good knowledge of the tool is needed for all the tools, and especially for: the Survey, the Cultural and social analysis, SWOT, Cost-effectiveness analysis, Multicriteria analysis.
The availability and experience of acknowledged experts must be confirmed, particularly for: the Case study, the Cultural and social analysis, the Expert panel, the Focus group.
The processing of answers can quickly become unmanageable if competence is insufficient (Survey).
A lack of experience can lead to inappropriate choices of case studies, with no direct relationship to the evaluation's objectives (Case study).
Seeking consensus on the perception of threats and opportunities requires specific knowledge (SWOT).

 

Prerequisites

Human resources

The availability of acknowledged experts must in all cases be guaranteed. It is a key condition for: the Cultural and social analysis, the Expert panel, SWOT, Cost-effectiveness analysis, Multicriteria analysis and the Focus group.
The evaluator's expertise must be assessed for the organisation tools: Objectives diagram and impact diagram, Problem diagram, and Decision diagram.
Expertise is required for the data processing and analysis involved in Surveys.
Experience in meeting management and moderation is crucial for: the Cultural and social analysis, the Expert panel, the Focus group.

Costs

Implementation costs should be examined particularly for the following tools: Cost-effectiveness analysis, Multicriteria analysis, Survey, Case study.

Time span

An appropriate time schedule is important for all tools, and particularly for the implementation of: Cost-effectiveness analysis, Multicriteria analysis, Survey, Case study and Interview.

Impact of preparatory stages

When a tool requires a preparatory information collection, the time span, cost and human resources involved should not be under-estimated. If need be, access to sites and the availability of beneficiaries must be checked (Case study).


 
Check list for the tool's implementation


Check list for the evaluation team

  • Does the selection of evaluation questions allow the use of tools to efficiently provide answers?
  • Will the selection of tools help in formulating an overall assessment?
  • Does the implementation of the tools selected provide relevant answers to the evaluation's objectives?
  • Can each tool be adapted to the constraints and opportunities related to the specific conditions of the evaluation?
  • Does the organisation of each of the tools take into account all the prerequisites for their implementation?
  • Were the prerequisites for each tool (such as the competences, number of working days, expenses) precisely assessed? Are they relevant with regard to the expected outcomes? To the outcomes achieved?

Check list for contract managers

  • Are the answers provided for each of the evaluation questions supported by the implementation of an effective combination of tools?
  • Does the use of appropriate tools support the overall assessment?
  • Is the choice of each tool and their combination clearly and convincingly explained?
  • Are the available resources for the tool's implementation (experts, budget, time span) being effectively used?

Toolbox and directions for use

The aim of the evaluation is to produce operational recommendations supported by solid conclusions which are based on clear judgement criteria, solid and concrete information and rational argumentation.
Conclusions and recommendations are a means of discussing the programmes or policies with their authors and the operators involved. They must result from a rigorous process (the expert's own point of view is insufficient).
The development of a rigorous ad hoc methodology and the use of appropriate tools are crucial components of this process. This website presents a series of evaluation tools and directions for use of the toolbox, which:

  • Integrates the tools within the evaluation's methodology
  • Presents selection criteria for the tools
  • Gives guidance on an appropriate combination of tools