Analysis
This section is structured as follows:
- Key elements
- Analysis strategy
- Counterfactual
- External factors
- Exploratory and confirmatory analysis
- Validity
KEY ELEMENTS

An analysis is required to convert data into findings, which themselves call for a judgement in order to be converted into conclusions. The analysis is carried out on a question-by-question basis, within the framework of an overall design cutting across all questions.

Data, evidence and findings

Any piece of qualitative or quantitative information collected by the evaluation team is called data. A piece of information qualifies as evidence as soon as the evaluation team assesses it as reliable enough. Findings establish facts derived from evidence through an analysis, and some findings are specific in that they include cause-and-effect statements. Findings do not include value judgements, which are embedded in conclusions only.
Strategy of analysis

Four strategies can be considered. The first is the lightest one and may fit virtually all types of questions; the last three are better at answering cause-and-effect questions.

The choice of the analysis strategy is part of the methodological design. It depends on the extent to which the question raises feasibility problems. It is made explicit in the design table.
Data processing

The first stage of analysis consists in processing information with a view to measuring or qualifying an indicator, or to answering a sub-question. Data are processed through operations such as cross-checking, comparison, clustering, listing, etc.

Provisional findings emerge at this stage of the analysis. Further stages aim to deepen and to strengthen the findings.
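As an illustration of such processing operations, here is a minimal Python sketch (pandas assumed; district names and literacy figures are invented) that lists, clusters and compares records in order to measure a change in an indicator:

```python
# Minimal sketch of the data-processing stage: raw records are clustered by
# district and compared over time to qualify an indicator. Figures invented.
import pandas as pd

records = pd.DataFrame({
    "district": ["A", "A", "B", "B"],
    "year":     [2019, 2023, 2019, 2023],
    "literacy": [0.58, 0.66, 0.61, 0.70],
})

# Cluster, then compare the indicator at the baseline and at the evaluation date.
indicator = records.groupby(["district", "year"])["literacy"].mean().unstack("year")
indicator["change"] = indicator[2023] - indicator[2019]
print(indicator.round(3))
```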
Exploration

The exploratory analysis aims to improve the understanding of all or part of the evaluated area, especially when knowledge is insufficient and expertise is weak, or when surprising evidence does not fit available explanations. It delves deeper and more systematically into the collected data in order to discover new plausible explanations.

The exploratory stage may not be needed for all questions. When such an analysis is carried out, brainstorming techniques are appropriate. The idea is to develop new plausible explanations, not to assert them.
Explanation

This next stage ensures that a sufficient understanding has been reached. Depending on the context and the question, the explanation builds upon one or several bases. A satisfactory explanation (also called an explanatory model) is needed for finalising the analysis.
Confirmation

The last stage of the analysis is devoted to confirming the provisional findings through a valid and credible chain of arguments. This is the role of the confirmatory analysis. To have a finding confirmed, the evaluation team undertakes a systematic self-criticism by all possible means, e.g. statistical tests, a search for biases in data and analyses, and checks for contradictions across sources and analyses.
ANALYSIS STRATEGY

Cause-and-effect analysis

What does this mean? Approach through which the evaluation team asserts the existence of a cause-and-effect link, and/or assesses the magnitude of an effect.
Attribution or contribution

- Attribution analysis: aims to assess the proportion of observed change which can really be attributed to the evaluated intervention. It involves building a counterfactual scenario.

- Contribution analysis: aims to demonstrate whether or not the evaluated intervention is one of the causes of observed change. It may also rank the evaluated intervention among the various causes explaining the observed change. Contribution analysis relies upon chains of logical arguments that are verified through a careful confirmatory analysis.
Analytical approaches

- Counterfactual: The approach compares a "policy-on" scenario with a "policy-off" (counterfactual) scenario. The "policy-on" line shows the observed change, measured with an impact indicator, between the beginning of the evaluated period (baseline) and the date of the evaluation; for instance, local employment has increased, as has literacy. The impact accounts for only the share of this change that is attributable to the intervention.
- Case studies: Another analytical approach relies on case studies. It builds upon an in-depth inquiry into one or several real-life cases selected in order to learn about the intervention as a whole. Each case study monograph describes observed changes in full detail. A good case study also describes the context in detail, together with all significant factors which may explain why the changes occurred or did not occur.

- Causal statements: The approach builds upon documents, interviews, questionnaires and/or focus groups. It consists in collecting stakeholders' views about causes and effects. Statements by various categories of stakeholders are then cross-checked (triangulated) until a satisfactory interpretation is reached. A panel of experts may be called to help in this process.
- Meta-analysis: This approach builds upon available documents. In performing meta-analyses, the evaluation team needs to (1) assess the quality of the information provided by the reviewed documents, and (2) assess its transferability to the context of the evaluation underway.
- Generalisation: The first two approaches (counterfactual and case studies) have the best potential for obtaining findings that can be generalised (see external validity), although in a different way. Findings can be said to be of general value when all major external factors are known and their role is understood. Counterfactual approaches build upon explanatory assumptions about major external factors, and strive to control such factors through statistical comparisons involving large samples. Case studies strive to control external factors through an in-depth understanding of cause-and-effect mechanisms.
Recommendation

The evaluation team should be left with the choice of its analysis strategy and analytical approach.
Cause-and-effect questions

What does this mean? Cause-and-effect questions pertain to the effects of the evaluated intervention. These questions call for an observation of change, and then either an attribution of observed change to the intervention or an analysis of the intervention's contribution to observed changes.

Causality and evaluation criteria

Effectiveness and impact questions tend to be cause-and-effect questions in the sense that they link the evaluated intervention (the cause) to its effects. Other questions may involve causes and effects, but only in a prospective and logical manner; in such cases, the evaluation team is not expected to assert the existence of cause-and-effect links and/or to assess the magnitude of actual effects.

Questions pertaining to the EC value added may be cause-and-effect questions if the evaluation team attempts to assert the existence or the magnitude of an additional impact, due to the fact that the intervention took place at European level.

Caution! Questions which do not require a cause-and-effect analysis do nevertheless call for a fully-fledged analysis covering all or part of data processing, exploration, explanation and confirmation.
Counterfactual

What does this mean? The counterfactual, or counterfactual scenario, is an estimate of what would have occurred in the absence of the evaluated intervention.
What is the purpose? By subtracting the counterfactual from the observed change (factual), the evaluation team can assess the effect of the intervention, e.g. effect on literacy, effect on individual income, effect on economic growth, etc.
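A minimal sketch of this subtraction, with purely hypothetical literacy figures (the hard part in practice is estimating the counterfactual, not the arithmetic):

```python
# Counterfactual subtraction with invented figures for illustration only.
baseline = 62.0            # literacy rate (%) at the start of the evaluated period
observed_end = 74.0        # observed literacy rate (%) at the date of the evaluation
counterfactual_end = 70.0  # estimated rate (%) had the intervention not taken place

observed_change = observed_end - baseline               # factual change: +12 points
counterfactual_change = counterfactual_end - baseline   # change without intervention: +8
impact = observed_change - counterfactual_change        # attributable effect: +4

print(f"Observed change: {observed_change:+.1f} points")
print(f"Counterfactual change: {counterfactual_change:+.1f} points")
print(f"Impact attributable to the intervention: {impact:+.1f} points")
```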
Comparison group

One of the main approaches to counterfactuals consists in identifying a comparison group which resembles beneficiaries in all respects, except for the fact that it is unaffected by the intervention. The quality of the counterfactual depends heavily on the comparability of beneficiaries and non-beneficiaries. Four approaches may be considered for that purpose.

Randomised control group

This approach, also called experimental design, consists in recruiting and surveying two statistically comparable groups. Several hundred potential participants are identified and asked to participate or not in the intervention, on a random basis. The approach is fairly demanding in terms of preconditions, time and human resources. When the approach is workable and properly implemented, most external factors (ideally all) are neutralised by statistical rules, and the only remaining difference is participation in the intervention.
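A sketch of the underlying logic, with simulated outcomes standing in for survey results (scipy is assumed for the significance test; the +4-point effect is invented):

```python
# Hypothetical sketch of an experimental design: random assignment followed by
# a difference in means and a t-test. Outcomes are simulated for illustration.
import random
from scipy import stats

random.seed(42)

# Randomly assign 600 potential participants to intervention or control.
people = list(range(600))
random.shuffle(people)
treated, control = set(people[:300]), people[300:]

# Simulated outcomes: randomisation neutralises external factors, so the only
# systematic difference is an assumed +4-point effect for participants.
outcome = {p: random.gauss(70, 10) + (4 if p in treated else 0) for p in people}

treated_scores = [outcome[p] for p in treated]
control_scores = [outcome[p] for p in control]

# The control group estimates the counterfactual; the difference in means
# estimates the impact, and the t-test checks it is not a chance result.
effect = sum(treated_scores) / len(treated_scores) - sum(control_scores) / len(control_scores)
t_stat, p_value = stats.ttest_ind(treated_scores, control_scores)
print(f"Estimated impact: {effect:.1f} points (p = {p_value:.4f})")
```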
Adjusted comparison group

In this approach a group of non-participants is recruited and surveyed, for instance people who have applied to participate but who have been rejected for one reason or another. This approach is also called quasi-experimental design. In order to allow for a proper comparison, the structure of the comparison group needs to be adjusted until it is similar enough to that of participants as regards key factors like age, income, or gender. Such factors are identified in advance in an explanatory model. The structure of the comparison group (e.g. per age, income and gender) is adjusted by over- or under-weighting appropriate members until both structures are similar.
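A toy sketch of the re-weighting step, controlling a single hypothetical factor (age band); all figures are invented:

```python
# Hypothetical sketch of adjusting a comparison group by over- or under-weighting
# its members so that its structure matches that of the participants.
from collections import Counter

# (age_band, outcome) records for participants and for a raw comparison group.
participants = [("young", 75), ("young", 72), ("old", 68), ("young", 74)]
comparison   = [("young", 66), ("old", 64), ("old", 62), ("old", 63)]

# Target structure: share of each age band among participants (3/4 young, 1/4 old).
target_share = {b: n / len(participants) for b, n in Counter(b for b, _ in participants).items()}
# Actual structure of the comparison group (1/4 young, 3/4 old).
actual_share = {b: n / len(comparison) for b, n in Counter(b for b, _ in comparison).items()}

# Over-weight under-represented members and under-weight the others.
weights = [target_share[band] / actual_share[band] for band, _ in comparison]
outcomes = [y for _, y in comparison]

weighted_mean = sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)
print(f"Adjusted comparison-group outcome: {weighted_mean:.2f}")
```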
Matching pairs

In this approach a sample of non-participants is associated with a sample of beneficiaries on an individual basis. For each beneficiary (e.g. a supported farmer), a matching non-participant is found with a similar profile in terms of key factors which need to be controlled (e.g. age, size of farm, type of farming). This approach often has the highest degree of feasibility and may be considered when other approaches are impractical.
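A toy sketch of the matching step; the profiles, factors and outcomes are invented, and a real study would normalise the factors or match on a propensity score:

```python
# Hypothetical sketch of matching pairs: each beneficiary (a supported farmer)
# is matched to the non-participant with the closest profile on key factors.

beneficiaries = [
    {"id": "B1", "age": 45, "farm_ha": 30, "outcome": 75},
    {"id": "B2", "age": 58, "farm_ha": 12, "outcome": 68},
]
non_participants = [
    {"id": "N1", "age": 44, "farm_ha": 28, "outcome": 70},
    {"id": "N2", "age": 60, "farm_ha": 10, "outcome": 66},
    {"id": "N3", "age": 30, "farm_ha": 50, "outcome": 72},
]

def distance(a, b):
    # Crude profile distance over the factors to be controlled (age, farm size).
    return abs(a["age"] - b["age"]) + abs(a["farm_ha"] - b["farm_ha"])

gaps = []
for b in beneficiaries:
    match = min(non_participants, key=lambda n: distance(b, n))
    gaps.append(b["outcome"] - match["outcome"])
    print(f'{b["id"]} matched with {match["id"]}: gap = {gaps[-1]:+d}')

print(f"Average estimated impact: {sum(gaps) / len(gaps):+.1f}")
```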
Generic comparison

The counterfactual may be constructed in abstracto by using statistical databases. The evaluation team starts with an observation of a group of participants. For each participant, the observed change is compared to what would have occurred for an "average" individual with the same profile, as derived from an analysis of statistical databases, most often at national level.
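A toy sketch, where invented national-average changes by profile stand in for the statistical database:

```python
# Hypothetical sketch of a generic comparison: each participant's observed
# change is compared with the average change for the same profile, as read
# from a national statistical database. All figures are invented.

# National average income growth (%) by profile, e.g. (sector, region).
national_average_change = {
    ("farming", "north"): 1.5,
    ("farming", "south"): 2.0,
}

participants = [
    {"profile": ("farming", "north"), "change": 4.0},
    {"profile": ("farming", "south"), "change": 3.0},
]

# The "average" individual with the same profile serves as the counterfactual.
effects = [p["change"] - national_average_change[p["profile"]] for p in participants]
print(f"Estimated average effect: {sum(effects) / len(effects):+.2f} points")
```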
Comparative approaches

Different forms of comparison exist, each with pros and cons, and varying degrees of validity.
Strengths and weaknesses in practice

A well-designed comparison group provides a convincing estimate of the counterfactual, and therefore a credible base for attributing a share of the observed changes to the intervention. A limitation with this approach stems from the need to identify key external factors to be controlled. The analysis may be totally flawed if an important external factor has been overlooked or ignored. Another shortcoming stems from the need to rely upon large enough samples in order to ensure statistical validity. It is not always easy to predict the sample size which will ensure validity, and it is not infrequent to arrive at no conclusion after several weeks of a costly survey.
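On the sample-size point, a rough power calculation can flag the risk before a costly survey is fielded. A sketch using statsmodels; the expected effect size (0.3, a modest standardised effect) and the 80% power target are illustrative assumptions, not recommended values:

```python
# Rough sample-size check before launching a survey, using a standard power
# calculation. Effect size and power target are assumptions for illustration.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")
# Under these assumptions, roughly 175 people per group are needed; if the
# achievable sample is much smaller, the comparison may end inconclusive.
```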
Modelling

The principle is to run a model which correctly simulates what actually occurred in reality (the observed change), and then to run the model again with a set of assumptions representing a "without intervention" scenario. In order to be used in an evaluation, a model must include at least all the relevant causes and effects which are to be analysed. Complex models (e.g. macro-economic ones) may include hundreds of causes, effects, mathematical relations and adjustable parameters, and complex cause-and-effect mechanisms such as causality loops. When using a model, the evaluation team proceeds in three steps: the model is calibrated so that it reproduces the observed change; it is run again under "without intervention" assumptions; and the difference between the two runs is interpreted as the effect of the intervention.

Modelling techniques are fairly demanding in terms of data and expertise. The workload required for building a model is generally not proportionate to the resources available to an evaluation. The consequence is that the modelling approach is workable only when an appropriate model and the corresponding expertise already exist.
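A deliberately tiny sketch of the with/without logic, using an invented two-equation model (real models are far larger and must first be calibrated against observed data):

```python
# Toy sketch of the modelling approach: a simple "economy" (all coefficients
# invented) is run with and then without the intervention.

def run_model(years, subsidy):
    """Simulate income growth; `subsidy` represents the intervention."""
    income = 100.0
    for _ in range(years):
        investment = 0.2 * income + subsidy   # cause: the subsidy raises investment
        income += 0.1 * investment            # effect: investment raises income
    return income

factual = run_model(years=5, subsidy=10.0)        # "with intervention" run
counterfactual = run_model(years=5, subsidy=0.0)  # "without intervention" scenario
print(f"Modelled effect of the intervention: {factual - counterfactual:+.1f}")
```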
EXTERNAL FACTORS

What are they?

Factors which are embedded in the context of the intervention and which hinder or amplify the intended changes while being independent from the intervention itself. External factors are also called contextual, exogenous or confounding factors.

Why are they important?

A cause-and-effect analysis is credible only if the main external factors are known and their role is understood; as noted above, an analysis may be totally flawed if an important external factor has been overlooked or ignored.
Typical examples

External factors may explain participation in the intervention, the achievement of specific impacts, or the global impact of the intervention. When dealing with such external factors, the evaluation may usefully consult the contextual indicators that are available on the web.
How can they be identified?

In a given evaluation, external factors are potentially numerous and it is crucial to highlight the most important ones. Identifying them is one of the main purposes of the exploratory analysis.
Recommendations

Do not try to identify all possible external factors when clarifying the intervention logic in the structuring phase of the evaluation. They are simply too numerous. This task should be undertaken only when working on a given evaluation question, and only if the question involves a cause-and-effect analysis.
EXPLORATORY AND CONFIRMATORY ANALYSIS

Exploratory analysis

What does this mean? If necessary, the evaluation team delves into the collected data in order to discover new plausible explanations.

What is the purpose? To improve the understanding of all or part of the evaluated area, especially when knowledge is insufficient or when surprising evidence does not fit available explanations.

How to carry out the exploratory analysis: The analysis explores the set of data (quantitative and qualitative) with a view to identifying structures, differences, contrasts, similarities and correlations (a minimal sketch follows below). The approach is systematic and open-minded. Brainstorming techniques are appropriate. Ideas emerge through the first documentary analyses, interviews, and meetings. The exploration may continue through the field phase.
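A minimal sketch of such an exploration with pandas; the variables and figures are invented:

```python
# Hypothetical sketch of exploring survey data for structures, contrasts and
# correlations that may suggest new plausible explanations.
import pandas as pd

df = pd.DataFrame({
    "region":        ["north", "north", "south", "south", "south"],
    "participated":  [1, 0, 1, 1, 0],
    "income_change": [4.1, 1.2, 3.5, 3.9, 2.0],
    "training_days": [12, 0, 9, 14, 1],
})

# Look for correlations among quantitative variables...
print(df[["participated", "income_change", "training_days"]].corr().round(2))

# ...and for contrasts between groups (does the pattern differ by region?).
print(df.groupby(["region", "participated"])["income_change"].mean())
```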
Confirmatory analysis

What does this mean? Provisional findings progressively emerge during the first phases of the evaluation team's work. They need to be confirmed by sound and credible controls. That is the role of the confirmatory analysis.

What is the purpose? To confirm the provisional findings through a valid and credible chain of arguments, so that they can withstand criticism.

How is a confirmatory analysis performed? For a finding to be confirmed, it is systematically criticised by all possible means, e.g. statistical tests, a search for biases in data and analyses, and checks for contradictions across sources and analyses.
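A sketch of one such control: a chi-square test (scipy assumed) checking whether two hypothetical sources contradict a provisional finding; all counts are invented:

```python
# Hypothetical sketch of one confirmation means: checking that two independent
# sources (a survey and administrative records) do not contradict a provisional
# finding such as "about two thirds of beneficiaries improved their income".
from scipy.stats import chi2_contingency

#                 improved, not improved
survey_counts  = [130, 70]   # 65% improved according to the survey
records_counts = [118, 82]   # 59% improved according to administrative records

chi2, p_value, _, _ = chi2_contingency([survey_counts, records_counts])
if p_value < 0.05:
    print(f"Sources contradict each other (p = {p_value:.3f}); finding not confirmed.")
else:
    print(f"No significant contradiction between sources (p = {p_value:.3f}).")
```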
VALIDITY

What does this mean?

Validity is achieved when the method limits biases (internal validity), when findings can be generalised (external validity), and when indicators faithfully reflect what is to be evaluated (construct validity), as detailed below.

What is the purpose?

A lack of validity may expose the evaluation to severe criticism from those stakeholders who are dissatisfied with the conclusions and recommendations, and who will point out any weaknesses they may have found in the reasoning. Validity is part of the quality criteria. It should be given an even higher level of attention when the intended users include external stakeholders with conflicting interests.
External validity

Quality of an evaluation method which makes it possible to obtain findings that can be generalised to other groups, areas, periods, etc. External validity is fully achieved when the evaluation team can make it clear that a similar intervention implemented in another context would have the same effects under given conditions. Only strong external validity allows one to transfer lessons learned. External validity is also sought when the evaluation aims at identifying and validating good practice.
Internal validity

This is the quality of an evaluation method which, as far as possible, limits biases imputable to data collection and analysis. Internal validity is fully achieved when the evaluation team provides indisputable arguments showing that the findings derive from the collected facts and statements. Internal validity is a major issue in the particular case of cause-and-effect questions: when striving to demonstrate the existence and/or to assess the magnitude of an effect, the evaluation team is exposed to risks such as overlooking an important external factor or introducing biases into the data collection and analysis.
Construct validity

This is the quality of an evaluation method which faithfully reflects the changes or needs that are to be evaluated. Construct validity is fully achieved when key concepts are clearly defined and when indicators reflect what they are meant to. Construct validity is threatened if the evaluation team does not fully master the process of shifting from questions to indicators. It is also at risk when the evaluation team uses indirect evidence like proxies.