
Following earlier sessions on Evaluation Science with Michael Quinn Patton and Process Tracing with Professor Derek Beach, the third EvalVoices webinar turned to ‘Outcome Harvesting’ (OH) as an evaluation method, while also exploring how artificial intelligence (AI) may support it without losing its participatory and human-centred character. The session was led by Goele Scheers, an internationally recognised evaluation and learning expert based in Belgium, with over two decades of experience. She is one of the co-developers of Outcome Harvesting and worked closely with the late Ricardo Wilson-Grau. She has supported organisations worldwide in applying the method to understand complex change, strengthen adaptive monitoring systems, and enhance learning in dynamic programme environments.


You can watch the webinar here or listen to it here.

Goele also joined us for a follow-up podcast episode on Spotify!


Understanding Outcome Harvesting 

Outcome Harvesting (OH) is a Monitoring & Evaluation (M&E) approach that identifies, formulates, verifies and interprets outcomes that have already occurred. Rather than starting from planned activities or indicators, the method begins with a different question: what has actually changed, and how did the intervention contribute to that change?

This shift in perspective is particularly relevant in complex development and policy environments, including contexts shaped by the Humanitarian-Development-Peace nexus. In such settings, change rarely follows a linear pathway from activities to results. Instead, it often emerges through interactions between actors, evolving relationships and gradual behavioural shifts that were not anticipated in the original planning framework. 

Outcome Harvesting, therefore, focuses on observable behavioural changes in social actors, whether individuals, organisations, communities or institutions. An outcome is defined as a significant change in behaviour that has been influenced by an intervention.  

The method emphasises contribution rather than attribution, recognising that change is shaped by multiple actors and contextual factors. By capturing these behavioural shifts, Outcome Harvesting helps evaluators reveal changes that might otherwise remain invisible and unpack the “black box” between outputs and longer-term results. 

Why Outcome Harvesting matters 

Outcome Harvesting is particularly valuable in complex and adaptive environments. In areas such as governance reform, peacebuilding, advocacy or social change initiatives, meaningful results often emerge gradually and unpredictably. For example, in conflict prevention work, it may be unrealistic to claim that a project directly resolved a conflict. However, changes such as communities reconnecting through shared markets, local leaders engaging in dialogue, or previously marginalised groups gaining a voice in decision-making may represent significant outcomes that signal progress toward longer-term change. 

Outcome Harvesting helps capture these types of developments and place them within a broader narrative of change. The method also centres the voices of those closest to the change. Stakeholders, partners and programme actors are directly involved in identifying and interpreting outcomes, making the process inherently participatory. 

This participatory dimension is central to the method. Outcomes are identified together with sources and actors, evidence is interpreted with substantiators, and learning is generated collectively with evaluation users. Participation in OH is therefore not simply consultation; it is a process of co-creating an understanding of how change occurs. 

Another key benefit is its potential to support adaptive management. Because the method focuses on real changes that have already occurred, it provides insights that can inform strategy adjustments and programme improvements. In this way, OH serves not only accountability purposes but also learning and decision-making.  

The five steps of Outcome Harvesting 

Outcome Harvesting generally follows five steps. 

  1. The first step is designing the harvest. This stage sets out how the subsequent steps will be implemented, clarifies who the users of the evaluation are, and specifies how the findings will be used. Evaluation or monitoring questions are defined with these users in mind, ensuring that the process remains strongly utilisation-focused. 

  2. The second step is harvesting outcomes. Evaluators identify and formulate outcome statements through document reviews, interviews, workshops or other participatory processes. Each outcome statement typically includes three elements: a description of the change that occurred, an explanation of why the change is significant, and a description of how the intervention contributed to the outcome. 

  3. The third step is substantiation. Here, independent informants or evidence sources review selected outcome statements to verify their accuracy and deepen understanding of the changes described. This step helps ensure credibility and strengthens the evidence base of the findings. 

  4. The fourth step involves analysis and interpretation. Outcomes are organised and analysed to identify patterns, trends and connections, which are then interpreted to answer the monitoring or evaluation questions. 

  5. The fifth step focuses on supporting the use of findings. The purpose of OH is not simply to produce a report but to generate insights that inform decision-making, learning and programme adaptation. 
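The three-element outcome statement described in step two lends itself to a simple structured record. A minimal sketch in Python (the class and field names are invented for illustration, not part of the OH method itself):

```python
from dataclasses import dataclass

@dataclass
class OutcomeStatement:
    """One harvested outcome, following the three elements an
    outcome statement typically includes (hypothetical field names)."""
    description: str   # what changed, in which social actor, when and where
    significance: str  # why the change matters for the evaluation questions
    contribution: str  # how the intervention plausibly contributed

    def is_complete(self) -> bool:
        # A statement is only usable once all three elements are filled in.
        return all(s.strip() for s in
                   (self.description, self.significance, self.contribution))

# A draft statement still missing its contribution element
draft = OutcomeStatement(
    description="In 2023, two village councils resumed joint market days.",
    significance="Signals renewed inter-community cooperation.",
    contribution="",
)
print(draft.is_complete())  # False until the contribution is described
```

Keeping the three elements as separate fields makes it easy to spot incomplete drafts before substantiation in step three.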

Where AI meets Outcome Harvesting 

As generative AI tools become more accessible, evaluators are increasingly experimenting with how these technologies might support qualitative analysis and knowledge synthesis.

In the webinar, Goele explained how Large Language Models (LLMs) can assist evaluators in several stages of the OH process. For example, AI can help analyse large volumes of documents to identify potential outcomes, assist in drafting outcome statements, or suggest improvements to ensure clarity and completeness. During the analysis stage, AI tools can assist with categorising outcomes, identifying trends and generating visual representations such as outcome maps. These maps can illustrate relationships between different changes, helping evaluators better understand the pathways through which change unfolds. Goele also demonstrated these possibilities in practice by showcasing AI tools she developed herself, including the Harvest Helper and Harvest Analyst (custom GPTs built with OpenAI), as well as an interactive AI-powered outcome map developed using Claude. These examples illustrate how AI can move beyond theoretical potential to become a practical, hands-on assistant for evaluators throughout the OH process.
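The categorisation task mentioned above can be sketched even without an AI service. The toy example below groups outcome descriptions by keyword-matched themes; the themes and keywords are invented for illustration, and in a real harvest an LLM or the evaluation team would do this coding:

```python
from collections import defaultdict

# Hypothetical theme keywords; in practice these categories would come
# from the harvest design and be assigned by an LLM or by human coders.
THEMES = {
    "dialogue": ["dialogue", "talks", "negotiation"],
    "policy": ["policy", "law", "regulation"],
    "participation": ["voice", "decision-making", "participation"],
}

def categorise(outcomes):
    """Group outcome descriptions by matched theme (a naive keyword sketch)."""
    grouped = defaultdict(list)
    for text in outcomes:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(kw in lowered for kw in keywords):
                grouped[theme].append(text)
    return dict(grouped)

outcomes = [
    "Local leaders opened regular dialogue sessions with youth groups.",
    "The ministry adopted a new policy on community consultation.",
    "Marginalised groups gained a voice in district decision-making.",
]
print(categorise(outcomes))
```

The grouped output is the kind of input from which trends and outcome maps can then be drawn in step four; the human step of interpreting what the patterns mean remains untouched.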

However, AI should be viewed as a support tool rather than a replacement for human judgment. Large Language Models have important limitations. They may, for example, generate incorrect information, lack deep contextual understanding and struggle to interpret cultural nuances or political dynamics. There are also risks associated with how people use AI. Evaluators may accept AI-generated outputs uncritically, apply AI in contexts where human dialogue is essential, or prioritise efficiency over meaningful participation. For OH in particular, these risks are significant. The method relies on conversations with stakeholders, interpretation of lived experiences and collaborative sense-making. These elements cannot be automated without losing the essence of the approach.

The key message here is the importance of preserving the participatory nature of OH. Listening to those closest to the change remains fundamental to the method. Building trust with stakeholders, engaging sources and substantiators, and collectively interpreting findings require human interaction and judgment. AI can support certain analytical tasks such as document review, data organisation and pattern detection, helping free up time for deeper engagement with stakeholders. However, the meaning of outcomes ultimately emerges through dialogue, context and reflection. Used thoughtfully, AI can strengthen OH practice, but human insight, critical judgement and ethical awareness must remain at its core. 

Looking ahead 

For evaluation practitioners and other stakeholders, including evaluation commissioners and users, OH offers a practical reminder that meaningful change often appears in ways that traditional monitoring systems struggle to capture. In complex development, humanitarian and peacebuilding contexts, progress frequently emerges through behavioural shifts, new partnerships, policy dialogue or gradual institutional change rather than through predefined indicators alone. Systematically identifying and documenting these outcomes can help make the value of programmes more visible, particularly at a time when international cooperation faces increasing scrutiny. At the same time, emerging technologies such as artificial intelligence can support evaluators in tasks such as document review, organising outcome statements or identifying patterns in qualitative data. Used thoughtfully, these tools can free up time for what matters most in evaluation practice: engaging with stakeholders, interpreting context and facilitating collective reflection on how change actually happens.

Key takeaways

  • Start from change, not activities. Ask who changed behaviour and how your intervention contributed.

  • Look beyond indicators. Many important outcomes emerge outside logframes and planned results frameworks.

  • Identify changes in social actors. Ask what individuals, organisations or institutions are doing differently because of your intervention.

  • Use AI as a support tool. It can help with document analysis and pattern detection but should not replace human judgement.

  • Keep people at the centre. The most valuable insights often emerge through dialogue with those closest to the change.


