On 15 July 2025, nearly 300 participants tuned in live from around the world to engage with Michael Quinn Patton on "Evaluation Science". Audience engagement was high, with more than 40 live questions; to deepen the conversation, a follow-up podcast addresses many of them in greater detail. The EvalVoices official page will serve as the repository for all relevant information, resources, and news, and is where visitors can register for upcoming webinars.
Watch the webinar here and listen to the podcast episode on Spotify!
Evaluation Science? Why now?
Michael, in his papers, lectures, and voluntary work (he is a Founding Board Member of the International Evaluation Academy), makes a bold case: evaluation can and should stand as a scientific discipline, with rigorous inquiry, systematic peer review, and credibility rooted in method rather than mere procedure. His 2018 American Journal of Evaluation article "Evaluation Science" advocates recognising evaluation as a disciplined, evidence-based enterprise, essential in a world plagued by misinformation and politicised evidence.
The distinction between evaluation and evaluation science centres on the emphasis placed on methodological rigour, scientific principles, and the role of evaluative inquiry within the broader scientific landscape. For example, evaluation science is framed as a systematic inquiry into how, whether, and why interventions to change the world work, deliberately highlighting the use of qualitative, quantitative, and mixed-method scientific designs to answer socially and contextually valuable questions. It stresses transparency, peer review, accuracy, and cumulative learning as hallmarks of scientific practice.
In contrast, general evaluation may encompass a broader, more applied range of activities, including informal assessments or politically motivated reviews, that don’t necessarily adhere to rigorous scientific standards. Michael reinforces this distinction by proposing evaluation science as a subset of evaluation that adopts a scholarly stance toward inquiry, aiming not just to judge merit or inform decision-making but to contribute to theory, build generalisable knowledge, and align with evolving scientific disciplines. He argues that evaluation science brings credibility and intellectual legitimacy to the field by integrating evaluative thinking with scientific thinking, and by situating evaluation among emergent transdisciplinary sciences such as implementation science and complexity science. Thus, while all evaluation involves some form of valuing, evaluation science elevates this process through methodological precision, epistemological transparency, and scientific accountability.
Emphasising evaluation science is increasingly essential in today’s complex, data-rich, and politically polarised world. Michael suggests that evaluation science enhances the credibility, utility, and impact of evaluation by grounding it in transparent, systematic, and theoretically informed methods. He warns that in a "post-truth" era where evidence is often politicised or disregarded, evaluators must align with broader scientific efforts to defend truth, objectivity, and methodological integrity.
Positioning evaluation as science:
- strengthens its role in democratic accountability,
- builds evaluative capacity across systems, and
- fosters interdisciplinary collaboration with fields such as implementation science, systems thinking, and complexity science.
Michael’s vision goes beyond technical rigour: he advocates for evaluators to become scientist-practitioners who not only assess effectiveness but also generate knowledge, advance theory, and contribute to societal learning. In this sense, promoting evaluation science is not just about better methods; it is a strategic and ethical imperative to ensure that evaluation remains a robust, respected, and transformative force in shaping evidence-informed decision-making and sustainable change.
Key takeaways: What you can do now
1. Redefine evaluation as systematic, purpose-driven science
Adopt the mindset that evaluation is “systematic inquiry into how interventions work” with logic, evidence, and review, while staying adaptive to context. Understanding change comes from disciplined flexibility, not formulaic designs.
2. Integrate evaluation as both science and art
Combine scientific rigour with sensitivity: evaluators must cultivate judgment, listening, and contextual responsiveness alongside method. Blend structured inquiry with human-centred facilitation.
3. Embed evaluation throughout intervention design
Evaluation isn’t a postscript: it’s part of initial design. Align with the intervention’s theory of change, co-create evaluation questions with stakeholders, and maintain a feedback loop from day one all the way to decision-making. Read more in an open access chapter on constructing a living theory of change, which draws on a similar philosophy.
4. Embrace complexity with adaptive, systems-aware methods
Patton’s Developmental Evaluation approach is well suited to fluid and unpredictable settings. It emphasises real-time learning, iterative adaptation, and responsiveness in complex, emergent change settings (not rigid testing of pre-planned models).
5. Design for Use: Utilisation-focused evaluation
Plan evaluations in service of decision-makers. Engage intended users early, clarify decisions, negotiate expectations, and embed training or facilitation so findings actually drive action and do not just fill a report.
6. Raise evaluation’s visibility and institutional demand
Influence must come from both supply (rigour) and demand (usage). Encourage institutions and policymakers to value evaluation evidence, invest in capacity frameworks, and support evidence uptake, especially in resource-limited settings.
7. Communicate with intention
Move beyond dense reports. Use storytelling, infographics, podcasts, visual summaries, and dialogue formats to translate findings into meaningful action pathways for diverse audiences.
8. Safeguard ethical power dynamics & epistemic inclusion
Build evaluative designs that challenge the normalisation of dominant knowledge: actively bring marginalised perspectives to the surface, honour Indigenous epistemologies, and structure evaluation to question power rather than reinforce it.
9. Navigate change with values-driven evaluative thinking
As artificial intelligence (AI) and data analytics proliferate, ensure evaluation science asserts human judgment, transparency, and ethical standards. Algorithms should support, not replace, evaluative insight.
10. Equip the next generation
Train aspiring evaluators not just in methods but in systems-thinking, values literacy, uncertainty navigation, and adaptive sense-making, so that evaluators can lead scientific, reflective, and contextually sensitive inquiry.
Call to Action: From principled thought to practice
Through this webinar, podcast and article, we invite evaluators, funders, implementers, and engaged citizens to take deliberate steps:
- Embrace your identity as an evaluation scientist: ground your work in systematic inquiry and ethical integrity.
- Influence stakeholders: advocate for evaluation’s role at all stages of program design and institutional decision-making.
- Design evaluations for impact: structure questions, stakeholder involvement, and timing to inform real decisions.
- Experiment with communication formats: test podcasts, visuals, and dialogues to reach intended audiences.
- Partner with communities: ensure relevance and bring diverse knowledge into evidence generation.
As we navigate an increasingly complex and data-rich world, the role of evaluation as a scientific discipline becomes paramount. Michael Quinn Patton’s call to transform evaluation into a rigorous field of inquiry challenges us to ground our practices in systematic, ethical, and contextually relevant methods. By shifting our lens from evaluator to evaluation scientist, we join a growing commitment: anchoring evaluation in evidence, context, and ethics while embedding it in action.
This transformation requires us to integrate both qualitative and quantitative approaches, foster interdisciplinary collaboration, and engage stakeholders from the outset to ensure evaluations are not only methodologically sound but also impactful.
By effectively communicating findings and upholding ethical standards that include marginalised voices, we anchor evaluation in evidence and action, ensuring it remains a transformative force in shaping informed decision-making and driving sustainable change. Let’s evolve together.
More resources available for you:
- Presentation slides: download them here.
- Michael Quinn Patton’s “Evaluation Science” (2018): seminal paper in the American Journal of Evaluation – available online.