When we talk about evaluation, we often focus on its approaches and methods, how to undertake an evaluation, and the report. This is all very good and useful, but it is not the whole story. Evaluation is essentially part of a wider process of learning.
In this week’s Voices & Views, Bridget Dillon, an Evaluation Manager at DEVCO, writes a guest piece looking at how evaluation uptake can be improved.
Many agencies have recently renewed their interest in, and focus on, the use of the knowledge produced by evaluations. This, after all, is why we evaluate in the first place. DEVCO commissioned a Study on Uptake in 2014 which generated considerable discussion within the organisation and consensus about the importance of using evaluation knowledge, and many suggestions were made about how to do so. In a recent interview, Henriette Geiger, Head of Unit for the Latin America and Caribbean region at DEVCO, says she is 100% signed up to the key findings of the Study. She notes, ‘We spend too much time on doing evaluations and not enough time on uptake; it should be the other way around.’
The EU study and others recently undertaken by other agencies (e.g. the UN, World Bank and DFID), together with our collective experience, point to the need for a strong learning culture in an organisation, sustained through corporate incentives and continuously and publicly encouraged by senior management. Marcus Cornaro, DDG at DEVCO, wants ‘senior management to systematically quote evaluations in major policy speeches.’ Resources must be allocated to emphasise and facilitate learning. As a step in this direction, DEVCO has recently put in place a Learning and Knowledge Management Strategy and an Evaluation Policy.
There is much each and every one of us can, and must, do to ensure that knowledge from evaluations is better captured and productively utilised. In the EU, and in the field of international development, this is not academic; we owe it to the targeted recipients of assistance and to the taxpayers who fund co-operation.
In the following video, colleagues from the European Commission and an evaluation expert share some thoughts on how uptake can be enhanced.
The European Union has adopted the principle ‘Evaluate First!’ to remind colleagues to evaluate existing experience in the area of a proposed intervention before designing something new. It also refers to building evaluation into the very design of an intervention. Banish any notion you may have that evaluation is something considered only at the end of an intervention – it is part and parcel of its very design. Again, approach and method are important, but they need to be actively shaped by clarity about who the users are, for what purpose the evaluation is being undertaken, and when and into what decision-making process the evaluation findings will feed. Key users need to determine the core information they want and the questions on which they want to focus. Their ‘stake’ in the evaluation needs to be claimed at this early stage.
We should use the process of carrying out an evaluation as an opportunity to engage the evaluation’s stakeholders and key users so that they develop a high level of ownership. Ownership has, time and again, proved to have a strong bearing on the subsequent utilisation of the knowledge from evaluations. Stakeholders’ views should be solicited at key stages along the way – e.g. at inception, when initial findings are presented, and when draft reports are submitted – to help give nuance, depth of perspective and accuracy to the evaluation. The EU strategic evaluation of conflict prevention and peace-building (2011) is a good example: it involved a long preparation process, targeted presentations and discussion during implementation which built greater ownership, and many discussions after the report was published. The evaluation was a key element in informing subsequent guidance on the EU’s role in this area of work.
It is important that evaluation is understood not as an exercise which is ‘done to people’; it is an organised, rigorous process which involves them, their knowledge and their perspectives.
The most underplayed part of the evaluation process is the part that follows the completion of the report. Many assumptions are made about the ‘dissemination’ of an evaluation, i.e. whom it will reach and what people will do with the knowledge. We tend to think everyone picks up the same messages, but they do not.
Audiences need to be targeted, and evaluation messages tailored to their interests and presented in ways in which each particular audience can ‘receive’ them. Large tomes of information – a common feature of an evaluation – score very badly on ‘message capture’. James McNulty (EU Delegation to Zambia) highlights from his experience that communicating evaluation knowledge in a large, often dense, report written in the passive voice is a sure-fire way to ensure messages are not heard. We need to craft the messages, or ‘translate’ them, for particular audiences.
Furthermore, just getting the message across is not enough. Messages need to be actively placed, or ‘brokered’, in decision-making fora by key stakeholders. Evaluation managers can be instrumental in getting messages into policy development processes. They can ensure the information is with, and understood by, the appropriate people, in good time and in an easily absorbed form. ‘Knowledge, Policy and Power in International Development: A Practical Guide’ (Jones et al., ODI, 2012) is a good source of further information on this.
Short summaries should be made available for every evaluation. Short means short: one to three pages, no more. Think about it: would you have the time to read more? No. Why then assume that anyone else would? Short and punchy is memorable – it suits policy makers. Longer and detailed suits academics. Which is your audience?
Summarising key similarities and key differences between evaluations which cover similar terrain can be a very effective way of informing policy and practice. These summaries should be the stock-in-trade of evaluation units.
The bottom line is that knowledge sharing, and improving policy and practice, cannot be done at arm’s length; it is fundamentally a social business. Talking and sharing face to face, and using a variety of media to engage people – short videos, online chats, group discussions, webinars – are all at our disposal these days.
So there is a lot we can do. It is exciting, it is stimulating, and made much easier by the digital age we live in.
Nonetheless, it is up to each and every one of us to make it happen!
Gone are the days of evaluation tomes collecting dust on the shelves!
This article is published as part of an Evaluation Thematic Week on capacity4dev.eu. For more information, visit the Public Group on Design, Monitoring and Evaluation, where you can view an introduction to the thematic week from Philippe Loop, Head of Unit for Evaluation at DEVCO. Find out more about EC and EEAS development policy in Voices & Views: Evaluation Matters.
This collaborative piece was drafted by Bridget Dillon with support from the capacity4dev.eu Coordination Team.
These contributions are really helpful. Thank you for sharing.
You are right. Use and uptake of knowledge generated by evaluation is certainly not a new issue. The record shows, however, that it is an issue which demands constant vigilance, as organisations are prone to let it drift out of focus.
Whilst not losing sight of the points already made, and particularly the imperative to build internal incentives for learning within organisations, another useful incentive may come from the growing movement for citizen feedback. The Feedback Labs newsletters (info@feedbacklabs.org) are a good source of information here. Their October 28th 2014 circulation includes a post from the Centre for Effective Philanthropy (CEP). The CEP report on beneficiary feedback offers grounds for optimism: the philanthropic sector is making advances not only in collecting feedback, but also in using it to inform what it does. Take a look at the major foundations’ initiative ‘Fund for Shared Insight’, established to try to catalyse a new feedback culture within the philanthropy sector. The President of the World Bank has recently said that robust feedback is needed from beneficiaries on all projects where there is an identifiable beneficiary.
Some of you may be thinking: hang on, this is getting a long way from where we are, with our evaluation report in hand and trying to make use of it. Be careful of being too quick to be sure. The power of feedback can be very strong. Once you have beneficiary feedback, you have another source of pressure to take action. Sharing evaluation reports, or ‘translating’ key knowledge from a report and sharing it with beneficiaries to get their views, then bringing those views back to the decision-making table, can be a productive way of helping to enhance the uptake of an evaluation.
On another note, given that this is the Year of Evaluation, it is a good opportunity to raise awareness – at the very least – of ways of enhancing the use of evaluations in the many and varied get-togethers and conferences being planned. Don’t wait for others: get your word in and spread your good practice.
Bridget
Excellent approach!
I agree 100%, and this is not a new issue: for many years we have been talking about passing the messages of evaluations/ROM to the right audiences, to feed into the process of identifying and formulating new projects (avoiding the same mistakes and design flaws as previous, similar interventions) – in other words, about learning!
I hope this initiative brings concrete and useful results!
Congratulations on raising this most important issue, on which the essential value of evaluation – use – is based. It would be misguided, however, to focus on the use of the evaluation report. For, as I explained to Commissioners Gradin and Liikanen in 1995, and have always said: the value of evaluation is in the process of evaluation and in evaluating process. They understood this perfectly, and it was in this perspective that they penned the EC’s first communication on the organisation and systems for evaluation in 1996. Not that results may not be meaningful, but you can’t have them without process. And now that we are increasingly focused on sustainable development, it should be equally clear to us that processes are sustainable while results are not (and cannot be, by definition).
It is telling, and unsurprising, to note that, aside from the obvious "make your reports shorter and less boring" (i.e. don’t confuse publication with communication), the suggestions for "uptake" deal primarily, and quite naturally, with process.
Unfortunately, the development of evaluation in the EC has for too long been constrained by the accountability paradigm (leave that to the auditors, as I explained to them during my advisory stint at the ECA in 1998), mad indicator disease and, more recently, the impact nonsense. Add to that the regulatory impact lens brought to so-called "ex ante evaluation" (i.e. the fit-for-purpose and other checklist variations) by the SG, and we have this curious situation in which what was clearly understood at the outset of the evaluation initiative in the EC, almost twenty years ago, resurfaces in the issues and suggestions this blog raises.
So thank you for allowing us to come up for air. A couple of decades is a long time to hold your breath.
Ian C Davies, Credentialed Evaluator (CE)
James, very true.
In my experience, the chain of events and the list of institutions and partners involved in an initiative or project are very long. From the European office, to the European country office, to the European country’s NGO or institution, to the developing country’s ministries, to the NGO or institution on the ground, to potential subcontractors – the list of players is long, and not all are in the game for the same good reasons, or with the same development goals and objectives. The end result is often very poor, if not useless, and sometimes one cannot but wonder what went on along the way.
I believe that an approach such as Community-Led Total Sanitation, which organisations have taken up (see the UNICEF Field Notes, "Community Approaches to Total Sanitation"), or the one the Sirolli Institute (see http://sirolli.com/) has been practising for a number of years, would be a breath of fresh air.
I have found that a lot of initiatives do not deliver goods and services of any benefit to the communities in need. The ultimate beneficiaries are not consulted at inception or during implementation, much less at the end.
By the time an evaluation reaches the deeper levels of the chain, you will find a web of missing information, if not misinformation, that makes a useful evaluation impossible.
Regarding documentation – I tried to read the four manuals. They are very long and tedious reading, and I wonder how many field office staff members will ever read them, let alone actually apply them. There, together with motivation, lies a huge implementation problem. I ended up reading only the evaluation. I agree with the points made by the evaluation team; I just think that they were not made forcefully enough.
Mais, I think you make an important point here which, together with Bridget’s article, says something really interesting about bringing project formulation and evaluation closer together.
If we are saying that we want participants to be more deeply involved, then that has loads of implications both for project design and for the evaluation, because it requires us to think quite holistically about whose values are at the heart of the intervention and any subsequent evaluation. Our values? Or our participants’? It probably requires more engagement and more consultation with potential participants than many of us are used to.
So: having a more participatory approach is great – as Bridget says, there are many ways we can have closer engagement. But are we ready – culturally and technically – to embrace the methodologies and the inclusive approaches that this implies? If not, how do we get ready? It would be GREAT if we could have some more examples here from the community of participatory approaches used by us and by others.
I quote:
"Stakeholders’ views should be solicited at key stages along the way – e.g. at inception, when initial findings are presented, when draft Reports are submitted - to help nuance, give depth of perspective and accuracy to the evaluation."
unquote.
I suggest that inception is a key time to evaluate the intended initiative. The people who are supposed to benefit from the proposed initiative should be consulted in a non-threatening way, so that they can say "no" if that is how they feel.
The world at large is full of examples of useless interventions. People are not consulted; people are not asked "What do you need?", and fake solutions to fake problems, foisted upon them, end up being the sad result far too often.