Willem Cornelissen, Erasmus School of Economics, Erasmus University Rotterdam

There are abundant stories about over-ambitious targets and badly chosen indicators in Performance Assessment Frameworks. So should we do away with PAFs as instruments for sector capacity development? No, says Willem Cornelissen from the Erasmus School of Economics, Erasmus University Rotterdam and trainer for the Aid Delivery Method Programme of EuropeAid, but there is ample room for improving their use in the social sectors and for developing the related capacities.

Sector support programmes make use of Performance Assessment Frameworks (PAFs) to express in a quantitative manner both the observed and expected performance of the sector over time. In recent years, experts working in the social sectors have gained vast experience in the practical implications of using PAFs. As a result, there is no lack of stories about over-ambitious targets, badly chosen indicators, lousy data or misleading reporting.

But despite the criticism, the solution is not to do away with the PAF instrument. Instead, there is ample scope for improving the performance measurement system, in particular in the social sectors.

A recent seminar on Performance Management in the Health and Education sectors in Brussels brought together social sector specialists from Delegations of the European Commission in developing and middle-income countries and consultants from international organisations. Over four days in October 2009, the seminar shed light on the concerns and limitations of current performance measurement, but also generated pragmatic suggestions for improvement.

Reasons for the increasing focus on performance measurement

The European Commission’s approach to aid delivery supports recipient governments in implementing their policies, strategies and targets. The Paris Declaration and the Accra Agenda for Action are clear about ownership of the policies in general and of the change process in particular, but also about the mutual responsibility of governments and their donors for the effects of the programmes implemented: sound performance is a responsibility shared by the government and its donors. EuropeAid has produced various guidelines and policy documents on performance measurement which detail how aid is delivered and financed.

Performance Assessment Frameworks provoke dialogue

The PAF is a tool designed to express the evolution of a particular sector and is meant to support the policy dialogue between government and donors. In practice, however, the PAF shows similarities with Monty Python’s 100-metre dash: all the athletes are ready for the start, but after ‘go’ they run in different directions.

Image: Performance Management in Health and Education Sectors

On the positive side, PAFs provoke discussion about:

• the realism of targets set,

• the selection of indicators,

• the way these indicators reflect performance,

• the data behind the indicators,

• the statistical soundness of the systems used in both the analysis and reporting.

However, the accountability responsibilities of the partner government are not equal to those of a donor, since each stakeholder answers to its own constituency and has to deal with its own institutional hierarchy. Hence the figures of a PAF are not always interpreted in the same way by partner countries and donors, while ‘mutual responsibility’ is usually not (yet) assumed.

Ideally, a PAF shows the observed changes in the outcomes of a sector policy, but in practice the time span required to bring about these changes exceeds the lifetime of the support programme. The Millennium Development Contracts, started in nine countries in 2009, seek to address this problem: the financing period has been extended to six years, while the decision about the disbursement of the variable tranche is taken only after three years.

Another common problem is that the performance of complex sectors has to be expressed through only a few indicators. In general budget support in particular, this may oversimplify reality, even if one acknowledges that policy makers and donors are interested only in ‘the big picture’.

Reasons for a closer look

During the four-day seminar, the experts discussed an array of topics and were guided through issues like:

• the relation between policies, targets and indicators

• performance-based budgeting

• alternative approaches to performance measurement

• capacity development for performance measurement

• management information systems

• international agreements on data collection in relation to national systems

• the statistics behind the data and the quality aspects of data collection

• reporting systems.

All topics were illustrated by field examples, presented by the participants. Each presentation showed a different aspect of the complexities of performance measurement, and included examples from Ethiopia, Kenya, Mauritius, Mozambique, South Africa, Tanzania and Zambia.

Case studies were also presented by international organisations. The Health Metrics Network (affiliated with the World Health Organisation) showed how partner countries could be assisted in strengthening their management information systems in the health sector. Transparency International's 'Africa Education Watch' initiative provided a quantitative look ‘behind the indicators’ in the education sector, using its ‘corruption perception’ perspective.

Promising suggestions for improvement

The participants put forward a series of suggestions for improvement at various levels of performance management. Some of these suggestions could be implemented at rather short notice; others may require changes in systems and procedures, and hence more time.

• It was recommended that performance measurement as a subject should be ‘mainstreamed’ and ‘aligned’ with existing guidance materials. In addition, each sector support programme that monitors its effectiveness through quantified data should explicitly set aside resources (finance, manpower) to enable the partner government’s institutions to provide the data, as well as to process, analyse and report on them.

• The above-mentioned support aims at strengthening the capacity of the National Statistical Offices (NSOs), in particular in relation to line ministries such as Health and Education. Despite recent international efforts to support NSOs, their role in development has been underestimated for many years. In various countries, structural changes are required in terms of legislation, assurance of independence and information technology. A dedicated resource of materials on support to national statistical offices could be developed.

• As an extension of support to NSOs, strengthening the knowledge base of Educational Management Information Systems (EMIS) according to international standards was another area of interest. This builds on the experience of the Health Metrics Network (see www.healthmetricsnetwork.info), as initiated by the World Health Organisation.

• More awareness is needed of the usefulness of auditing the data held in health and education sector Management Information Systems (MIS). It was recommended to develop a simple package of materials explaining the steps required for a quality audit of health and education sector MIS data (including, for example, a standard ToR).

• The assessment of the relations between policies, strategies and targets could be enhanced by periodically (every three years) undertaking more rigorous and structured 'sector-wide assessments'. This process would form part of the 'joint assessments' that have been conducted in various countries, and would involve more stakeholders, including those from civil society. Such an assessment goes beyond the Joint Annual Sector Review, which is (in most cases) restricted to the public sector and covers a one-year period.

• The drafting and use of sector PAFs throughout the implementation cycle could be strengthened by developing guidance notes that set out more precisely the definition of each indicator, the sources of information, the risks attached, the data collection systems applied and the frequency of measurement (together usually called a Data Dictionary), embedded (as far as possible) in the National Minimum Data Set.
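Two of the suggestions above, the standard package for MIS data audits and the Data Dictionary for PAF indicators, can be made concrete with a minimal sketch. The indicator name, field names and figures below are purely illustrative assumptions, not taken from any official EuropeAid guidance or ToR.

```python
# Hypothetical sketch: a Data Dictionary entry for one PAF indicator,
# plus an elementary data-quality check of the kind an MIS data audit
# would formalise. All names and figures are illustrative only.

# A Data Dictionary entry documents, per indicator: its definition, the
# source of information, the collection system, the measurement frequency
# and the risks attached.
indicator = {
    "name": "net_primary_enrolment_rate",
    "definition": "Pupils of official primary age enrolled, as a share "
                  "of the primary-age population",
    "source": "Annual school census (EMIS)",
    "collection_system": "School returns aggregated by district offices",
    "frequency": "annual",
    "risks": "Late or incomplete returns from remote districts",
}

REQUIRED_FIELDS = {"definition", "source", "collection_system",
                   "frequency", "risks"}

def missing_fields(entry):
    """Report which Data Dictionary elements an entry fails to document."""
    return sorted(REQUIRED_FIELDS - set(entry))

def audit_records(records):
    """Run elementary completeness and consistency checks on MIS records."""
    issues = []
    for rec in records:
        rid = rec.get("school_id", "<unknown>")
        if rec.get("enrolled") is None:
            issues.append((rid, "missing enrolment"))          # completeness
        elif rec.get("enrolled_girls", 0) > rec["enrolled"]:
            issues.append((rid, "girls enrolled exceed total"))  # consistency
    return issues

sample = [
    {"school_id": "A01", "enrolled": 420, "enrolled_girls": 200},
    {"school_id": "A02", "enrolled": None},
    {"school_id": "A03", "enrolled": 150, "enrolled_girls": 180},
]
print(missing_fields(indicator))  # []
print(audit_records(sample))
```

A real audit package would of course cover far more (timeliness, double counting, sampling checks), but even checks this simple make disagreements about 'the data behind the indicators' discussable in concrete terms.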
