Core Standard 5: Performance, transparency and learning
The performance of humanitarian agencies is continually examined and communicated to stakeholders; projects are adapted in response to performance.
Key actions (to be read in conjunction with the guidance notes)
Establish systematic but simple, timely and participatory mechanisms to monitor progress towards all relevant Sphere standards and the programme’s stated principles, outputs and activities (see guidance note 1).
Establish basic mechanisms for monitoring the agency’s overall performance with respect to the agency’s management and quality control systems (see guidance note 2).
Monitor the outcomes and, where possible, the early impact of a humanitarian response on the affected and wider populations (see guidance note 3).
Establish systematic mechanisms for adapting programme strategies in response to monitoring data, changing needs and an evolving context (see guidance note 4).
Conduct periodic reflection and learning exercises throughout the implementation of the response.
Carry out a final evaluation or other form of objective learning review of the programme, with reference to its stated objectives, principles and agreed minimum standards (see guidance note 5).
Participate in joint, inter-agency and other collaborative learning initiatives wherever feasible.
Share key monitoring findings and, where appropriate, the findings of evaluation and other key learning processes with the affected population, relevant authorities and coordination groups in a timely manner (see guidance note 6).
Key indicators (to be read in conjunction with the guidance notes)
Programmes are adapted in response to monitoring and learning information.
Monitoring and evaluation sources include the views of a representative number of people targeted by the response, as well as the host community if different.
Accurate, updated, non-confidential progress information is shared with the people targeted by the response and relevant local authorities and other humanitarian agencies on a regular basis.
Performance is regularly monitored in relation to all Sphere core and relevant technical minimum standards (and related global or agency performance standards), and the main results shared with key stakeholders (see guidance note 6).
Agencies consistently conduct an objective evaluation or learning review of a major humanitarian response in accordance with recognised standards of evaluation practice (see guidance note 5).
Guidance notes

1. Monitoring: Monitoring compares intentions with results. It measures progress against project objectives and indicators, as well as the programme’s impact on vulnerability and the context. Monitoring information guides project revisions and verifies that targeting criteria are being met and that aid is reaching the people intended. It enables decision-makers to respond to community feedback and to identify emerging problems and trends. It is also an opportunity for agencies to provide, as well as gather, information. Effective monitoring selects methods suited to the particular programme and context, combining qualitative and quantitative data as appropriate and maintaining consistent records. Openness and communication (transparency) about monitoring information increase accountability to the affected population. Monitoring carried out by the population itself further enhances transparency, the quality of the information and people’s ownership of it. Clarity about the intended use and users of the data should determine what is collected and how it is presented; data should be presented in a brief, accessible format that facilitates sharing and decision-making.
2. Agency performance: Agency performance is not confined to measuring the extent of programme achievements. It covers the agency’s overall functioning: its relationships with other organisations, its adherence to humanitarian good practice, codes and principles, and the effectiveness and efficiency of its management systems. Quality assurance approaches such as Groupe URD’s Quality Compas can be used to assess overall agency performance.
3. Impact monitoring: Increasingly, the assessment of impact (the wider effects of interventions in the short to medium term, positive or negative, intended or unintended) is viewed as both feasible and essential for humanitarian response. Impact assessment is an important emerging field, linking particular humanitarian contributions to changes in populations and the context that are complex and interrelated. The affected people are the best judges of changes in their lives; hence outcome and impact assessment must include people’s feedback, open-ended listening and other participatory qualitative approaches, as well as quantitative approaches.
4. Maintaining relevance: Monitoring should periodically check whether the programme continues to be relevant to the affected populations. Findings should lead to revisions to the programme as appropriate.
5. Methods for examining performance: Different approaches suit different performance, learning and accountability purposes. A variety of methods may be used, including monitoring and evaluation, participatory impact assessments, listening exercises, quality assurance tools, audits, and internal learning and reflection exercises. Programme evaluations are typically carried out at the end of a response and recommend changes to organisational policies and future programmes. Performance monitoring and ‘real-time evaluation’ can also be carried out during a response, leading to immediate changes in policy and practice. Evaluations are usually carried out by independent, external evaluators, but internal staff members can also evaluate a programme provided they take an objective approach; this normally means agency staff who were not themselves involved in the response. Humanitarian evaluation uses a set of eight criteria known as the DAC (Development Assistance Committee) criteria: relevance, appropriateness, connectedness, coherence, coverage, efficiency, effectiveness and impact.
6. Sector-wide performance: Sharing information about each agency’s progress towards the Sphere minimum standards with coordination groups supports response-wide monitoring and creates an invaluable source of sector-wide performance data.