Humanitarian Charter and Minimum Standards in Humanitarian Response


Core Standard 5: Performance, transparency and learning

The performance of humanitarian agencies is continually examined and communicated to stakeholders; projects are adapted in response to performance.


Key actions (to be read in conjunction with the guidance notes)


Key indicators (to be read in conjunction with the guidance notes)


Guidance notes

  1. Monitoring compares intentions with results. It measures progress against project objectives and indicators, and the project’s impact on vulnerability and the context. Monitoring information guides project revisions and verifies that targeting criteria are appropriate and that aid is reaching the people intended. It enables decision-makers to respond to community feedback and to identify emerging problems and trends. It is also an opportunity for agencies to provide, as well as gather, information. Effective monitoring selects methods suited to the particular programme and context, combining qualitative and quantitative data as appropriate and maintaining consistent records. Openness and communication (transparency) about monitoring information increases accountability to the affected population. Monitoring carried out by the population itself further enhances transparency, the quality of the information and people’s ownership of it. Clarity about the intended use and users of the data should determine what is collected and how it is presented. Data should be presented in a brief, accessible format that facilitates sharing and decision-making.
     
  2. Agency performance is not confined to measuring the extent of programme achievements. It covers the agency’s overall function: its relationships with other organisations, its adherence to humanitarian good practice, codes and principles, and the effectiveness and efficiency of its management systems. Quality assurance approaches such as Groupe URD’s Quality Compas can be used to assess overall agency performance.
     
  3. Impact monitoring: Increasingly, the assessment of impact (the wider effects of interventions in the short to medium term, positive or negative, intended or unintended) is viewed as both feasible and essential for humanitarian response. Impact assessment is an important emerging field; it seeks to link particular humanitarian contributions to changes in populations and in the context, changes that are typically complex and interrelated. The affected people are the best judges of changes in their lives; hence outcome and impact assessment must include people’s feedback, open-ended listening and other participatory qualitative approaches, as well as quantitative methods.
     
  4. Maintaining relevance: Monitoring should periodically check whether the programme continues to be relevant to the affected populations. Findings should lead to revisions to the programme as appropriate.
     
  5. Methods for examining performance: Different approaches suit different performance, learning and accountability purposes. A variety of methods may be used, including monitoring and evaluation, participatory impact assessments and listening exercises, quality assurance tools, audits, and internal learning and reflection exercises. Programme evaluations are typically carried out at the end of a response and recommend changes to organisational policies and future programmes. Performance monitoring and ‘real-time evaluation’ can also be carried out during a response, leading to immediate changes in policy and practice. Evaluations are usually carried out by independent, external evaluators, but internal staff members can also evaluate a programme provided they take an objective approach; this would normally mean agency staff who were not themselves involved in the response. Humanitarian evaluation uses a set of eight dimensions known as the DAC (Development Assistance Committee) criteria: relevance, appropriateness, connectedness, coherence, coverage, efficiency, effectiveness and impact.
     
  6. Sector-wide performance: Sharing information about each agency’s progress towards the Sphere minimum standards with coordination groups supports response-wide monitoring and creates an invaluable source of sector-wide performance data.