
Re-examining Monitoring and Evaluation in the Context of Managing for Development Results in Africa

Moderated by Adeline Sibanda

Introduction

Experience shows that Monitoring and Evaluation (M&E) systems in Africa are largely copied and pasted from donor countries. These systems have been instrumental in measuring and tracking development results on the continent, yet current M&E practice is not immune from criticism. One criticism is that it does not adequately address Africa’s development challenges: powerholders and funders determine what and how to evaluate and who is involved in the process, while citizens’ voices are inadequately represented. Another shortcoming is that African values, as well as African ways of constructing knowledge and measuring results, are not reflected in these systems. There is therefore a need to re-examine the M&E paradigms being used to measure development results in Africa. In light of Agenda 2063 and Agenda 2030, it is vital to set up effective monitoring and evaluation systems that are owned and led by Africans.

In this discussion, we would like to interrogate the current M&E systems and how they can be improved to contribute towards managing for development results in Africa. Your responses to the following questions are essential to aligning M&E systems with the development agenda in Africa.

Questions

  1. How could M&E strengthen Managing for Development Results (MfDR) in Africa?
  2. Which countries in Africa have succeeded in setting up effective M&E Systems? What are the critical elements of these systems?
  3. How do you think an Africa-led M&E system should differ from the current M&E systems?

5 thoughts on “Re-examining Monitoring and Evaluation in the Context of Managing for Development Results in Africa”

  1. Dear Adeline,

    Thank you very much for initiating this debate. Your first question is how M&E can strengthen Managing for Development Results in Africa. My first reaction, and I know this may generate a huge debate, concerns the packaging of monitoring and evaluation almost as one activity. It is always taken for granted that monitoring and evaluation must go together, and in the process we “mis-assign” responsibilities and assignments during implementation; we even talk about a single monitoring and evaluation report.

    Ideally, monitoring is an exercise that takes place during implementation and is best undertaken by line managers. Monitoring gives information on status that can be used to adjust implementation activities, so that information is required in real time! A monitoring and evaluation expert, as the role exists today, may gather information on implementation but may be unable to act on it because they are not experts in the work itself. Information on weather patterns is only useful to pilots because, as one pilots the plane, the first officer or co-pilot monitors the instruments and advises the pilot in real time!

    For example, in my organization monitoring and evaluation is domiciled in a Directorate we call Compliance and Quality Assurance. It releases its Compliance Report at the end of the year and, given its volume, other Directors rarely have time to read it because not everything in that report affects them directly. So, in my view, line managers must be actively involved in monitoring, what is now called supervision, or what I would call enhanced supervision. This weakness explains why, in many public service environments, Staff Performance Appraisal is one of the weakest links in Performance Management: many supervisors want to do it at the end of the performance period, which is in essence an evaluation.

    My view is therefore that we need to change our approach to MONITORING as a function and put emphasis on it, because it is what allows corrective measures to be taken before it is too late. Most monitoring and evaluation reports come as post-mortems, after the horse has bolted, and are therefore of little use to the project they are prepared for.

    Evaluation, on the other hand, ideally takes place at specific intervals or milestones when significant progress is expected to have been made. These interval evaluations may also be seen as monitoring exercises, but they are normally about assessing progress against targets or objectives. If carried out at intervals before the end of the project, they provide useful data for adjusting implementation appropriately.

    So my view is that we NEED TO REALIGN MONITORING with implementation.

    1. Thanks, Sylvester, for your input and specifically for differentiating monitoring from evaluation. You also brought in a very important point on Staff Performance Appraisal; I would be interested to know more about how it is linked to the overall monitoring and evaluation system and how effective it has been in your country in managing for development results.

  2. Paschal B. Mihyo, Tanzania

    Dear All,

    I join Sylvester and others in thanking Adeline for raising these issues for discussion. I agree with Sylvester that monitoring and evaluation complement each other but should not be lumped together: monitoring is a continuous process during implementation, while evaluation seeks to measure outcomes and results. My contribution may not directly address the three questions but seeks to indicate what needs to be done to improve the contribution of M&E to MfDR.
    1. Data:
    We need to improve capacity and capabilities to generate, store, process, manage and utilize data, especially the statistical component of data, i.e. going beyond mere information. The AfDB has been organizing capacity enhancement programmes on statistics within the RECs and at national level. Many national statistical offices have improved their data management, but the weakness still lies in the capacity to generate, process, analyze and utilize data for monitoring performance at ministerial level and in local authorities, where most action takes place. If data is not continuously developed and managed as part of the implementation process, it loses its value; it should not sit waiting to be used only at evaluation.

    2. Performance related data:
    For data to be relevant to monitoring and evaluation it needs to be performance-based. A quick glance at a sample of policies in several countries indicates that most of them have vague or generic results management frameworks. A few I have examined do not have quantitative baseline data, milestones or quantitative indicators that can be used for measuring performance; they only indicate targeted percentages. This makes it difficult for monitoring agencies to really ascertain results as implementation progresses. It would improve M&E for MfDR if results-based frameworks were strengthened as an essential element of M&E in MfDR.

    3. Research and evidence for M&E
    Most policies have very scanty provisions on research aimed at generating information and data as implementation progresses; a small sample of policies I examined contain only brief paragraphs on research and evidence for policy management. It would improve the contribution of M&E to MfDR if it were supported by strong units and teams dedicated to research, able to generate quantitative and qualitative data during implementation and to support mid-term reviews and end-of-programme evaluations. More often than not, research comes at the end, conducted by research institutes, consultants or think tanks. By then it is too late, because the evidence generated cannot be used retrospectively to support policy implementation.

    4. Capacity for utilizing data and research results
    Capacity for processing and analyzing data is on the rise in many countries but it still limited to statistical agencies, planning commissions, research institutes and think tanks. At the level of MDAs it is yet to be institutionalized and diffused at all levels. It will help our efforts to strengthen the culture of delivery if we also address the issue of capacity to utilize data at the level of MDAs reaching all categories of policy actors

    5. Best example of institutionalized E4M&E
    By E4M&E I mean evidence for M&E. We need to link policy formulation and implementation on the one hand and M&E on the other, with data as a cross-cutting issue. Policy formulation and implementation should be evidence-based in order to have a holistic approach to MfDR. A very good example of best practice on how to link these elements can be seen in ‘The Guidelines and Good Practice for Evidence Informed Policy Making in the Department of Environment Affairs’ by A. Wils and others, published in Pretoria by the Department of Environment Affairs (DEA), South Africa, and ODI, and available on the DEA website.

    1. Paschal, thank you for bringing some key issues into this discussion. Baseline data is important in order to measure progress towards the achievement of targets. You also mention the generation and use of evidence, which is key and a big part of why we do evaluations. Institutionalising M&E is another point you brought in; I would like to know which countries in Africa have institutionalised M&E and what the critical elements of these systems are. Are they working?

  3. Adeline, thank you for launching the debate on this topical issue. It is clear that each agency or institution looks for performance in its respective area, and the question “how” is asked to understand the situation in order to adopt a strategy that significantly improves performance. I share the opinion of my friend Sylvester on the differentiation between the “monitoring function” and the “evaluation function”. As my contribution to answering your questions, I propose the following points:
    1. Results framework: each national development plan and/or development project should establish one. This summary table integrates the different levels of results to be delivered (products, effects and impacts), the various indicators, the method of calculating the results and the target values. This way, actors and donors know exactly what to expect. In principle, the results and indicators are aligned with international indicators, namely the SDGs, the Mo Ibrahim Index and the Human Development indicators, so that the results framework responds to both national orientations and international objectives. This is the case in Madagascar, where the SNISE (National Integrated Monitoring and Evaluation System) was developed and adopted by Decree 2014-365 of May 20, 2014, and all projects and programmes fit within it. Under this approach, actors are free to choose the activities they consider relevant, knowing that they will be evaluated against the results rather than against the work carried out. This is a paradigm shift, as it is the result that guides the activities and not the opposite, as in the classical approach.

    2. Coaching for results: it is necessary to give implementation a boost to improve performance. For this, the coordination structure should train monitoring and evaluation officers in coaching, or recruit “coaches” to accompany implementation. This coaching energizes the established process in order to (i) improve working methods, (ii) develop team spirit and (iii) achieve goals. Ultimately, coaching should be an integral part of monitoring.

    3. Publication of results: generally, the launch of projects, programmes or development plans is highly publicized, but the closure that measures performance, and especially the presentation of the results of the final evaluation, is done discreetly. Yet all of this information is of interest (issues related to implementation, lessons learned, desired and unwanted effects …). For each closed project, its contribution to the national strategic plan and to the various international indicators of the sector concerned should be analyzed.
