This is in no way a conventional ‘book review’. It is more a record of my interaction with the book as I read it. I had no intention of writing a review, but colleagues kept asking for my take on the book — so here we go!
It is not a practice guide for evaluators but a collection of scholarly articles on the state of evaluation in Africa. The papers draw on an analysis of evaluation reports, journal articles and other documents found in the AfrED database created by CLEAR-AA and CREST, as well as a survey of evaluators and authors in the region. The work offers unprecedented empirical depth and detail on the status of evaluation in the region.
That it is based on a study of evaluations, journal articles and conference papers found in a database poses some generalisation issues: the data relied upon covers only 22% of Africa (largely Anglophone, although I note the inclusion of some Francophone countries in Chapter 3). Secondly, it is based on openly shared reports — an even larger body of work is not shared openly. The authors acknowledge this limitation. None of this is to discredit the book; on the contrary, it is a marvelous piece of work that has gone where no other has. It is a great start.
The editors were careful to delimit the period under review (2005–2015). In the fast-changing world of M&E, you would not want the book to seem frozen in time, and they seem to acknowledge that tomorrow things may be different. I hope there are plans for several editions over time; this could become the State of Evaluation in Africa Report.
The book champions the Made in Africa narrative. I am, however, uncomfortable with how this is presented. It may invite a “them and us” interpretation, which makes adopting Western-developed methods sound like a second colonization of Africa. There is no straight answer to the question: Is Africa different, or are we making Africa different? Furthermore, for me Africa is not a homogeneous ontological, axiological and epistemological entity — it is like the proverbial tactile elephant. Rather, I think we need a fusion of Western and indigenous knowledge systems. This way we will not fall into Mugabeism: ‘keep your West to yourselves and we keep our Africa to ourselves’. I dream of a day when evaluation theory and methods developed in Africa will be applied in Europe or the United States; this will only be possible if we are inclusive. In Made in Africa, our quest is not for superiority but for applicability and comparability. The current stunted growth and confusion in evaluation as a discipline can be traced to identity rivalry between North American and European evaluators; we should avoid the same mistakes. One fundamental question we should be asking in the Made in Africa narrative is: “What do we want to change, and why?”

The authors are on point, however, in their analysis of what they term “collaborations” between Western and local evaluators. Evaluation is still stuck in the development approaches of the 1970s, and donors are slow to recognize local capacity. The book could well have drawn a causal line between its finding that evaluations in Africa are being conducted by Westerners (although this is contradicted in Chapter 6) and the Eurocentric nature of the methods. A Shona proverb says “mbudzi kudya mufenje hufanan’ina”. The Chapter 6 study found no significant relationship between country of origin and choice of evaluation method; I am not sure whether there is any relationship between the country where one obtained one's evaluation qualifications and choice of methods.
Chapter 4 analyzes evaluation reporting standards using Scandinavian donors as a case. The chapter is a must-read for evaluators and commissioners of evaluation. It presents good evidence upon which a discussion of standards can be based; the analysis should be extended to Western funding agencies like USAID, DFID and CIDA. The chapter illustrates how evaluation reporting is neither standardized nor aligned with international reporting standards, and it notes with concern how ethics are neglected in evaluation. However, beyond research convenience, it does not explore how these gaps affect the development trajectory and use of evaluation reports.
The book explores in sufficient detail the major narratives in African evaluation, including:
- Where evaluations sit in the management–governance continuum. I struggle, however, to find ‘accountability’ in this framework, and I find the framework better suited to a country where donor funds go through government rather than non-state actors
- Whose agenda evaluation in Africa serves
- Capacity for evaluation
- Striking a balance between monitoring and evaluation
- The role of evaluation in the public sector, i.e. can it trigger change?
The book leans heavily towards public sector experiences and conceptual frameworks (whether this was conscious or not, I am not sure). In countries like Zimbabwe, where very little donor funding is channeled through government, the landscape may be different.
The findings on who is carrying out evaluations, though not surprising, make sad reading for Southerners. The book notes that most evaluations (86%) are carried out by Northerners. It also notes that the majority of evaluations are of poor quality. Does this then suggest that quality and capacity issues are not only African problems?
The book makes a bold and sweeping statement that African evaluations “serve more of an output monitoring function than a platform for strategic decision-making.” I struggle to convince myself that merely looking at evaluation reports, without looking at what happens thereafter, is enough. Such a conclusion could best be made after reviewing management responses and the processes and products of evaluation-results use.
A nagging question that kept popping up at the back of my mind whenever the book mentioned “commissioners of evaluation” was: where are the project/program implementers? Oftentimes implementing partners like SNV and World Vision commission evaluations of projects they implement that are funded by donors like Sida. So when the book talks of the evaluating agency, is it looking at the funder and subsuming the implementer? This has practical implications, since capacity, policies and procedures differ between and across agencies.
I do not find the book’s conclusion that most evaluations are for management purposes surprising. One explanation can be found in the emergence of the ‘implementing at scale’ approach, in which implementing agencies first search for what works through pilot projects. These are subjected to impact studies, and once a positive causal relationship is identified, implementation is scaled up. During scaling, subsequent evaluations largely confirm the targets that were set. I am also not sure it is necessary to see the management–governance continuum as a developmental hierarchy; that would contradict utility theory.
While the book is in no way a methods guidebook, scholars and practitioners may find the methods section of each paper/chapter interesting. A number of methods — content analysis, appreciative inquiry, the Delphi technique and others — are described as they are used. Chapters 5 and 6 dwell in detail on evaluation methods (quantitative, qualitative and mixed methods). The authors could have shed light on the compatibility, or lack thereof, of ‘approaches’ versus ‘methods’.
On evaluation characterization, I particularly agree with the dichotomization of evaluation into formative and summative. This simplification is important for standardization: it is a temporal frame upon which you can superimpose other facets of evaluation, including result level (i.e. output, outcome or impact) and whether it is a results or process evaluation. I will post a graphic presentation of my reasoning elsewhere on this blog.
Chapter 3 confronts the gender and equity (G&E) question in evaluation. It is often tempting to fault evaluators for not including G&E in their evaluations without realizing that evaluators often sit at the end of a chain of poor designs: project designers had no G&E lens, and neither did implementers. When an evaluator comes in (outside a formative evaluation), they cannot impose a new design; that would be tantamount to squaring a circle. Evaluators are victims of circumstance here. Whatever happened to evaluability assessment?
Without falling into the grammar/spellcheck pitfall, I will just make a small observation: a few chapters were not properly proofread. The presentation of findings was sometimes hard to follow; in Chapter 6, for example, the one and only graph did little justice to the rich data in the Quality Framework sections. Individual graphs, or a composite graph, would have been better.
I would recommend that African evaluation scholars and practitioners have a look at the book. It is definitely at the forefront of the most current evaluation narratives in Africa and beyond!