This is another instalment of my "moans about poor evaluation practice" series, triggered by a recent review of evaluation reports in the complex field of governance and human rights.
One of the reports I read used a "traffic light" system: for each evaluation question, the authors decided whether what they found was good ("green light"), in need of some improvement ("yellow light"), or bad ("red light"). That in itself made me feel a bit queasy. Does a "red light" mean an organisation has to drop everything and stop operating? Does that form of visualisation pay any respect to the efforts people put into their work? Yes, evaluators are there to assess the "value" of what they are supposed to evaluate, but does that entitle us to make pronouncements as to what must stop and what can go on? I am not sure.
But what made me really mad was the chapter about impact. The evaluators found that the available data were insufficient to assess impact, which is notoriously difficult to measure in that field (see also DFID working paper 38, referred to in my earlier post below). Their conclusion: a "red light" on impact - which had not even been measured! Now that is what I call sloppy evaluation.
When I do not have sufficient data to make an informed judgement on something, the only honest, rigorous conclusion I can pronounce is that the data are insufficient and hence no informed judgement can be made. Everything else is speculative - or outright dishonest. One can go on to make recommendations as to how regular monitoring over future years could improve the chances of reaching informed conclusions on impact, or as to what resources would need to go into a future evaluation that could assess impact. But pronouncing that an organisation produces no "impact" just because the relevant data are not available seems quite irrational - and irresponsible, because donor staff who are too busy to look beyond the "traffic lights" might base their decisions on wrong conclusions.