Thursday, 26 May 2016

Let's evaluate together

This is the time of the year when I would like to be able to clone myself, to respond to all those requests for evaluation proposals (RFPs) while busily completing on-going jobs that need to be finished before the Northern hemisphere summer break sets in. List servers publish new RFPs every day; as July approaches, the deadlines become increasingly adventurous. In late May, RFPs ask for offers to be submitted by the first week of June; the selected evaluation team would start working right away. It seems many of those who publish these last-minute quick-quick RFPs assume evaluation consultants spend their days sitting in their offices, twiddling their thumbs, chewing their nails or randomly surfing the web, waiting for that one agency to call them up and put them to work right away, tomorrow! Drop everything and work for us!


Many of those evaluations are mid-term or end-of-project evaluations, which tend to happen at highly predictable moments (in the middle or near the end of project implementation) and could be planned many months, even years ahead. But this is not what worries me most about the seasonal avalanche of RFPs. What worries me most is that they tend to produce evaluations of questionable value.

Often, those last-minute RFPs are about projects of modest size, with meagre resources for evaluation. In that situation, the evaluation terms of reference (TOR) would typically ask for 20-40 consulting days to cover the entire set of OECD/DAC criteria - relevance, effectiveness, efficiency, impact and sustainability - all that within 2-3 months and on a shoestring budget. As someone who has reviewed a couple of hundred evaluations, I know that the resulting evaluation reports tend to be a bit on the shoddy side. With some luck, the participants in the evaluation might have found the evaluation process useful. But don't look for ground-breaking evidence in quick and dirty single-project evaluations.

It does not have to be that way. For instance, organisations that receive money from several funders can convince them to pool resources for one well-resourced evaluation of their overall activities rather than a bag of cheap three-week jobs. Funders who support several complementary initiatives in the same geographical region, or who support the same kind of project in many different places, can commission programme evaluations to better understand what has worked and what hasn't, under what circumstances.

It makes more sense to take a step back and look at the bigger picture, anyway, because no development intervention happens in isolation. Project X of NGO Y might yield excellent results because NGO Z runs project ZZ in the same region, and project X wouldn't have the slightest chance of success if project ZZ weren't there. You need time and space to find out that kind of thing.

And last but absolutely not least, there is no reason why evaluation should only happen in the middle or at the end of an intervention. Some of the most useful evaluations I have come across have been built into the project or programme from the beginning, supporting programme managers in setting up monitoring systems that worked for those involved in the programme and for those evaluating it, and accompanying the project with on-going feedback. This doesn't need to be more expensive or more complicated than the usual end-of-project 40-day job. But it can provide easy-to-use information in time to support well-informed decision-making while the project is being implemented - not just when it's over.
