Tuesday, 10 December 2019

Less is more - also in evaluation questions

Writing evaluation terms of reference (TOR) - that is, the document that tells the evaluators what they are supposed to find out - is not a simple exercise. Arguably, the hardest part is the evaluation questions. That section of evaluation TOR tends to grow longer and longer. This is a problem: an abundance of detailed evaluation questions can lock the evaluators into the perspective of those who drew up the TOR, turning the evaluation into an exercise with quite predictable outcomes and limited learning opportunities for everyone involved.

Let me explain. For those readers who are not evaluators - just imagine you are one, for these two paragraphs (and maybe also when you draw up your next TOR). You are developing an offer for an evaluation, or you have won the bid already and you are preparing the inception report. You sit at your desk, alone, or around a table with your evaluation teammates, and you gaze at the TOR page - or even pages - of evaluation questions. Lists of 30-40 items totalling 60-100 questions are not uncommon these days. Some questions are broad - of the type "how relevant is the intervention in the local context?" - others extremely detailed, for instance "do the training materials match the trainers' skills?". (I am making these up, but they are pretty close to real life.) Often, in the sector where much of my evaluation work takes place, the questions are roughly structured along the OECD/DAC evaluation criteria, which are OK. But your specific evaluation might need a different structure to match the logic of the project - think of human rights work or political campaigns, for example.

While you are reading, sorting and restructuring the questions, some important questions come to your mind that are not on the TOR list. You would really like to look into them. But there are already 70 evaluation questions your client wants to see answered, and the client has made it clear they won't shed a single one. There is only so much one can do within a limited budget and time frame. What will most evaluation teams do? You bury your own ideas and focus on the client's questions. You end up carrying out the evaluation within your client's mental space. That mental space may be very rich in knowledge and experience - but still, it represents the client's perspective. That is an inefficient use of evaluation consultants - especially in the case of external evaluations, which are supposed to shed an independent, objective or at least different light on a project.

Why do organisations come up with those long lists of very specific questions? As an evaluator and as the author of several meta-evaluations, I have two hypotheses: 
  • Some evaluations are shoddy. Understandably, people in organisations that have experienced sloppily done evaluations may want to take greater control of the process, without realising that tight control means losing learning opportunities.
  • Many organisations adhere to the very commendable practice of involving many people in TOR preparation - but their evaluation department is shy about filtering and tightening the questions so that they form a coherent, manageable package.

What can we do about it? Those who develop TOR should focus on a small set of central questions they would like to have answered - if your budget has fewer than six digits (in US$ or euros), try to stick to five to ten really important questions - less is more. Build in time for an inception report, in which the evaluators must present how they will answer the questions and what indicators or guiding questions they will use in their research. Read that report carefully to see whether it addresses the important details you are looking for - if it doesn't, and if you still feel certain details are important, then discuss them with the evaluators.

My advice to evaluators is not to surrender too early - often, clients will be delighted to be presented with a restructured, clearer set of evaluation questions, if your proposal makes sense. If they cannot be convinced to reduce their questions, then try to agree on which questions should receive the most attention, and explain which ones cannot be answered with a reasonable degree of validity. This may seem banal to some of you, but judging by many evaluation reports in the development sector, it doesn't always happen.
