A group of Swedish researchers and development professionals has published a hefty review of evaluations of Results Based Management in Development Cooperation. The full Vähämäki/ Schmidt/ Molander study is available HERE. The authors conclude that the basic idea behind the "results agenda" - i.e. that you need to know how your development interventions perform so as to make the right decisions - is uncontested. They have found that RBM may indeed improve the planning and monitoring of development interventions.
But implementing results based management (RBM) has proven difficult. The authors report major issues associated with RBM: the application of RBM is complex; conflicts may arise where RBM is used for several different purposes (for example, both for "control" and for "learning"); and proper RBM conflicts with management practices centred on control and process.
That last point takes me to a hair-raising story I came across some time ago. An international NGO had received funding from a major multilateral donor, for a multi-year project based on a vague, confusing funding proposal. A few months into project implementation, the NGO asked a representative of that donor agency whether they could redesign the logical framework, to turn it into something more precise and manageable. Even a casual examination of the original funding proposal would have made it clear that such a redesign was necessary.
But the donor representative said no. They discouraged the NGO from introducing major adjustments, on the grounds that the donor's internal procedures would be too complicated. They preferred that the NGO continue using the flawed logical framework for a couple of years, and have the effects examined in the end-of-project evaluation. This doesn't sound like "managing for results". It sounds more like "management practices centred on control and process", disregard for results, and money down the drain. It is particularly painful as it comes from one of the many donors that fully subscribe to results-based management - in theory.
In practice, RBM means that obtaining good results for the project participants/ target groups/ beneficiaries should guide project design and implementation. When you discover there is something in your project design that might prevent you from attaining the results you aim for, change the design. When you discover your project objectives confuse rather than inspire the project participants, redefine them. When you discover the indicators or the tools you have chosen for monitoring don't work, develop new ones. When you realise some activities don't make sense, stop or adjust them. Development, by definition, is all about change.
Vähämäki/ Schmidt/ Molander borrow the term "obsessive measurement disorder" to describe a situation where overly controlling "RBM" harms project effectiveness. They note that aid agencies have shifted sizeable resources towards control processes, even though results information is generally not used for decision-making. This may warrant a closer examination of the cost-effectiveness of the "results agenda", as the authors suggest.