Sunday, 14 October 2012

Evidence of what worked, at some point, in some place


A major highlight at this year’s annual conference of the German Evaluation Society (DeGEval) was Prof. Gert Biesta’s keynote speech on research as a provider of evidence for policy. “Evidence-based” is a buzzword in development (as well as in other disciplines, such as education); bilateral and multilateral donors have mobilised considerable funding for “building an evidence base”. The basic idea is that interventions should be based on the best possible evidence of what works. In his speech, Biesta unravelled the meanings of “intervention”, “evidence” and “what works” to raise fundamental questions: What role should evidence play in policy making? What kind of evidence are we talking about, anyway, and can it replace professional judgement and wider democratic deliberation?

Biesta started with a definition of “evidence”: the available body of facts or information that gives an indication of whether a belief or a proposition is valid. Ideally, evidence should be based on information from a wide range of sources and perspectives. Generating evidence implies weighing different, sometimes contradictory pieces of information and deciding what conclusions to draw from them.

However, in practice, “things have worked out much more crudely”, as Biesta put it. A single method, randomised controlled trials (RCTs), currently enjoys the reputation of being the “gold standard” in knowledge generation. (As a matter of fact, I recently followed a discussion thread hosted by a reputable evaluation society where some contributors indulged in near-lyrical praise of RCTs as the uncontested tool for evaluation, learning, policy making, planning, you name it. I wondered whether the writers were aware of the array of other research methods developed and commonly used in the natural and social sciences.) According to Biesta, experimental approaches such as RCTs come with epistemological issues (what kind of knowledge is generated through RCTs?) and ontological problems (what understanding of social reality underlies RCTs?).

Biesta distinguishes between representational epistemology and transactional epistemology. In the representational view, the researcher is a spectator: she looks at the world and tries to show – to represent – how things are in the world. John Dewey referred to a “Kodak picture of reality”. (For those who have grown up in the digital age: Kodak used to make photo cameras and films.) The experimental researcher works quite differently. He is not just a spectator: he acts on things in the world by running experiments and observes how they react to his intervention. Biesta calls this a transactional epistemology.
What we can claim to know through experiments is what consequences certain actions have in certain types of relationships. An experiment tells us “what has worked” in a certain situation we set up in the past; there is no guarantee it will also work in the future. (By the way, as Thomas Kuhn has shown, failed experiments are a major source of new paradigms in science. An experiment doesn’t prove that your theory is correct – it only shows that something has worked under certain circumstances.)
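(A small aside of my own, not part of the speech: here is a purely illustrative sketch in Python, with invented numbers, of what a randomised trial actually delivers – an average effect for one sample, under one controlled set-up, at one point in time.)

# Toy illustration (my own, with made-up numbers): what an RCT "knows".
# People are randomly assigned to a treatment or a control group, a fixed
# intervention is applied, and average outcomes are compared.
import random
import statistics

random.seed(42)

def outcome(treated: bool) -> float:
    # Hypothetical data-generating process: every person responds differently;
    # the assumed "true" effect of 0.5 only holds on average, in this set-up.
    individual_response = random.gauss(0, 1)
    return individual_response + (0.5 if treated else 0.0)

assignment = [random.random() < 0.5 for _ in range(1000)]  # random assignment
treatment_group = [outcome(True) for assigned in assignment if assigned]
control_group = [outcome(False) for assigned in assignment if not assigned]

estimated_effect = statistics.mean(treatment_group) - statistics.mean(control_group)
print(f"Estimated average effect: {estimated_effect:.2f}")
# The trial tells us what has worked, on average, for this sample and this
# set-up; not whether it will work elsewhere, later, or for any particular person.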

Furthermore, the experimental researcher needs to shut out as many environmental factors as possible, to prevent “external” elements from influencing the results of the experiment. Since social systems are open and recursive (people act on the basis of what they think and feel, broadly speaking), the only way to conduct RCTs on such complex systems is to create an artificial environment and limit people’s choices. To reduce complexity, that is. Fast food chains are good at that: they produce a few standardised meals their clients can choose from. This approach minimises “outside” influence (e.g. the variety of local tastes and cuisines); it decreases the scope for decision-making for everyone involved; and it conveniently reduces the room for interpretation of what happens. Only, it is quite remote from real-life situations (apart from fast food restaurants and the like).

Experimental research can be useful for gaining knowledge on the effects of specific, circumscribed types of interventions, for example in agriculture and in medicine. But, Biesta argues, it is too blunt an instrument to produce knowledge on social systems. Societies are not fast food restaurants. People operate in vastly diverse contexts; everyone uses a different set of ways to make sense of what happens around her and what happens to her. People have a wide and diverging range of choices of actions they can take on the basis of what they think, what they want, what they feel, and what happens to them (and experiments only take care of “what happens to them” – a tiny part of it, as a matter of fact). How are you going to squeeze so much complexity into a controlled experiment? To generate “evidence” the RCT way, you reduce interaction with the environment and limit sense- and decision-making to a minimum that people rarely encounter in real life – unless they are locked into high-security prisons or other quite predictable, sheltered situations.

That takes us to a different set of issues around “social experiments” – the price to be paid for such experiments in terms of values and ethics. One could argue that the RCT approach, pushed to extremes, encourages researchers and policy makers to create the kind of reality that makes experiments work – the simplified, deterministic reality of the fast food restaurant or the prison. Biesta suggested that current developments in higher education have been going down the fast-food road, with increasingly standardised syllabi, curricula and exams for easier comparison in national and international rankings. That would be diametrically opposed to the knowledge-generating, critical role traditionally attributed to research.

I don’t remember exactly how Biesta concluded; I am writing this on the basis of my memory and my messy notes (plus, most of the bits in parentheses are my own musings). For a more complete rendering of Biesta’s thinking, there is a literature list on his website. The speech is close to this paper: Biesta, G.J.J. (2010). Why ‘what works’ still won’t work: from evidence-based education to value-based education. Studies in Philosophy and Education 29(5), 491-503. You need to buy or borrow the journal to get the full paper; an abstract is available online.
PS: Since I wrote this post, Gert Biesta has kindly looked through it and shared with me a link to an earlier paper on the same topic.
