All impact evaluations face the challenge of assessing causality: in what ways and to what extent are observed changes attributable to the intervention itself?  Or would some of these changes have happened anyway, for other reasons?

CAG Consultants has been using innovative methods to assess impact in our evaluation of the ‘Transitional Arrangements (TA) for Demand Side Response’ scheme for the Department for Business, Energy and Industrial Strategy (BEIS). The TA was part of the Capacity Market for electricity: it aimed to encourage growth in ‘Demand Side Response’ by providing an incentive for businesses to be on standby to reduce their electricity demand temporarily, if needed by National Grid.

We’ve faced the usual evaluation challenge of trying to work out how far the TA contributed to observed changes, compared to other external factors. The number of TA participants was very small, and the subject area complex, so we and BEIS ruled out statistical approaches to impact evaluation at an early stage. Instead, we used a ‘realist evaluation’ approach, combined with ‘process tracing’ methods, to assess the impact of the TA. We partnered with Charles Michaelis (SDS/CECAN), Dr Barbara Befani (University of Surrey/CECAN) and Gill Westhorp (Community Matters Pty/Charles Darwin University) to apply these methods, following the approach put forward by Pawson and Tilley in their seminal book ‘Realistic Evaluation’ (1997).

The realist approach involved in-depth research about ‘what worked, for whom, in what circumstances and why’, to explore whether and how the TA and other factors influenced different TA participants. To do this, we defined and tested realist hypotheses about how the TA and/or other factors might have sparked different mechanisms for change, for participants in different contexts.  For specific TA participants, we asked ourselves: ‘Did the TA really help this participant to develop more Demand Side Response, in the way BEIS intended?  Or would they have achieved the same level of Demand Side Response outcomes anyway, for various other reasons?’
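To make the structure of these hypotheses concrete, one way to picture a realist ‘context + mechanism → outcome’ conjecture is as a simple structured record. The sketch below is purely illustrative (in Python), and the wording of the hypotheses is invented rather than drawn from the evaluation itself:

```python
from dataclasses import dataclass

@dataclass
class RealistHypothesis:
    """One 'context + mechanism -> outcome' conjecture to be tested."""
    context: str    # circumstances of the participant
    mechanism: str  # how the intervention (or another factor) might work
    outcome: str    # the change we would then expect to observe

# Two competing explanations for the same observed outcome (wording
# invented for illustration; the real evaluation tested many more):
ta_hypothesis = RealistHypothesis(
    context="large industrial user with flexible on-site processes",
    mechanism="TA capacity payments make standby demand reduction "
              "commercially viable",
    outcome="participant develops new Demand Side Response capacity",
)
counterfactual = RealistHypothesis(
    context="same participant",
    mechanism="other market incentives reward demand reduction anyway",
    outcome="participant develops new Demand Side Response capacity",
)
```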

To improve the rigour of our testing process, we also defined the pieces of evidence, or ‘clues’, that we would expect to see, or would hope to see, in our research data if different hypotheses were true. Using a technique called ‘process tracing’, we then applied these evidence tests to compare the weight of evidence for competing causal hypotheses in particular cases.
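For readers interested in the mechanics, the logic of these evidence tests can be illustrated with a small Bayesian updating sketch, in the spirit of the process-tracing literature. All hypotheses, clues and probabilities below are invented for illustration and are not results from the TA evaluation:

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float,
           observed: bool) -> float:
    """Update belief in a hypothesis after checking one piece of evidence.

    p_e_given_h     -- probability of seeing the clue if H is true
    p_e_given_not_h -- probability of seeing the clue if H is false
    """
    if observed:
        likelihood_h, likelihood_not_h = p_e_given_h, p_e_given_not_h
    else:
        likelihood_h, likelihood_not_h = 1 - p_e_given_h, 1 - p_e_given_not_h
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# Competing hypotheses for one participant, starting from even priors:
# H1: the TA incentive drove the growth in Demand Side Response.
# H2: the participant would have achieved the same outcomes anyway.
p_h1 = 0.5

# 'Would hope to see' clue: interviewee states the TA payment was decisive.
# Unlikely to appear if H1 is false, so finding it strongly favours H1.
p_h1 = update(p_h1, p_e_given_h=0.6, p_e_given_not_h=0.05, observed=True)

# 'Expect to see' clue: the DSR investment postdates entry into the TA.
# Failing this test would sharply weaken H1; passing it helps only mildly.
p_h1 = update(p_h1, p_e_given_h=0.95, p_e_given_not_h=0.6, observed=True)

print(f"Confidence in H1 after both clues: {p_h1:.2f}")  # roughly 0.95
```

The asymmetry between the two clues reflects a classic process-tracing distinction: an ‘expect to see’ clue mainly damages a hypothesis when it is absent, while a ‘would hope to see’ clue mainly strengthens a hypothesis when it is present.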

We have presented our experiences of applying process tracing within a realist evaluation at a CECAN seminar. A video of the seminar is available here.

We concluded that process tracing was useful in helping to synthesise evidence from different sources, and in generating high-level test results that summarised the strength of evidence for competing hypotheses without sharing confidential data.  However, the method was complex and resource-intensive to apply, particularly given the large number of realist hypotheses that we had developed. We think it is best applied to a small number of competing hypotheses and a small number of cases.

More detail on our evaluation methods and findings is presented in the published reports on Phases 1-3 of the TA evaluation (Phase 2 is available here and Phase 3 here). Further detail on the realist approach and process tracing work will be presented in the final report on Phase 4 of the TA evaluation, which will be published shortly by BEIS.

If you’d like to find out more about the evaluation services that CAG offers, please get in touch with CAG Partner Mary Anderson at ma@cagconsult.co.uk.


Oct 29, 2018