Category Archives: Evaluation

UK Evaluation Society Conference 2017 – Initial Reflections

This is a short recap of the 2017 UKES Conference which was held on 10-11 May and themed around the use and usability of evaluation.

Beginning with the opening keynote, Michael Anderson from the Center for Global Development highlighted that publication bias in journals towards positive results is being replicated as publicity bias in the media, and that slogans are effective where they have intuitive appeal, even if they are factually disputed (e.g. Michael Howard’s “Prison Works” in 1993). This presents evaluators with a conundrum around using slogans to present findings: on one hand, soundbite headlines are effective communication devices (so ‘if you can’t beat them, join them’); on the other, evaluators’ role is to provide nuanced judgments.

Lorraine Dearden provided the second keynote on the same day she was actively defending her evaluation evidence in the media (clarifying the effectiveness of free school meals and breakfast clubs in light of Labour’s proposals). This was consistent with Lorraine’s position that impact evaluations need to be wholly rigorous to add significant value, while also recognising that process evaluations have an important role in making sense of the numbers.

Martin Reynolds ran a thought-provoking session on professionalism in evaluation, exploring how systems thinking can help interrogate the ways in which evaluators balance facts and values using boundary judgments, and arguing that a transparent, civic-led model of professionalism is currently missing in evaluation.

Raman Srivastava provided an insightful update from Canada on the new Policy on Results combining the previous Policy on Evaluation with federal resource management functions, shifting the use of evaluation findings to the point when Ministerial decisions are made about programmes and budgets. In contrast to some conference conversations which viewed politics as a threat to effective evaluation use, Raman helpfully reminded attendees that “good policy can be good politics.”

Sue Holloway was the third keynote and identified three challenges to greater evaluation use in the voluntary and community sector: motivation; capacity (to produce AND use evaluations); and money. In response to these factors, Sue highlighted that evaluation needs to clearly demonstrate its value, and that includes having a clearer idea of what quality standards apply among evaluation practitioners.

At the AGM, the UKES Council presented the 2017-2020 Business Plan and invited evaluation practitioners to become more involved with the Society, including taking part in the Society’s working groups and keeping in touch via LinkedIn and Twitter.

Three of the What Works Centres gave a timely progress update, along with an insistence that RCTs are not the only studies they consider; rather, as Beth Shaw from NICE noted, “the strength of their recommendations reflects the strength of evidence they have available,” which is why qualified impact studies are given more prominence. It was also excellent to hear Will Finn’s policing examples involving PC Positive and Sgt Cynical, and his comparison between a successful evaluation and his mother-in-law’s curry!

All of the other sessions that I attended were enjoyable, such as the discussions on engaging beneficiaries in evaluation co-production, managing evaluation steering groups, and protecting evaluators and their role from internal politics in large organisations. There was also an important update from UKES Council members on the Voluntary Evaluator Peer Review (VEPR), with significant interest from international societies in how the Society is using this to support professional development among UKES members. A full list of the sessions I attended is presented below.

Thanks to all who organised and participated in this year’s conference!

  1. Giving Voice to Evaluations in an Era of Slogan and Snapchat, Michael Anderson, Visiting Fellow, Center for Global Development
  2. Beyond burden: Engaging beneficiaries as equal partners in evaluation, Bethan Peach and Tarran Macmillan, OPM Group
  3. Balancing and managing different perspectives in evaluation steering groups, Joe Duggett, SQW Ltd
  4. Good and Not So Good Practice in Quantitative Impact Evaluation, Professor Lorraine Dearden, Professor of Economics and Social Studies, University College London and Research Fellow, Institute for Fiscal Studies
  5. The challenges of developing a system to develop evaluators’ capabilities through a voluntary peer review grounded in reflective practice, Derek Poate, Dr Dione Hills and Professor Helen Simons, UKES VEPR Sub-group
  6. Evaluation as public work: An ethos for professional evaluation praxis, Dr Martin Reynolds, The Open University
  7. Canada’s new Policy on Results: How the Department of National Defence is evolving its evaluation function scheme, Dr Raman Srivastava, Federal Department of National Defence, Canada
  8. Challenges of achieving both accountability and learning: Case study of the evaluation of a payment by results programme, Alex Hurrell, Oxford Policy Management
  9. The politics of utilisation focused evaluation, Lydia Richardson, IPE Triple Line and Alison Napier, INTRAC
  10. The Evidence Journey: A View from the Other Side, Sue Holloway, Chief Executive, Project Oracle
  11. The work of HMG’s What Works Centres, Sara MacLennan, What Works Centre for Wellbeing, Beth Shaw, National Institute for Health and Care Excellence (NICE), Will Finn and Abigail McNeill, College of Policing
  12. Monitoring, evaluating and learning at multiple levels: 25 years of the Darwin Initiative, Dr Simon Mercer, LTS International
  13. Deepening our understanding of challenge fund evaluation, Clarissa Poulson and Lydia Richardson, IPE Triple Line

Interviewing in Realist Evaluation

Just a quick post with some reflections on Dr Ana Manzano’s seminar last week (8 Feb 2017) at the University of Leeds on The craft of interviewing in realist evaluation.  It’s a great benefit to have the Realism Leeds team an hour away, and they have a busy programme of other sessions planned for 2017 – keep track via the @RealismLeeds Twitter feed. It was a great talk and nice to meet Ana and members of the team!

Firstly, Ana shared that she is frequently asked how to describe a realist interview, and that the answer comes back to how you approach the interview in the first place.

Recognising the importance of context-mechanism-outcome (CMO) configurations to realist evaluation, preparing for interviews involves thinking through the CMOs of the programme or phenomenon in advance to get a broad idea of the lines of enquiry. However, it is important not to set them in stone.

One of the key characteristics of a realist interview is treating it as an open card game. Rather than the interviewer retaining prior knowledge close to their chest and acting with false naiveté (with the intention of avoiding data contamination), realist interviews are more of a shared journey between interviewer and interviewee with knowledge being laid out on the table in an effort to improve understanding of the programme theory between both parties.

Importantly, this makes realist interviewing more of an iterative process where the lines of enquiry and CMO focus may be refined over the course of a programme of interviews.  This process of eliciting the programme theory involves theory gleaning, theory refining/testing and theory consolidation – for more detail, see links below.

Qualitative social research interviews already place power in the hands of the interviewer to shape the research outcomes, and the ongoing iterative analysis that characterises realist interviews appears to give even more power to the interviewer.  This perhaps makes it especially important to view the research/evaluation findings as a product that is particular to the researcher(s) who led the interviews. It also suggests building some form of tracking into the research design so that the iterative process of refining the lines of enquiry and questioning is acknowledged in the reporting.

Manzano, A. (2016) The craft of interviewing in realist evaluation, Evaluation 22: 342-360. Sage. http://journals.sagepub.com/doi/abs/10.1177/1356389016638615

Event summary: http://www.sociology.leeds.ac.uk/events/2017/dr-ana-manzano-the-craft-of-interviewing-in-realist-evaluation

Into 2017 on the Front Foot…

ABRE is looking forward to an exciting year of research and evaluation in 2017 – a year which will be shaped by the 2016 EU referendum, the changing role of evidence, innovative new research methods and opportunities for professionalisation in evaluation. Here are some reflections and useful links for the year ahead!

Firstly of course, the UK’s decision to leave the EU is seismic and begins a long-term process of transforming uncertainty into opportunity for economic development and other sectors. While UK Government funding commitments to 2020 are welcome, they also heighten the need for organisations to get their ducks in a row for a new operating environment in the near future. For example, funders’ increasing emphasis on demonstrating accountability and value for money already appears to be generating a culture change in the voluntary and community sector. The message for all seems to be: get ready for change.

The past year has also seen a public diminishment of facts and experts on both sides of the Atlantic, in politics at the macro level but also at much more micro levels as reported in the ABRE blog one week before the EU vote. The research and evaluation community has a lot to do to win confidence in a changing world. Sites such as Sense about Science, Full Fact, BBC R4 More or Less and The Guardian’s Reality Check provide useful starting blocks (for the UK at least). Similarly, from this year’s personal reading list, the paper on New Political Governance by Jill Anne Chouinard and Peter Milley provides a helpful primer from North America on some key considerations when a ‘take it or leave it’ approach to evidence exists.

More positively, 2016 has allowed for some exciting new research approaches and innovations. ABRE has recently been collecting video feedback from participants in a Big Lottery funded youth employment programme, and development of the Assess-Evaluate-Develop Framework has progressed with the website launch in July and continued refinements into 2017.

ABRE also continues to support steps to enhance the evaluation profession in the UK, which has involved Andrew’s participation in the UKES Voluntary Evaluator Peer Review (VEPR) pilot in March, attendance at the UKES national conference in May, and election to the UKES national Council in December.

Many thanks to colleagues, clients and friends in 2016, and ABRE looks forward to continued work together throughout 2017. Finally, end-of-year thoughts are with the family, friends and colleagues of David Kay, who will be sadly missed but whose enthusiasm and kindness of character will be fondly remembered.

UK Evaluation Society Conference 2016 – Initial Reflections

Two days (27-28 April), 16 learning sessions, lots of networking!  Lots of international development content which took a bit of navigating for evaluators working in a UK/EU context (while still picking up some very helpful info from globally-based colleagues!). Some initial reflections below, along with a full list of sessions attended.  I’d expect that all of the presenters would be happy to field any requests for further information, although I’d also certainly be happy for any follow-up discussions, particularly on the Voluntary Evaluator Peer Review pilot/process which I’ve participated in and would highly recommend.

Choosing methods.  There was a sense of some RCT pushback among attendees and presenters, not least from the newly launched CECAN, which will be looking into alternative approaches: realist, context-based evaluation; simulated RCTs / policy modelling; drawing on participatory expertise; and ongoing monitoring / iteration.  Later in the first day, I became slightly worried that evaluators might no longer be needed for choosing appropriate methods… well, maybe not that worried yet, but the resource being developed by Dr Barbara Befani and Michael O’Donnell was a really interesting development: it sets out a list of available methods and draws on expert feedback to advise on which may be most suitable for different situations.

Simple, complex and complicated.  CECAN Director Nigel Gilbert introduced the challenges of upward and downward causation, and the example of steering a wheelbarrow to illustrate a wobbly evaluation pathway… and I learned that ‘complex’ derives from the Latin for ‘to intertwine’ (…I’ll probably be using that as a random fact in conversations). Prof. Picciotto also ran a very lively session trying to convince us that the frequently used ‘simple’ example of following a recipe (as opposed to sending a rocket to the moon or raising a child – Glouberman & Zimmerman, 2002) is not even simple, especially when celebrity chefs get involved!

Demonstrating policy influence and intangible added value. This theme ran through many of the discussions on how to measure strategic influence in an international development context, and session 9 provided a great introduction to process tracing and Bayesian confidence updating (using hoop tests, smoking gun tests, doubly-decisive tests and straw-in-the-wind tests) for assessing confidence in causal attributions.  In an EC context, a similar challenge is being tackled in demonstrating the value of European-level approaches to certain issues and addressing calls for subsidiarity (session 2). Carol Candler shared experiences of conducting strategic consultations in Singapore, where the emphasis on ‘face’/saving face meant that critical assessments needed sensitivity and delicate handling, but also that attempts by strategic leaders to delegate their interviews should be resisted. A rough worked example of the Bayesian updating idea is sketched below.
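As a back-of-the-envelope illustration of how these tests work (my own sketch, with probabilities assumed purely for demonstration rather than taken from the session), write H for the hypothesis that the programme influenced the policy and E for a piece of evidence. Bayes’ rule updates confidence in H after observing E:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,\bigl(1 - P(H)\bigr)}
\]

For a smoking gun test the evidence is rarely seen unless H is true (say \(P(E \mid H) = 0.4\) and \(P(E \mid \neg H) = 0.05\)): from a neutral prior \(P(H) = 0.5\), observing E lifts confidence to \(0.20 / 0.225 \approx 0.89\), while its absence only weakly counts against H. For a hoop test the evidence is almost always present when H is true (say \(P(E \mid H) = 0.95\) and \(P(E \mid \neg H) = 0.6\)): observing E adds little, but failing to find it drops confidence to \(0.025 / 0.225 \approx 0.11\). Doubly-decisive tests combine both properties, and straw-in-the-wind tests have neither.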

Engaging audiences. From Dr Beatriz Garcia’s presentation on the cultural and economic impacts of the Liverpool European Capital of Culture 2008, I liked, firstly, how a pedagogic approach made it simpler to map and present the investments allocated to the event itself, to wider city regeneration, and to meeting wider European objectives; and secondly, how regular, glossy updates on impact during the event(s) helped promote engagement and strategic buy-in. Claire Hutchings made the good point that we should be moving towards evidence-informed policy, not evidence-based policy.

History of evaluation.  Finally, Bradford Rohmer gave an enjoyable presentation on the history of evaluation in the EC, from early written evaluation guidance in the late 1990s which was restricted to mid-term and ex-post; to the inclusion of ex-ante from the 2000s; and the emphasis on DG standardisation and evaluation working documents in last year’s Better Regulation Package.

Sessions attended below:

  1. Evaluating complexity, Professor Nigel Gilbert, Director, Centre for the Evaluation of Complexity Across the Energy-Environment-Food Nexus (CECAN)
  2. What is EU-added value and how can it be measured? Andrew Hetherington, Coffey
  3. Evaluating complexity every day: Practical approaches to evaluating complexity in European funding, Laura Hayward, ICF International
  4. Evaluating the culture of major events: The long-term view, Dr Beatriz Garcia, Head of Research, Institute of Cultural Capital, University of Liverpool
  5. Unpacking methodological appropriateness for Impact Evaluation: Presentation of an online tool for selecting appropriate methods, Dr Barbara Befani, University of East Anglia; Michael O’Donnell, Bond
  6. Understanding what works: Do we know how to mix methods? Professor Bob Picciotto, King’s College London
  7. Making the infinite countable? Responding to the challenges when evaluating innovation policy, Jonathan Cook, SQW
  8. youngballymun’s performance story report: A rigorous and pragmatic evaluation of a complex community change initiative, Dr Gemma Cox, youngballymun
  9. From assessing impact to assessing confidence about impact: Harnessing the potential of Process Tracing and Bayesian confidence updating to evaluate policy influence in complex and uncertain settings, Dr Barbara Befani, University of East Anglia; Gavin Stedman-Bryce, Pamoja UK Ltd; Stefano D’Errico, IIED; Francesca Booker, IIED and Centre for International Forestry Research
  10. Unpacking and optimising mixed methods evaluation: Insights from the Carers’ Employment Pilot Evaluation, Dr Annette Cox, IES
  11. Halfway house: The confused past and uncertain future of evaluation in the European Commission, Bradford Rohmer, Coffey International
  12. UK Evaluation Society AGM
  13. Recognising messiness and embracing real world complexities: Evaluation and the SDGs, Claire Hutchings, Head of Programme Quality, Oxfam
  14. Voluntary Evaluator Peer Review update, Derek Poate, Chair, UKES VEPR Sub-group
  15. Learning from our successes: A positive approach to assessing Public Value, Carol Candler, Voluntary and Philanthropy Sector Development Advisor; Helen Highley, Brightpurpose
  16. Joining the dots…between services, evaluators and funders: The Project Oracle journey, Professor Georgie Parry-Crooke, London Metropolitan University

This year’s full programme is available at: www.profbriefings.co.uk/ukes2016/

Thanks to all the organisers and presenters at this year’s event.

Making a Difference in 2015


2015 was the UN Year of Evaluation and you may have seen that EvalPartners sponsored the Evaluations that Make a Difference initiative, which provides a global viewpoint on how research and evidence can really help programmes to change people’s lives.

From ABRE’s perspective, it was very pleasing to work on several programme evaluations during the course of the year, covering activities as diverse as climate change mitigation, hi-tech sector support, enterprise promotion, community and voluntary services, and skills enhancement and training.  Although some of these examples might be of smaller scale or scope than the EvalPartners examples, the same premise remains: well-researched and grounded evidence can make a big impact on how organisations or groups move forward, whether this is progressing a pan-European approach to climate change innovation or helping charitable organisations to improve their success rate when bidding for funding (with Kada Research).

It was also insightful to share thoughts with evaluation colleagues at the UKES conference in May and York RCTs conference in September, including presenting on simplicity in evaluation… which of course is fairly complex!

ABRE’s work in 2015 included research support for multiple economic strategy assignments in the south east of England (Kada), as well as assisting with rapid evidence assessments and bid writing elsewhere. It was also the year of moving into a new office in the centre of Chesterfield. Into 2016, work is already underway on the exciting evaluation of a sector innovation programme in the West Midlands.

Best wishes and a happy new year to colleagues and clients, and ABRE looks forward to working together throughout the year!

Keeping Evaluations Simple (Without Being Stupid)

Although the opportunity exists to undertake highly technical evaluations, several established maxims recommend keeping things simple wherever possible to improve effectiveness.  These include the well-known acronym K.I.S.S., or ‘Keep It Simple, Stupid’, commonly attributed to the 1960s US Navy and to Lockheed engineer Kelly Johnson, who emphasized that the benefits of more advanced aircraft depend on whether they can be repaired by an average mechanic operating in combat conditions.  Similar viewpoints are attributed to figures such as Albert Einstein, “everything should be made as simple as possible, but no simpler,” and Leonardo da Vinci, “simplicity is the ultimate sophistication.”

The technical depth and breadth of methodologies available to evaluators and the complexity of issues being investigated can make it challenging for evaluators to pick the right research approach and communicate their work effectively to participants and end-users.  There are at least five aspects that can determine the level of complexity for a particular study:

  • Area of study. For example, Zimmerman (2001) differentiates simple issues (where there is agreement and certainty) from complicated (where there is either agreement or certainty) and complex (where there is neither).
  • Evaluation design and methods. Many standalone and mixed methods are available to investigate and triangulate towards reliable evidence.
  • Research processes, planning and project management. Different approaches are possible for managing the research process.
  • Presenting and communicating. Reporting of research findings (and the methods that produced them) is arguably as important as the quality of the research itself.
  • Overall understanding of evaluation. Evolving and overlapping roles of evaluators can lead to misunderstandings among researchers and research audiences alike.

‘Simple’ is a relative concept when it comes to maintaining one of these. Photo credit: CnOPhoto / Shutterstock.com

Example 1: Evaluation design and methods

Sophisticated tools and techniques may make a compelling case for improving the quality and rigour of evaluations.  Yet at the same time, there is an argument that they can sometimes be akin to fitting jet engines to aircraft that mechanics must then repair with just a spanner.  Several factors that can influence simple evaluation design are highlighted below.

  • End-user needs. Deciding whether the evaluation will primarily be driven by a research theory perspective to enhance subject-matter knowledge or by a utility approach that emphasizes how the findings could be used by practitioners and/or research participants. i.e. for whom is the study intended?
  • Focus of the research question. The choice of evaluation design may be between a method that is less precise but directly addresses the research question and one that is more rigorous but only offers a proxy result.  This will depend on the purpose and aims of the evaluation.
  • Research scope and technical detail. Paring back the research scope and/or structure can be helpful, particularly if there is openness for results that are less specific and more generalisable. For example, evaluation rubrics advocated by Davidson (2014) focus on using high-level evaluation questions and synthesizing responses into a consistent rubric format.
  • Understanding causation. Where applicable, a program theory approach helps to open the grey/black box of causation.  Further, the UK Magenta Book (2011) notes that “where the logic model is particularly complex, restricting the scope of the evaluation to consider shorter, simpler links in the logic chain can increase the ability of process evaluations to provide good evaluation evidence.” 

Sometimes, however, the benefits of increased complexity outweigh those of keeping things simple.  The UK Magenta Book (2011) lists a series of barriers to generalisability; a method that neglects them could be described as ‘too simple’:

  • Internal and external validity – ensuring that research results are interpreted accurately and study sample results are representative.
  • Strategic context – making sure that important contextual issues are accounted for.
  • Additionality – identifying substitution, displacement and other elements of net impact.
  • Unintended consequences – capturing spillover effects, both positive and negative.

Example 2: Simple reporting and communicating

Policymakers and other end-users of evaluations have different levels of tacit knowledge and limited time, so the evaluator’s challenge is to provide technically accurate reporting that the target audience(s) can understand simply and quickly.  This applies both to outlining the evaluation method and to translating research findings for the end-user so that they understand what the findings mean for policy and practice. Examples of challenges and solutions related to simple reporting include the following.

  • System modelling. As noted by the W.K. Kellogg Foundation (2004), outcome mapping and logic models provide “a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan to do, and the changes or results you hope to achieve.”
  • Visual presentation. Increasing use of infographics, illustrations and video that can be easily disseminated, including through social media.
  • Keeping messages brief and punchy. This applies to both verbal and written means of communication.  Vaughan & Buss (1998) advocate reports that “communicate reasoning as well as bottom lines; use numbers sparingly in the summary reports; [and] elucidate, don’t advocate,” while  Oliver et al. (2014) note that “academics used to giving 45-minute seminars do not always understand that a hard-pressed policymaker would prefer a 20-second phone call.”
  • Hierarchy of detail. A verbal summary of the main research messages is the tip of the iceberg, but this is typically backed up by greater levels of detail provided in presentation slides, factsheets, executive summaries, full reports and supporting technical annexes. For example, the widely used 1:3:25 model (a one-page summary of key messages, a three-page executive summary and a 25-page report).
  • Promoting engagement and involvement. An inclusive, two-way dialogue between researchers and end-users can help to simplify communication, with Funnell and Rogers (2011) highlighting that “those who have contributed to developing a theory of change often feel much more connected to it.”

It’s important to know whether a 45-minute seminar or a 20-second phone call would work best. Photo credit: iofoto / Shutterstock.com

Happy mechanics and evaluators

There are several aspects that determine the complexity of an evaluation, yet maxims like K.I.S.S. serve as a reminder for evaluators to keep things simple where possible in order to improve effectiveness.

Consideration of whether an evaluation method or design is simple enough will typically come down to deciding what works best for whom. Similarly, the most suitable means of reporting will depend on the requirements, evaluation culture and capacity of the study audience.  In this regard, the background story to K.I.S.S. offers an interesting parallel between on-the-ground engineers fixing aircraft and in-the-field evaluators seeking to keep research simple, relevant and accessible.

This article is based on a presentation given to the UKES conference in May 2015. My thanks to the session participants for their valuable Q&A feedback.

References:

Davidson, J. (2014) The Rubric Revolution: Practical Tools for All Evaluators, UKES London/South East Regional Evaluation Network Event 26th September 2014

Funnell, S. C. & Rogers, P. J. (2011) Purposeful Program Theory: Effective Use of Theories of Change and Logic Models, San Francisco, Jossey-Bass/Wiley

HM Treasury (2011) The Magenta Book: Guidance for Evaluation, TSO, London https://www.gov.uk/government/publications/the-magenta-book (accessed October 2015)

Oliver, K., Innvær, S., Lorenc, T., Woodman, J. & Thomas, J. (2014) Negative stereotypes about the policymaking process hinder productive action toward evidence-based policy, LSE website http://blogs.lse.ac.uk/impactofsocialsciences/2014/06/02/how-to-get-policymakers-to-use-more-evidence/ (accessed October 2015)

Pawson, R. & Tilley, N. (1997) Realistic Evaluation, London, Sage

Vaughan, R. J. & Buss, T. F. (1998) Communicating social science research to policymakers, Thousand Oaks, CA, Sage

W.K. Kellogg Foundation (2004) Logic Model Development Guide, Battle Creek, MI https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide (accessed October 2015)

Zimmerman, B. (2001) Ralph Stacey’s Agreement & Certainty Matrix, Schulich School of Business, York University, Toronto, Canada http://betterevaluation.org/resources/guide/ralph_staceys_agreement_and_certainty_matrix (accessed October 2015)