This is a short recap of the 2017 UKES Conference which was held on 10-11 May and themed around the use and usability of evaluation.
In the opening keynote, Michael Anderson from the Center for Global Development highlighted that the publication bias in journals towards positive results is being replicated as a publicity bias in the media, and that slogans are effective when they have intuitive appeal, even if they are factually disputed (e.g. Michael Howard’s “Prison Works” from 1993). This presents evaluators with a conundrum about using slogans to present findings: on one hand, soundbite headlines are effective communication devices (so ‘if you can’t beat them, join them’); on the other, evaluators have a responsibility to provide nuanced judgments.
Lorraine Dearden gave the second keynote on the same day she was actively defending her evaluation evidence in the media (clarifying the effectiveness of free school meals and breakfast clubs in light of Labour’s proposals). This was consistent with Lorraine’s position that impact evaluations need to be wholly rigorous to add significant value, while also recognising that process evaluations have an important role in making sense of the numbers.
Martin Reynolds ran a thought-provoking session on professionalism in evaluation, showing how systems thinking can help interrogate the ways in which evaluators balance facts and values using boundary judgments. He argued that a transparent, civic-led model of professionalism is currently missing from evaluation.
Raman Srivastava provided an insightful update from Canada on the new Policy on Results, which combines the previous Policy on Evaluation with federal resource management functions and shifts the use of evaluation findings to the point at which Ministerial decisions are made about programmes and budgets. In contrast to some conference conversations that viewed politics as a threat to effective evaluation use, Raman helpfully reminded attendees that “good policy can be good politics.”
Sue Holloway gave the third keynote and identified three challenges to greater evaluation use in the voluntary and community sector: motivation; capacity (to produce AND use evaluations); and money. In response, Sue argued that evaluation needs to demonstrate its value clearly, which includes having a clearer idea of what quality standards apply among evaluation practitioners.
At the AGM, the UKES Council presented the 2017-2020 Business Plan and invited evaluation practitioners to become more involved with the Society, including taking part in the Society’s working groups and keeping in touch via LinkedIn and Twitter.
Three of the What Works Centres gave a timely progress update, insisting that RCTs aren’t the only studies they consider; rather, “the strength of their recommendations reflects the strength of evidence they have available,” as Beth Shaw from NICE noted, which is why qualified impact studies are given more prominence. It was also excellent to hear Will Finn’s policing examples involving PC Positive and Sgt Cynical, and his comparison between a successful evaluation and his mother-in-law’s curry!
All of the other sessions I attended were enjoyable, such as the discussions on engaging beneficiaries in evaluation co-production, managing evaluation steering groups, and protecting evaluators and their role from internal politics in large organisations. There was also an important update from UKES Council members on the Voluntary Evaluator Peer Review (VEPR), with significant interest from international societies in how the Society is using this to support professional development among UKES members. A full list of the sessions I attended is presented below.
Thanks to all who organised and participated in this year’s conference!
- Giving Voice to Evaluations in an Era of Slogan and Snapchat, Michael Anderson, Visiting Fellow, Center for Global Development
- Beyond burden: Engaging beneficiaries as equal partners in evaluation, Bethan Peach and Tarran Macmillan, OPM Group
- Balancing and managing different perspectives in evaluation steering groups, Joe Duggett, SQW Ltd
- Good and Not So Good Practice in Quantitative Impact Evaluation, Professor Lorraine Dearden, Professor of Economics and Social Studies, University College London and Research Fellow, Institute for Fiscal Studies
- The challenges of developing a system to develop evaluators’ capabilities through a voluntary peer review grounded in reflective practice, Derek Poate, Dr Dione Hills and Professor Helen Simons, UKES VEPR Sub-group
- Evaluation as public work: An ethos for professional evaluation praxis, Dr Martin Reynolds, The Open University
- Canada’s new Policy on Results: How the Department of National Defence is evolving its evaluation function scheme, Dr Raman Srivastava, Federal Department of National Defence, Canada
- Challenges of achieving both accountability and learning: Case study of the evaluation of a payment by results programme, Alex Hurrell, Oxford Policy Management
- The politics of utilisation focused evaluation, Lydia Richardson, IPE Triple Line and Alison Napier, INTRAC
- The Evidence Journey: A View from the Other Side, Sue Holloway, Chief Executive, Project Oracle
- The work of HMG’s What Works Centres, Sara MacLennan, What Works Centre for Wellbeing, Beth Shaw, National Institute for Health and Care Excellence (NICE), Will Finn and Abigail McNeill, College of Policing
- Monitoring, evaluating and learning at multiple levels: 25 years of the Darwin Initiative, Dr Simon Mercer, LTS International
- Deepening our understanding of challenge fund evaluation, Clarissa Poulson and Lydia Richardson, IPE Triple Line