Keeping Evaluations Simple (Without Being Stupid)

Although the opportunity exists to undertake highly technical evaluations, several established maxims recommend keeping things simple wherever possible to improve effectiveness. These include the well-known acronym K.I.S.S., or ‘Keep It Simple, Stupid’, which emerged in the 1960s US Navy and is credited to aircraft engineer Kelly Johnson, who emphasized that the benefits of a more advanced aircraft depend on whether an average mechanic can repair it under combat conditions. Similar sentiments are attributed to figures such as Albert Einstein, “everything should be made as simple as possible, but no simpler,” and Leonardo da Vinci, “simplicity is the ultimate sophistication.”

The technical depth and breadth of available methodologies, and the complexity of the issues being investigated, can make it challenging for evaluators to pick the right research approach and to communicate their work effectively to participants and end-users. At least five aspects can determine the level of complexity of a particular study:

  • Area of study. For example, Zimmerman (2001) differentiates simple issues (where there is both agreement and certainty) from complicated ones (where there is either agreement or certainty) and complex ones (where there is neither).
  • Evaluation design and methods. Many standalone and mixed methods are available to investigate and triangulate towards reliable evidence.
  • Research processes, planning and project management. Different approaches are possible for managing the research process.
  • Presenting and communicating. Reporting of research findings (and the methods that produced them) is arguably as important as the quality of the research itself.
  • Overall understanding of evaluation. Evolving and overlapping roles of evaluators can lead to misunderstandings among researchers and research audiences alike.

‘Simple’ is a relative concept when it comes to maintaining one of these. Photo credit: CnOPhoto / Shutterstock.com

Example 1: Evaluation design and methods

Sophisticated tools and techniques may make a compelling case for improving the quality and rigour of evaluations. Yet there is also an argument that they can sometimes be akin to fitting jet engines that mechanics are then expected to repair with just a spanner. Several factors that can influence simple evaluation design are highlighted below.

  • End-user needs. Deciding whether the evaluation will primarily be driven by a research theory perspective to enhance subject-matter knowledge, or by a utility approach that emphasizes how the findings could be used by practitioners and/or research participants; in other words, for whom is the study intended?
  • Focus of the research question. The choice of evaluation design may be between a method that is less precise but directly addresses the research question and one that is more rigorous but only offers a proxy result.  This will depend on the purpose and aims of the evaluation.
  • Research scope and technical detail. Paring back the research scope and/or structure can be helpful, particularly if there is openness for results that are less specific and more generalisable. For example, evaluation rubrics advocated by Davidson (2014) focus on using high-level evaluation questions and synthesizing responses into a consistent rubric format.
  • Understanding causation. Where applicable, a program theory approach helps to open the grey/black box of causation.  Further, the UK Magenta Book (2011) notes that “where the logic model is particularly complex, restricting the scope of the evaluation to consider shorter, simpler links in the logic chain can increase the ability of process evaluations to provide good evaluation evidence.” 

Sometimes, however, the benefits of increased complexity outweigh those of keeping things simple. The UK Magenta Book (2011) lists a series of barriers to generalisability; a method that neglects them could be described as ‘too simple’:

  • Internal and external validity – ensuring that research results are interpreted accurately and that findings from the study sample are representative of the wider population.
  • Strategic context – making sure that important contextual issues are accounted for.
  • Additionality – identifying substitution, displacement and other elements of net impact.
  • Unintended consequences – capturing spillover effects, both positive and negative.

Example 2: Simple reporting and communicating

Policymakers and other end-users of evaluations have different levels of tacit knowledge and limited time, so the evaluator’s challenge is to provide technically accurate reporting that the target audience(s) can understand simply and quickly. This applies both to outlining the evaluation method and to translating research findings so that end-users understand what they mean for policy and practice. Examples of challenges and solutions related to simple reporting include the following.

  • System modelling. As noted by the W.K. Kellogg Foundation (2004), outcome mapping and logic models provide “a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan to do, and the changes or results you hope to achieve.”
  • Visual presentation. Increasing use of infographics, illustrations and video that can be easily disseminated, including through social media.
  • Keeping messages brief and punchy. This applies to both verbal and written means of communication.  Vaughan & Buss (1998) advocate reports that “communicate reasoning as well as bottom lines; use numbers sparingly in the summary reports; [and] elucidate, don’t advocate,” while  Oliver et al. (2014) note that “academics used to giving 45-minute seminars do not always understand that a hard-pressed policymaker would prefer a 20-second phone call.”
  • Hierarchy of detail. A verbal summary of the main research messages is the tip of the iceberg, typically backed up by greater levels of detail in presentation slides, factsheets, executive summaries, full reports and supporting technical annexes. An example is the widely used 1:3:25 model (one page of bullet-point messages, a three-page executive summary and a 25-page report).
  • Promoting engagement and involvement. An inclusive, two-way dialogue between researchers and end-users can help to simplify communication, with Funnell and Rogers (2011) highlighting that “those who have contributed to developing a theory of change often feel much more connected to it.”

It’s important to know whether a 45-minute seminar or a 20-second phone call would work best. Photo credit: iofoto / Shutterstock.com

Happy mechanics and evaluators

There are several aspects that determine the complexity of an evaluation, yet maxims like K.I.S.S. serve as a reminder for evaluators to keep things simple where possible in order to improve effectiveness.

Consideration of whether an evaluation method or design is simple enough will typically come down to deciding what works best for whom. Similarly, the most suitable means of reporting will depend on the requirements, evaluation culture and capacity of the study audience.  In this regard, the background story to K.I.S.S. offers an interesting parallel between on-the-ground engineers fixing aircraft and in-the-field evaluators seeking to keep research simple, relevant and accessible.

This article is based on a presentation given to the UKES conference in May 2015. My thanks to the session participants for their valuable Q&A feedback.

References:

Davidson, J. (2014) The Rubric Revolution: Practical Tools for All Evaluators, UKES London/South East Regional Evaluation Network Event 26th September 2014

Funnell, S. C. & Rogers, P. J. (2011) Purposeful Program Theory: Effective Use of Theories of Change and Logic Models, San Francisco, Jossey-Bass/Wiley

HM Treasury (2011) The Magenta Book: Guidance for Evaluation, TSO, London https://www.gov.uk/government/publications/the-magenta-book (accessed October 2015)

Oliver, K., Innvær, S., Lorenc, T., Woodman, J. & Thomas, J. (2014) Negative stereotypes about the policymaking process hinder productive action toward evidence-based policy, LSE website http://blogs.lse.ac.uk/impactofsocialsciences/2014/06/02/how-to-get-policymakers-to-use-more-evidence/ (accessed October 2015)

Pawson, R. & Tilley, N. (1997) Realistic Evaluation, London, Sage

Vaughan, R. J. & Buss, T. F. (1998) Communicating social science research to policymakers, Thousand Oaks, CA, Sage

W.K. Kellogg Foundation (2004) Logic Model Development Guide, Battle Creek, MI https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide (accessed October 2015)

Zimmerman, B. (2001) Ralph Stacey’s Agreement & Certainty Matrix, Schulich School of Business, York University, Toronto, Canada http://betterevaluation.org/resources/guide/ralph_staceys_agreement_and_certainty_matrix (accessed October 2015)
