There is an increasing awareness that decisions to stop, continue or expand health services need to be supported by some ‘credible evidence’. Whilst there may be disagreements about what constitutes ‘credible evidence’, service providers, government departments and funders are all beginning to commission evaluations to help answer some basic questions about the ‘quality’ or ‘value’ of a particular service.
In addition to formative and summative evaluations of ‘value’, there is an increasing level of interest in the role of developmental evaluations: evaluations that involve changing the intervention, adapting it to changed circumstances, and altering tactics based on emergent conditions (Patton, 2008, p. 137).
Lattice Consulting is highlighting the differences between formative, summative and developmental evaluations as we think that one-off evaluations will be few and far between and that the health system will move more and more towards developing ‘evaluative knowledge’ (Rist, 2006).
Having said that, the government’s current emphasis on ‘results’ (State Services Commission, 2012) and the push for health and social service providers to create results-orientated evaluation and monitoring systems will not work on their own. The potential for organisations to grow evaluative knowledge and engage in continuous improvement activities will only be realised if providers use such systems effectively.
As part of the move towards a more results-orientated system, health care providers are also showing an increasing level of interest in developing their own evaluative capability, so that they are better able to produce information that can be used to:
- support their own processes of continuous improvement and organisational development, and
- increase the level of transparency and accountability to their funders, service users, their Boards and to their local communities (Ebrahim, 2003).
The pragmatic evaluation tradition where people ‘learn by doing’ aligns well with the notion of ‘kiwi ingenuity’, which supposes that all problems can be solved by simply trying something and seeing if it works. However, to avoid wasting precious time and resources, providers would do well to call on the services of a professional evaluator, either to help increase the evaluative expertise of the organisation’s own workforce and/or to design an adequately rigorous evaluation that can deliver the desired results within the organisation’s limitations of budget, time and data.
Phillipa Gaines has conducted a number of small-scale evaluations and holds a Postgraduate Diploma in Social Sector Evaluation Research from Massey University. Her work in the field of evaluation has been influenced by the writings of Michael Quinn Patton (utilisation-focused evaluation), Bob Williams (systems concepts), Patricia Rogers (evaluating complex interventions), Jane Davidson (evaluative rubrics), Thomas Schwandt (credible evidence and technical rationality) and, more recently, Yoland Wadsworth (human inquiry for living systems).