When it came to tools, early hominids were happy with a stone axe and a few sticks to rub together to make fire. Now we are massively more advanced and instead have an iPhone app that can blow out candles (yes, really).
But most problems we face are trickier, and don’t yield easily to simple tools. Evaluating a complex public health intervention is certainly fraught with nuance and difficulty. Even to a man with a hammer, it doesn’t look like a nail.
This is not to say there isn’t help at hand for anyone considering various types of evaluation – there is. There’s a growing number of high-quality manuals and guides that stretch from the academic to the practical – from the MRC’s Developing and Evaluating Complex Interventions: new guidance, to the National Obesity Observatory’s Standard Evaluation Frameworks.
However, when thinking about the evaluation of complex public health interventions, there is not always a single path to take, or even necessarily a ‘right answer’. And crucially, it is not possible, and sometimes not desirable, to evaluate everything.
Standardisation is difficult
As the authors of Assessing the Evaluability of Complex Public Health Interventions: Five Questions for Researchers, Funders and Policy Makers (who include CEDAR’s David Ogilvie and Andy Jones and Fuse’s Martin White) put it:
“Evidence to support government programs to improve public health is often weak. Recognition of this knowledge gap has led to calls for more and better evaluation, but decisions about priorities for evaluation also need to be addressed in regard to financial constraint.”
Using England's ‘Healthy Towns’ initiative as a case study, the article above presents five questions to stimulate and structure debate in order to help people make decisions about evaluation within and between complex public health interventions:
- Where is a particular intervention situated in the evolutionary flowchart of an overall intervention program?
- How will an evaluative study of this intervention affect policy decisions?
- What are the plausible sizes and distribution of the intervention's hypothesized impacts?
- How will the findings of an evaluative study add value to the existing scientific evidence?
- Is it practicable to evaluate the intervention in the time available?
The next step is to develop these questions into a practical ‘thinking tool’ for those planning evaluations. But what do we mean by a thinking tool? Well, in a pre-digital world, this can be as simple as making a list of pros and cons for a decision. Or something more involved, such as Edward de Bono’s Six Thinking Hats, a stakeholder mapping chart, or a SWOT analysis (Strengths, Weaknesses, Opportunities, Threats). And in the online and mobile world, there is a growing field of thinking and project tools such as Mindjet or Basecamp. Obviously our thinking tool will be more tailored than these, but the goal is to produce something that can be applied to a wide range of interventions within – and possibly beyond – public health.
We’re keeping an open mind about exactly how our thinking tool might work, so it would be good to hear your thoughts. You can read some more about the plans here, or get in touch with me at ocf26@medschl.cam.ac.uk / 01223 746892.