Guest post by Katie Cole
The mantra of “but there’s no evidence for it!” is one I’ve said or thought many times: in my work, in discussions with family and friends, and when shouting at the BBC Today programme.
But as an early-career academic, I’m increasingly aware there is a complex web of considerations when trying to translate evidence into policy, and that there are times when chanting our mantra may do more harm than good.
I recently attended a Royal College of Physicians/Alma Mata seminar on alcohol advocacy. At one point, a panel member suggested that social norms interventions to address excessive alcohol consumption on university campuses “sounded very promising” and that policy-makers were considering them. I’ve looked at the US research on these interventions: a national evaluation concluded that they are ineffective in reducing alcohol consumption. Whilst I could have made this point, I felt the issue was more complex than that. Don’t we need to test the policy in the UK drinking context to make a more robust contribution to the debate? Shouldn’t we seek to support policy-makers to integrate evaluations into pilots, or to finance full-scale trials?
Another challenge arose during a placement at a Primary Care Trust. I was involved in the Individual Funding Request process, where the PCT considers funding treatments and procedures not normally available on the NHS. I worked up a number of cases, examined the evidence base and presented each case to a panel of clinicians and non-clinicians. In most cases, the evidence base was of poor quality: finding a case series for the exact condition and treatment in question represented a minor professional achievement. Usually, the case series found that, lo and behold, most cases improved, which often sparked disproportionate optimism that we had a justification for funding the treatment. In contrast, when I found a randomised controlled trial with only modest results, the panel were more inclined to propose not funding the treatment. Here I was challenged to explain the difference between the strength of the evidence base and the strength of the effect size, whilst at the same time acknowledging the difficulty of decision-making against a poor evidence base.
A final challenge has been in developing The Lancet UK Policy Matters website, which includes short summaries of the evidence underpinning a range of UK health-related policy changes. In developing the format of the summaries, we had to be very clear to authors that statements asserting the intended benefit of a policy should not be included in the ‘evidence’ section of the summary, which was reserved for peer-reviewed research or evaluations. Our experience in guiding authors highlighted how meticulous we as professionals need to be in our choice of language when drawing on our scientific expertise.
Above all, these experiences have taught me that advocating for evidence in policy-making is challenging, complicated and requires skill. It demands an understanding of the evidence itself – its strengths and limitations – but also of the policy-making process. Whilst these demands can be difficult to reconcile, the experiences above have only strengthened my drive to communicate effectively with all actors in the policy-making process.
Katie Cole co-founded The Lancet UK Policy Matters website with Rob Aldridge and Louise Hurst.