Tuesday, 16 July 2013

Systematic reviewing: soporific or scholarship?

Guest post by Kathryn Oliver

For several years, I was employed by a systematic review facility which provided review evidence for various government departments. We did full systematic reviews (about 18 months), rapid reviews (about 8 months), systematic maps, papers on all of the above... I read a lot of papers and learned a lot. One important lesson was about the difference between a good quality research study and a good quality paper. We did our quality appraisal on the reportage of a study – not the study itself. If a trial didn’t mention randomisation, we assumed it wasn’t randomised. If it said it was randomised, but didn’t say how, it got a few more points; reporting the randomisation method (tossing a coin, birthdays, researcher selection (!!!), computer-generated) got them a few more. In short, I learnt a lot about social and clinical research methods, and how to write them up....
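To make that appraisal concrete, here is a minimal sketch (in Python) of the kind of reporting-based scoring I mean – the point values and function are my own invention for illustration, not the actual appraisal tool we used:

    from typing import Optional

    def appraise_randomisation(mentions_randomisation: bool,
                               method_reported: Optional[str] = None) -> int:
        """Score how well a trial's randomisation is *reported*, not how well it was done."""
        score = 0
        if not mentions_randomisation:
            return score              # no mention of randomisation: assume it wasn't randomised
        score += 1                    # claims to be randomised, but gives no detail
        if method_reported:
            score += 1                # method described: coin toss, birthdays, computer-generated...
        return score

    # A paper that says "participants were allocated using a computer-generated sequence"
    print(appraise_randomisation(True, "computer-generated"))   # -> 2
    # A paper that never mentions randomisation at all
    print(appraise_randomisation(False))                        # -> 0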


But I started to worry that all this reviewing was having a pernicious effect on me. Apart from the effects of occasional tedium, I wondered – was it making me a bad scholar? The reviews we did were high quality and published in good journals; there was consensus that we were doing a good job. But I still worry about missed opportunities. Without criticising my excellent training or support from systematic review colleagues, I present some personal reflections on what I could have done better:

1. Not ignore everything about the papers except for the methods and findings

Since my reviewing days, I’ve gone on to do other research jobs, a PhD and a postdoc – so I’ve now grappled with writing research papers myself. I now appreciate the thought and craft that go into introductions, discussions, conclusions and implications – all loaded with high-quality academic thought and, really, the parts you want other people to take notice of. But did I, as a systematic reviewer? No, I did not. I went straight to the methods (“another rubbish paper then”) and then to the findings (“they haven’t even controlled for income! losers”) and ignored the rest. All that thought, all those carefully constructed implications for other researchers – out the window.

2. Take the implications further

Most systematic reviews (if not most papers these days) have a section called “Policy and Practice Recommendations” or similar. A systematic review may include, say, 100 papers – that’s a lot of recommendations. Now I wonder: why didn’t we – or research funders, for whom it’s probably even more important – collate these into a “state of the field” list of recommendations on which there was consensus or disagreement?

3. Collate the conclusions

“More research is needed” was probably the conclusion of most papers I read. To be fair, some did accurately describe precise research questions – and these again could (should?) have been collated into a list of outstanding questions which could feed into agenda-setting for research funders, be used in priority-setting with patients and the public, or simply be posted somewhere to inform future grant applications (a toy sketch of what this collation might look like appears after point 4 below).

4. Draw on more types of information

Despite advances in systematic review methods for synthesising non-trial evidence, there is still a huge emphasis on empirical research data – for entirely understandable reasons. I still wonder whether there are opportunities to draw on other types of research output, though: commentaries and conceptual pieces, unpublished datasets (especially to answer questions raised by the review), policy pieces to give context and feed into recommendations...
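To make points 2 and 3 a little more concrete, here is a toy sketch (in Python) of what collating recommendations and outstanding questions across included papers might look like – the paper labels, recommendation wording and ‘consensus’ threshold are all invented:

    from collections import Counter

    # Recommendations extracted (hypothetically) from each included paper
    papers = {
        "Paper A": ["collect longer follow-up data", "adjust for income"],
        "Paper B": ["adjust for income", "more qualitative work on mechanisms"],
        "Paper C": ["adjust for income", "collect longer follow-up data"],
    }

    counts = Counter(rec for recs in papers.values() for rec in recs)

    consensus = [rec for rec, n in counts.items() if n >= 2]   # raised by several papers
    one_offs  = [rec for rec, n in counts.items() if n == 1]   # raised by a single paper

    print("Consensus recommendations:", consensus)
    print("Raised only once:", one_offs)

In practice the hard part is the coding, not the counting: free-text recommendations have to be grouped into comparable statements before a tally like this means anything.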

I still do systematic reviews, and I still don’t do all these things, partly because of time, lack of funding, and so on and so forth. I do try to write a commentary about the review now, though, reflecting on the ‘state of the field’ – a useful exercise for me, if for no one else. These days, I try to think of a systematic review as a way to really learn about what is going on in a body of literature, and an opportunity to engage with the current debates. I think the trick is giving yourself the time to think, and not just ploughing through that huge pile of retrieved full texts....

2 comments:

  1. Re: no.2... I think this is something that Andrew Booth talks about a lot – why aren't researchers doing better scoping before they embark on new studies?

    Re: no.3... such a place exists, though how well it's known I can't say – I'm guessing not well. It's called DUETS – the Database of Uncertainties about the Effects of Treatments.

    It's part of NHS Evidence and can be found at http://www.library.nhs.uk/duets/

  2. Hi srobalino

    I completely agree, DUETS is great. Obviously it doesn't apply to reviews that aren't about treatment effectiveness, though (such as correlational reviews, views reviews, etc.).

    On the exploitation of the evidence base, I don't know how researchers can be enabled to scope more effectively. We wrote a paper for Evidence and Policy on developing some methods to do this (comparing review findings with PPI information), but it was really tricky. Comments welcome!
    http://www.academia.edu/2534974/Making_the_most_of_obesity_research_developing_research_and_policy_objectives_through_evidence_triangulation
