Tuesday, 25 March 2014

Sometimes you know what the answer will be, sometimes you don’t

Posted by Jean Adams

Quite often, I’m pretty sure I know what the results of a piece of research will be before we start it. I’ve never written my pre-thoughts down, so who knows how good I am at predicting the results - and how good I am at gradually changing my mind as the results become available, convincing myself, post hoc, that I knew it all along.

We have just finished a systematic review of the effectiveness of financial incentives for changing health-related behaviours. Before we started, the general chit-chat on incentives was that they work for short-term, simple behaviours, but not for long-term, complex ones; that the effects don’t last much beyond the period you give the reward for; and that you probably need to give people quite a lot of money to have an effect. It wasn’t absolutely clear to me why people thought this, but some prominent people, whom I respect a lot, had said at least some of it in some high-profile journals. So I assumed I was missing something.

Our review was justified because no-one had ever tried to bring the evidence on financial incentives for all health-related behaviours together in one systematic review. But I was pretty sure it was going to be one of those worthy-but-not-earth-shattering bits of work that would just confirm what everyone already says.

Like all good (or maybe bad) systematic reviews, this one seemed to go on and on. And on. The whole ‘rule book’, register-your-protocol approach to systematic reviewing makes me think it should be a nice, clean, linear, no-decisions-made-on-the-hoof sort of research method. Maybe that’s how it is for you. But it never seems to be for me. I think I’ve been entirely explicit with my inclusion criteria, but then they don’t seem to be any use for screening the articles the search found. I think we’ve finally identified all the included articles. Then some inter-library loan we’d forgotten about turns up and its reference list identifies another five papers to screen.

I find all of this unexpected messiness a little unsettling. Obviously, the number of times I’ve experienced it means it shouldn’t really be that unexpected anymore. But it is. The messiness makes me think I’ve somehow done it wrong. At which point I start to enter the bad part of the creative cycle and it is way too easy to get stuck there. Especially when it takes a year and about 30 rejections to get your review published.

[Image: The creative process]
I don’t know why it took so long to get our review published. I don’t think it was because it was badly done (by which I mean: please tell me that wasn’t the reason it took so long). It seemed to be more that everyone thought a systematic review on the effectiveness of financial incentives was not news. We know about them - they work for short-term, simple behaviours, but not for… see above.

But it turns out that that wasn’t what we found at all. Most of the evidence that met our study-design criteria was on smoking - a long-term, complex behaviour. We found financial incentives to be more than twice as effective as usual care or no intervention for helping people to quit smoking. The effect size for smoking cessation dropped off in studies that followed up for more than six months after incentives had been withdrawn, but it didn’t disappear entirely. The effects for short-term, simple behaviours, like coming for screening or vaccinations, were similar - about twice as effective as usual care. The effect didn’t seem to vary massively with incentive size.
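(In case, like me, you have to stop and think about what ‘twice as effective’ means in numbers: it’s a relative risk of roughly 2. Here’s a minimal sketch in Python of that arithmetic, using entirely made-up trial counts for illustration - none of these numbers come from the review.)

def risk_ratio(events_intervention, n_intervention, events_control, n_control):
    # Relative risk: chance of the outcome (e.g. quitting smoking) in the
    # incentive arm divided by the chance in the usual-care arm.
    risk_intervention = events_intervention / n_intervention
    risk_control = events_control / n_control
    return risk_intervention / risk_control

# Hypothetical trial: 30 of 200 quit with incentives, 15 of 200 with usual care.
rr = risk_ratio(30, 200, 15, 200)
print(f"risk ratio = {rr:.1f}")  # 2.0 - the incentive arm quit at twice the rate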

I still haven’t managed to convince myself I knew this all along.
