So shall we talk impact factors?
Every peer-reviewed journal that has been around for a few years has an impact factor. It’s a number (calculated to three decimal places!) that reflects the average number of times papers the journal published over the previous two years have been cited by other papers in peer-reviewed journals.
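(For the record – and roughly speaking, since the exact rules about what counts as a ‘citable item’ vary – the sum behind that three-decimal-place number is just: this year’s impact factor = citations received this year by papers the journal published in the previous two years ÷ the number of citable papers it published in those two years. So a journal whose papers from the last two years picked up 500 citations this year, across 200 citable papers, gets an impact factor of 2.500.)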
In an unthinking world, there is an assumption that the higher the impact factor of a particular journal, the ‘better’ the journal is. From there, it’s easy to start to think that the papers published in journals with higher impact factors describe ‘better’ research and that the people who write papers published in journals with higher impact factors are ‘better’ researchers.
It's just much easier to judge who's 'best' in some areas than others
Of course you can see that such a simple metric is never going to be a particularly accurate reflection of the quality of either the research or the people writing the research. Lots of people have written lots of things about how bad a job impact factors do of assessing quality. They’re even pretty poor at predicting the number of citations any particular paper will get. And who said that citations are a good marker of how good a paper (or the research it describes) is anyway?
I know all of this. I understand it. Yet I still care about impact factors. I still want to submit my papers to journals with high impact factors, and it frustrates me when my papers get rejected by the ‘best’ journals and I have to do the long, boring job of trying the next ‘best’ journal, then the next ‘best’, and then the next ‘best’. Which is not to say that every paper I’ve ever published has been rejected by The Lancet (certainly none have been published there), just that I do want the best possible home for each of my little paperlings.
Lately I’ve been doing quite a lot of thinking about this, wondering why I am so obsessed with playing the impact factor game when I know it to be such a highly suspect marker of skill – as much ludo as chess. I guess some of it is vanity – if that’s what people are going to think makes me ‘good’, then I’ll do it, because I definitely want them to think I’m ‘good’ (although why do I care what people who place so much value in such a suspect number think?). Some of it is pure competitiveness – if it’s hard to do, then I’m going to do it.
But I guess right at the very bottom of it all is the REF. The big ol’ REF – the one we’re not quite sure is a particularly good marker of quality either. I have to have four papers that are ‘good enough’ for the REF if I’m to be submitted as an individual; I definitely want to be submitted (see above re vanity and competitiveness, but let’s also add in that it’s sort of my job); and in the absence of other markers of quality, we seem to be relying pretty heavily on impact factors to decide which papers are ‘good enough’. Not entirely. But quite a bit.
Whilst many of the people I work with are similarly obsessed by the impact factor game, not all of them are. Some people, who I have lots of respect for, reckon it’s a big fat waste of time (and morale) to try your paper in a high impact factor journal, get it bounced, reformat it for the next one, get it bounced, blah blah blah. They laugh and say ‘oh, just get it out and move on’. Maybe it isn’t a coincidence that these tend to be the sort of people who have loads of ‘good enough for the REF’ papers anyway. But I am starting to wonder if what I need to be doing is not constantly ‘aiming high’, but developing a better sense of which papers are worth ‘aiming high’ for.
I play the game too - but out of obligation. I agree with your colleagues that it is best to get a paper out there and move on - but in some disciplines even that isn't adequate. Ultimately, the impact factor used in this way is bad for research (blog post link below).
http://researchfrontier.wordpress.com/2013/08/19/why-journal-impact-factors-are-bad-for-science/