Tuesday 25 September 2012

Democratizing, or polarising and exclusive?

Posted by Jean Adams

There seems to be a lot of discussion around at the moment about the benefits of social media to researchers. The stuff I’ve been reading isn’t really about the value of social media as a research tool, more about its value as a communication tool, both within the research community and between researchers and other communities.

Much of the discussion takes place on social media, but I’ve also come across some in more traditional media too – in peer-reviewed journals and a recent conference workshop.

Almost everyone is enthusiastic. Twitter apparently helps researchers keep up to date with their field and disseminate their findings more widely, and provides an opportunity to ‘crowd source’ ideas, research funding and even volunteer researchers.

The social media adoption curve

One blogger found that blogging and tweeting about her publications led to a massive bump in downloads of her papers – suggesting that many more people were at least aware of them, if not reading and citing them. Others have proposed that the benefits of academic blogging are so self-evident that the question has now moved from “why would a researcher blog?” to “why would a researcher not blog?”.

I don’t consider myself particularly ahead of (or even riding the crest of) the social media curve. In fact, I’m quite a social media laggard. I opened my Twitter account about two years ago, but have only been actively using it for around 12 months. I’ve been reading blogs for two or three years, but I follow fewer than twenty and hardly any of them are work-related. I’ve left fewer than half a dozen comments on other people’s blogs.

And yet I think I am a convert. In the last week I have used Twitter to share ideas with researchers I have never met, sound off about the boring bits of my job (and get a little support back), ‘speak’ to a journalist, and source references for this post. I have been alerted to research I will use in my teaching and been kept up to date with public health and university politics. Twitter provides the possibility of engaging with almost anyone (who is on Twitter) and in this way it is democratizing.

But I am increasingly worried about the sentiment in parentheses in that last sentence. What sort of people are not on Twitter? Perhaps these people are also worth engaging with. Even amongst the Tweeps, I am not shy of the ‘unfollow’ button and rarely put up with people I disagree with, or find too dull, for long. And in these ways, Twitter can be polarising and exclusive.

As we have developed the Fuse blog and @Fuse_online twitter feed, we have encountered the usual institutional angst about the risks that these media pose to our ‘reputation’. Much of this reflects our collective uncertainty and inexperience about how these avenues might develop. Some of us are not too far removed from previous social media horror-stories that, unsurprisingly, have put a few people off for good.

My, now almost reflexive, response to these issues is to turn to the internet, to Google, to Twitter and to other people’s blogs. What do others say about the balance of risks to benefits? What exactly might go wrong? How can we mitigate the risks? Good advice often boils down to: think before you act, and don’t do it when you’re drunk.

What is missing is the other side of the argument. There does not seem to be any coherent discussion about why researchers shouldn’t engage with social media. Perhaps there is no such thing. Perhaps I have ‘unfollowed’ myself into isolation from such points of view.

Thursday 20 September 2012

Making the connection

Guest post by Kathryn Oliver

About 2 years ago, I had one of those ‘Eureka’ moments that totally changed my life. Genuinely. It was right up there with finding out about Oyster cards, or washing machines, or something.

At the time, I was a PhD student in my first year, working on a fairly standard project about developing health indicators. As a project, it was fine – about the use of evidence by policy makers, one of my main interests, and I was getting lots of experience in survey design. But for years, I’d been kicking round ideas in my head about the importance of personal relations. Didn’t they really explain nearly all human behaviour? Weren’t peer effects important for the spread of obesity or smoking? Wasn’t social capital important for mental health?

I’d been living on my own in London for a year or two and had found myself pondering the role of human relationships more and more. Of course, I had friends and relations, but I also liked being known by the man in the newsagent’s at the end of the road, and saying 'hi' to the neighbours. Did they count, I wondered? Would these relationships be enough to protect me from isolation, or going ballistic on the tube?

Imagine my delight when, attending a Social Network Analysis seminar day run by the Mitchell Centre at the University of Manchester, I discovered an entire body of research – methods, philosophy, approaches – which looked at connections between individuals using formal statistical methods. Finding out that other people had had similar ideas to me, and had developed dedicated research methods for investigating them, was probably one of the best research moments I’ve ever had.

Unlike traditional statistics, network analysis does not treat individuals (whether bridges, policy makers, or swingers) as independent. Instead, any ties between actors are identified, described quantitatively and/or qualitatively, and mapped. The statistics used are based on graph theory, but you don’t have to understand it to admire the elegance and usefulness of network analysis. Depending on the relationship data collected, people’s attitudes, behaviours, health outcomes and more can be predicted.

For me, this is really the missing element from a lot of public health research. It can be used to identify good targets for research, or opinion leaders in secondary schools, so more targeted messages can be produced and sent out. It allows us to understand, describe, and analyse the social context within which individuals live. And, of course, make beautiful pictures. 

Example of a social network analysis diagram

People have used network analysis to study all kinds of things – it’s very popular in the business world to identify ‘future leaders’ or ‘people who make things happen within my business’. Researchers have even compared US senators’ voting patterns to cows that lick one another.
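If you’re curious what this looks like in practice, here is a minimal sketch (mine, not from the seminar – the pupils and friendship ties are invented, and I’m assuming Python with the networkx library) of using centrality measures to pick out a possible ‘opinion leader’ in a small friendship network:

import networkx as nx

# Each pair is a reported friendship tie between two (hypothetical) pupils
ties = [("Amy", "Ben"), ("Amy", "Cat"), ("Ben", "Cat"),
        ("Cat", "Dan"), ("Dan", "Eve"), ("Dan", "Fay")]
g = nx.Graph(ties)

# Degree centrality: who has the most direct ties?
print(nx.degree_centrality(g))

# Betweenness centrality: who sits on the most shortest paths between
# other pairs, i.e. who brokers between otherwise separate groups?
bc = nx.betweenness_centrality(g)
print("Possible opinion leader:", max(bc, key=bc.get))

On these invented data, the pupil who bridges the two friendship clusters comes out on top – exactly the kind of person you might recruit to pass a health message around.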

My PhD changed quite a lot after this seminar. I ended up using a combination of social network analysis and ethnography to study where public health policy makers found evidence, who the main sources of evidence were, and how evidence was incorporated into the policy process. For years, academics in my field have been talking about the importance of interpersonal knowledge translation and how policy makers prefer to get their info from real people. Now I’ve been able to add my own tiny part to the story, come up with new research ideas on the basis of my findings, and learn a niche method (always useful).

My boyfriend still calls them snog webs though.

Wednesday 19 September 2012

Criticism is your friend. I mean it. Really.

Guest post from Emily Murray

Ah, criticism. It is the architecture of the scientific process. Manuscripts are written describing hypotheses of the way we think the world works. But before our hypotheses can be transmitted to the world, we must submit these little pieces of ourselves to others to rip to shreds.

After we’ve re-written the manuscript a few times until it vaguely resembles the work of art we created before, we must submit it to a whole new group of ‘criticizers’, aka reviewers.

Where they commence to rip it to shreds as well.


If we’re lucky, when writing this manuscript (and hopefully 10 others) we discover a hole in the literature. And some twisted part of our brain thinks it would be a fantastic idea to spend six months, weekends and evenings included, turning that ‘hole in the literature’ into a 100-page document on why we are the best researchers ever and will cure all of the world’s ills with this one research project, aka a ‘grant proposal’.

Where they won’t even bother to rip it to shreds. They will only tell you that it did not score high enough to be ripped to shreds.

As you may or may not be able to tell…I hate criticism.

During my student days, I used to loathe the days I would receive feedback from my mentor, praying that at least one page wouldn’t be covered in red track changes. Sometimes I would have daydreams where papers were returned to me in their original black-and-white pristine condition with only a note on top saying, “This is fantastic! You must submit this to the Lancet at once!”


But then something happened. My dream came true! (Well, minus the Lancet comment.) I sent out a paper to co-authors and most were returned with nary a comment. I started to think that maybe all of those hours of writing and re-writing and re-re-writing were starting to pay off. Maybe something had clicked in my brain over the last fortnight to push me from a ‘so-so writer’ into a ‘fantastic writer’. Malcolm Gladwell, eat your heart out.

Well, you can guess what happened when I submitted the paper to a journal.

It was rejected. 

*cue copious weeping* 

But seriously, there was definitely a lesson here: Lots and lots of criticism and feedback from mentor and co-authors equaled acceptance. Little to no criticism from mentor and co-authors equaled rejection.

No, the lesson is not that I suck as a writer and should take up a profession in travel photography instead (although this does sound quite attractive in the middle of a second revise and resubmit).

What I drew from this is that sometimes *wait for it* criticism can be a good thing. That criticism is an opportunity in disguise. A way for others to point out the weaknesses in our work, not because they hate us or think our work is appalling, but because they want to help us make it better. Or at least that’s the way I like to think about it.

For those new to this way of thinking, I present both my ‘old’ and ‘present’ way of reacting to certain criticisms:

1. The Grammar Police have gone through my paper with a vengeance. Commas, active verbs, plurals, and future conditional tenses are apparently problems for me.

‘Old’ reaction: Obviously, all previous English Teachers were lacking. I suck.

‘Present’ reaction: Find colleague who is also ‘Grammar Police’ and bribe them to read all future papers.

2. “I don’t understand why you XXX…”

‘Old’ reaction: Obviously, my ability to express my thoughts in plain English is lacking. I suck.

‘Present’ reaction: Realize that I didn’t do a good job of explaining that section. Read it out loud to myself. Or have someone read the section who has never read it before. Like my mother. Oh wait, she has heard it like 50 times before…on to the next guinea pig… 

3. “Have you thought about this? Or tried this? What happens?” 

‘Old’ reaction: Great, there goes my weekend running all of this analysis over again. They suck.

‘Present’ reaction: Ah, I didn’t think of that. Doing this extra work will help me better understand the patterns in my data.

4. “… [no comments]”

‘Old’ reaction: I am a genius.

‘Present’ reaction: Hm, I need to find someone to read this who is willing to rip me to shreds. My paper will be much better for it.

Monday 17 September 2012

This is what evidence is made of

Posted by Jean Adams

I recently re-joined the systematic review club. I did a systematic review once. It was fine. I learnt how to do it, I did it, I published it. It was a good learning experience. Certainly good enough to learn that I didn’t need to do another one in a hurry. Or at least I didn’t need to do the nitty-gritty reviewing myself. But things happen and before you know it you’re second reviewer on a systematic review that you just can’t pass on to anyone else.

I love a good (and sometimes not so good) radio drama

There are some jobs that were designed to be Friday afternoon jobs. Jobs that clearly need to be done, but that you don’t need to think too hard about. Jobs that you can do whilst catching up on BBC Radio 4 drama serials on the iPlayer. Reformatting the tables in your latest rejected manuscript to meet the exact, esoteric requirements of the next journal on your list. Adding references from Endnote into Word.

I love a little pile of Friday afternoon jobs. As they don’t require much brain input, I find them easy to churn through and they make me feel unusually productive. Productive, unthinking, with added radio stories. Just what I need to end the week.

In contrast, other jobs are very clearly Tuesday morning jobs. Jobs that need sustained, un-interrupted thought. Jobs where even Radio 3 is intrusive. Drafting the justification section of grant applications. Deciding what exactly is the key message in your latest paper. Working out the analysis plan for the 3MB of data you’ve just received.

I don’t mind Tuesday morning jobs. If I have the time, the space, the right environment and am making progress, I really like the satisfaction of biting off big chunks of Tuesday morning jobs. In fact, high-quality Tuesday morning jobs are what keep me in the job.

I know some people don’t mind systematic reviewing. I know some people even positively enjoy systematic reviewing. These are wonderful people. We need systematic reviews and we need systematic reviewers. I am pleased to count systematic reviewers among my friends. But, really, I am not a systematic reviewer. I’m always happy to come up with the idea and justification for a systematic review on a quiet Tuesday morning. But the real-life screening and data extraction, the bread and butter of systematic reviewing, are not my bag at all.

The problem with systematic reviewing, I have decided, is that it is neither a Friday afternoon job nor a Tuesday morning job. You need to concentrate to decide if the paper you’re reading meets all of the inclusion criteria you’ve set. You can’t possibly listen to radio stories whilst you’re systematic reviewing. But you don’t really have to come up with any great new ideas. The ideas happened way back on a Tuesday morning in November when you drafted the protocol.

I procrastinate outrageously when I am systematic reviewing. I check Twitter. I make a cup of tea. I decide I’m procrastinating too much and that I must not do anything but review until I have reviewed 10 more papers. I wonder what’s happening in the tennis and convince myself that I’ll review much better if I just check the scores and get it out of my system. I think of blog posts I could write.

But, as I am slogging my way through and slowly passing papers from the ‘to screen’ to the ‘screened’ pile, I try and remember that it is systematic reviews that we hope might guide decisions; that this pain is what evidence is made of.

Thursday 13 September 2012

On evidence

Posted by Simon Howard

In my first week at medical school, one of the professors warned that most of what we were to be taught was factually wrong. It was an arresting statement, but it may have been true: studies have shown that textbooks and experts frequently lag behind the evidence, sometimes recommending “treatments” that are actually known to be harmful.

Do Primary Care Trusts do the same? PCTs, like the one I work in, currently commission the majority of NHS services provided to patients in their catchment areas (though not for much longer). Sometimes, academics get frustrated with PCTs for seemingly doing things that either have little evidence, or appear to contradict it altogether. Given that evidence is the bedrock of public health, and given the potential for decisions to affect whole populations, this might seem worrying.


In defence of PCTs, a lot of evidence-based work does happen. Most major pieces of work include a review of the academic literature at an early stage, and follow its findings. The annual Joint Strategic Needs Assessment and regular detailed Health Needs Assessments also take into account published literature and local and national data in a fairly systematic way.

But there are lots of barriers to following the evidence. Books and books could be written on this topic, from the applicability of evidence in the real world to deciding if research is really relevant to a particular population. But I’m no expert, and I’m not going to try and describe anything technical, complicated, or even remotely clever. These are just a few examples of practical barriers to following the letter of the academic evidence in public health.

One huge barrier is – as with most things in life – money. In a world of ever-tightening budgets, an academic’s seemingly reasonable intervention can be unaffordable. As an extreme example, research by the FAA and CAA suggests that three or four lives would be saved in an average aircraft fire if all passengers were provided with smoke hoods. However, the vanishing rarity of in-flight fires, the enormous cost of supplying and maintaining smoke hoods, and the cost of the fuel required to propel them around the world, all make this proposal financially unjustifiable.

Not all examples are quite so clear-cut. Sometimes, instead of choosing not to do something, PCTs try to cherry-pick the best bits of interventions in a way that is almost certainly infuriating to the academics who pioneered them, and possibly less effective in practice. But, sometimes, doing something is better than doing nothing.

Often, there can be a big lag between publication of evidence and its implementation. One reason is the complex contractual nature of commissioning: it’s often difficult to make small changes to services that have already been commissioned. The constant pressure to reduce costs incentivises longer contracts which spread the financial risk, but which also increase the evidence-practice lag. I’m sure it’s deeply frustrating to be an academic shouting “there’s a better way to do this” while services continue unchanged.

There’s also a political element to public health. Decisions to cut services that are no longer supported by evidence are particularly tricky. In England and Northern Ireland, the evidence that cervical screening in women under the age of 25 causes more harm than good has led to a withdrawal of the service in this age group. The clear evidence, combined with clear recommendations from the World Health Organisation and National Screening Committee, hasn’t stopped this becoming a topic for political debate and petition, and hasn’t (yet) changed policy in Wales or Scotland. It seems likely that this political element will play a bigger part in decision making as public health moves to the overtly politicised world of local authorities.

To me personally, the most frustrating barrier to following the evidence is an inability to access it. It continues to baffle me that the NHS doesn’t have anything like the level of straightforward desktop access to the literature that university colleagues have. In the 21st century, it seems crazy that I sometimes have to ask the BMA to take a paper journal off a physical shelf, scan it in, and email it to me as the only practical, cost-effective way to access a paper that’s of general interest, rather than something specific to any individual project.

I think a latent awareness of what’s going on in academia is important in public health. It might not matter so much when someone’s doing a big literature review prior to introducing a new service, but it can help with horizon-scanning, with those little everyday decisions that aren’t worthy of a trawl through the literature, and with planning for the future. This is something we can all play a part in: public health professionals probably need to broaden their awareness of the academic things going on around them, and academics probably need to shout louder about the latest developments in their fields. As an associate member, I’m probably biased, but I think Fuse is great at helping both groups.

Wednesday 12 September 2012

Research, personal information, information governance etc.

Posted by Rose Watson

So, we all know about the Data Protection Act.

We all know that if we want personal information about people (e.g. research participants) then we have to get ethical approval to obtain that information, and that this requires us to state how and where that information will be stored; who will have access to it; and what we will do with it. We have to promise to keep it confidential. We also have to inform our participants of the same details. This is particularly true if we require access to NHS patients or staff for our research.

Firstly, a favourable ethical opinion must be sought, and there is a national system for this: the Integrated Research Application System. Secondly, each NHS Trust that will be involved in the research must validate something called a Research Passport, another national system (invented by Mr Bureaucracy, as written about by Bronia Arnott a short while ago). It basically boils down to this: if you are employed by a Higher Education Institute (HEI), you fill in some forms about yourself, you get a Criminal Records Bureau (CRB) check to confirm that it is OK for you to work with children and/or vulnerable adults, and you undertake an occupational health assessment. This is all signed off by the HEI human resources department and then sent off to the lead NHS Trust Research and Development Department to be validated.

This protection of people is all good. I hope my personal details held by others are well guarded too. Nobody wants to think that the people and organisations we have trusted with our personal details will just go around giving them to anyone willy-nilly.

However, as researchers, we are expected to hand out our personal details on a regular basis, often in duplicate, without any information given to us about how they will be stored, who will have access, or what they will be used for (although it is implied that they will only be used to check you are suitable to work with children and/or vulnerable adults). This is all fine – I expect to give a certain amount of information about myself, and I understand the need to safeguard people (and of course not to bring research into disrepute).

It does worry me though. These are my personal details after all. Of course there are issues with the system not being entirely followed, and NHS Trusts obviously feel the need to cover their backs in case anything should go wrong – hence all of the duplication. Risk-averse society and all that jazz.


However, in the spirit of being risk averse, I would ask that my personal details are also treated carefully – with the same due respect I give to my research participants’ personal details. Unfortunately, my details have now twice been lost in the post in this system. I would ask that people let me know why they are collecting information (especially details which are extra to the national Research Passport system); where they will store it (and please, a bit more information than ‘electronically’ – what on earth does that mean?); and who will have access to it. These are simply the same questions that researchers must answer (and rightly so) when they ask people for personal details.

In short, it is perhaps time we were all a bit more conscious of the personal details that people are collecting about us as researchers. Do they really need ALL of that information? Why? What about how it is transported?

Monday 10 September 2012

Rules for the perfect supervisor

Posted by Lynne Forrest

We’ve previously had two blog posts explaining what makes a perfect research student. It was hard to disagree with any of it really, but it’s a two-way relationship, and in the interests of fairness we students now get to respond and say what we require in the perfect supervisor.

Disclaimer: these traits are desirable in a generic ideal supervisor and any resemblance to any actual Fuse/IHS supervisor should not be implied. The views and experiences reported here reflect a consensus of opinion derived from the student body and are not necessarily mine (I’d really like a reference and a job at the end of my PhD…)

So, assuming that we’ve now all become the perfect research student, what can be done to further improve the research experience? Although comments ranged from ‘my supervisors are brilliant’ to ‘my supervisors constantly have me in tears!’, some common themes did emerge.

These are the things we think you should do to become the perfect supervisor:

1. Set ground rules at the first supervision meeting so that everyone knows what is expected of them.

2. Don’t spread yourself too thinly. Although having a ‘big name’ supervisor can be useful to students in terms of being able to utilise your experience, knowledge and connections, if you are always too busy to deal with us then this is somewhat negated. Possibly appoint a more junior colleague as the main supervisor.

3. Prepare for meetings and actually read the documents that the student sends you. If we follow the rules and send a document well in advance but you still don’t read it, this is hugely frustrating. There is a power imbalance in the PhD student/supervisor relationship that needs to be acknowledged, but not exploited. If we keep to our side of the agreement, then please can you do the same?

4. Be supportive, approachable and understanding.

5. Be constructive and remind the student that your comments shouldn’t be taken personally.
Criticism is fine as long as it is directed at the work rather than the person. A supervisory meeting is not an episode of ‘The Sweeney’ and you need never adopt the ‘bad cop’ role…(unless, of course, this has been agreed in 1.)

6. Promote a healthy work/life balance. 

7. Forward any opportunities that you think might be relevant to the student. Please don’t just assume that we’ll know what is possible. For example, it was suggested that students should offer to supervise an undergraduate dissertation but I don’t think anyone knew that was even an option for PhDs. It’s hard to be proactive with things you know nothing about. Similarly with teaching opportunities.

8. Deal with each student as an individual. As one student eloquently put it, ‘we’re like unique little snowflakes’! A one-size-fits-all approach just doesn’t work here. A mature student may need different handling to a younger one. However, having said that, you also need to…

9. Ensure equality of opportunities. Make sure that ALL students know what is available.

10. If you are not the main supervisor you still need to turn up for meetings occasionally. It’s very embarrassing when a student says hello to you in passing and you have no idea who they are. If you really aren’t interested in doing it then please hand the role to someone else.

11. Give lots of clear feedback. And if possible always try to end a supervisory meeting on a positive note. If your student constantly exits in tears then something has gone very wrong somewhere…

12. Sort out any supervisory disagreements outside the meeting. And don’t talk about other stuff over your student’s head. We only get an hour a month so let’s talk about us and our lovely project…

And I could go on and on… there was lots more! Do you agree? Please feel free to comment.

Thursday 6 September 2012

The value of being an imperfect research student

Posted by Heather Yoeli

In their posts of July 23 and 30, White and Adams provide a rigorously evidence-based summary of how to be the excellent research student. I found it a beautiful, if slightly disconcerting, read: carefully structured, convincingly argued, mindful of its chances of being published in the BMJ and (I assume) flawlessly citation-managed and submitted conveniently in advance of its deadline. In their two-part analysis, however, White and Adams neglect either to verify or to justify the imperative of their paradigmatic implication. Or, in less pretentious-sounding academic twaddle... they don’t really tell us what’s so brilliant about being the perfect research student.

And therefore, I would like to respond by proposing that the archetypal Perfect Research Student may not be doing any favours to him or herself or to his or her participants.

To begin with, I will critically evaluate the semiotics of the use made of their Lisa Simpson image. Lisa, as all fans of The Simpsons will know, is a perfect student; bright, attentive and thorough. Her brother Bart is, by contrast, somewhat imperfect; whilst no less intelligent and creative than his sister, he has a tendency to be impetuous, slapdash and prone to sending his supervisors things he is still working on.* And yet, outside of the classroom sphere, it is Bart rather than Lisa who displays the more competent social skills and interpersonal confidence; he has a relaxed, confident and slightly zany manner of engaging and communicating with others. He would make an excellent ethnographer or qualitative interviewer. Lisa, by contrast, has spent too much time at too tender an age seated with her laptop precariously balanced upon a pile of textbooks to know how to talk to anyone other than her laptop. 

And moving The Simpsons to the personal, I have learned through my ethnographically qualitative fieldwork that participants often respond more readily to imperfect than to perfect researchers. Ethnography is about regarding participants as real people, and about building relationships with real people, and real people are inherently imperfect.** I have been carrying out fieldwork on the Cowgate estate in Newcastle (glances distractedly up from laptop to wave to everyone she’s been chatting to) which is a community in which most thirty-something women possess more useful aspirations than to join the hierarchy of public health academia, and therefore a community which regards with confusion and cynicism the archetypal Perfect Research Student.*** I have therefore learned that participants find it easier to relate to me when I am imperfect; when, for example, I arrive at a meeting with half a bowl of my daughter’s porridge (or even half a tummy-full of my son’s puke) adhered to my leggings, or when I get halfway home with a participant’s gloves in my bag. Whereas most of my participants have had no personal experience of sitting in a postgraduate supervision session, many of them have experienced a stroppy toddler refusing her breakfast or a cheerfully regurgitant baby projecting his breakfast back towards the floor, and all of them will have done something as brainless and daft as walking off with someone else’s gloves because all people everywhere have done something similarly brainless and daft. 
 
Imperfection, therefore, is what connects us to our humanity. And our humanity as researchers is what connects us to other people. And being connected to others is a vital component of all qualitative research.


*Admittedly, the last bit isn’t true. It’s merely what I do on an almost monthly basis, and White and Adams tell me I shouldn’t.
**All of the clauses in this sentence should have been evidenced and referenced. I have neglected to do so merely to exemplify my own imperfection.
***Again, this statement should have been verified. It isn’t. As Bart Simpson might say, don’t have a cow about it, dude.

Wednesday 5 September 2012

Good CoP, bad CoP

Posted by Janet Shucksmith and colleagues

A few dogged staff members in Fuse had been talking for a while about the need to share experience of Knowledge Exchange (KE) in public health across the UKCRC Centres of Excellence. “After all, we are the Centre for Translational Research in Public Health,” they would say.

And so, with all the Centres descending on Durham for conference and summer school fun, this was an opportunity not to miss.

We were going to attempt to form a super-group – well, not quite; more what is known as a Community of Practice (CoP) – inviting anyone (not drawn away by one of the other workshops) with an interest in KE.


Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.

Excitement grew. What could we share? What could we learn from one another? These questions were newly critical with the second phase renewal bids for centre funding being drafted. Here was the chance to see what crept out of the woodwork when the invitation was issued.

A dozen or so good souls heeded the call. Introductions first, and the meeting immediately assumed the air of an Alcoholics Anonymous gathering… “I’m Stephanie and I am a health statistician…” – this in a very apologetic tone. Never mind, Stephanie. We can see past this failing and will be able to glimpse your inner beauty. The group thus reassured, it turned out that several participants were similarly (dis)abled. What was going on? Was this some crazy research version of the Mystery Shopper schtick? Anyway, it looked like there were enough fuzzy qualitative people to make the group viable, so on we sailed.

CoPs come in all shapes and forms but have the same essential characteristics. They bring together like-minded individuals keen to share thoughts and ideas on a specified theme, often to share resources, news and updates, as well as to argue and reason together to learn more about the chosen theme. So you can have a CoP on almost anything. Try Google if you don’t believe us. Best of all, the community of practice acronym opens up a whole new range of possibilities. Taxocop is, disappointingly, not a virtual community dedicated to improving the ability of its members to swindle the Inland Revenue, but rather a group of people who like putting things in taxonomies. Hmmm, I think I know where I’d put them in my taxonomy of strange colleagues…

Discussion swirled around, producing some new vocabulary for a few of us… ‘hive thinking’ and ‘hackathons’ as ways of brainstorming practical problems in the field of computer software (not just for the geeks, then).

We emerged with an agreement to continue and develop the CoP. First steps after a note of the meeting involved setting up a virtual community. Kevin Balanda from the Northern Ireland group volunteered. ‘We need to talk about Kevin’* was the comment on the following Monday morning after the first attempt at this went somewhat awry, but we quickly recovered and are now locked into the Health Well website and starting to explore how to use it. We will meet in the too, too solid flesh at least once a year at Centres of Excellence conferences, and may meet for events in between when we have specific issues to ‘hack’ over. It has in the first instance set us off exploring the importance of policy and practice partners being able to access the sorts of journal and report resources in which public health evidence is embedded ... Watch this space!

If you are interested in joining the Community of Practice please contact Kevin.Balanda@publichealth.ie 

*It is thanks to the hard work of Kevin that we now have a forum for discussion

Monday 3 September 2012

Achieving impact

Posted by Jean Adams 

There are two things that research is judged on in the UK at the moment – the quality of ‘outputs’, which means journal papers in my neck of the woods, and ‘impact’. The official definition of impact is “any social, economic or cultural impact or benefit beyond academia”. In public health, the sorts of things that might count are a change in local or national policy or practice that followed from your research findings.

It all sounds so sensible, doesn’t it? Of course we should specifically value research that has a positive impact on the everyday lives of the people who, by and large, pay for it through their taxes. But aside from the obvious implication that just knowing something we didn’t know before is somehow less valuable than ‘impact’, my experience is that achieving impact is a serendipitous thing – something that seems to be as reliant on being in the right place at the right time as on doing high-quality research.

Research impact

I spent two hours last month defending a piece of work that we published earlier this year. It was some of the most intensive questioning I’ve experienced on my research. A bit like my PhD viva, but without the suit and focusing on just one 3,000-word paper, rather than three full years of work and a 50,000-word thesis.

The paper was about TV food advertising to children. It had come to the attention of Ofcom. Our findings were at odds with one of Ofcom’s own reports and so they were keen to discuss differences between our respective methodologies and how we might reconcile our conflicting findings.

It was an interesting experience and the people from Ofcom had obviously done their homework – they’d read our paper in great depth and wanted to talk through every sentence, and every cell of every table. As I say, it was a pretty intense couple of hours. I think we came out okay – sure our research had some limitations, but so does all research – compromises have to be made. But that doesn’t mean we don’t stand by our findings. They were kind enough to leave off pointing out how rude we’d been about their research until the last 10 minutes or so.

As the meeting was winding up, I took the chance to ask: “so what happens next?” Because when you get a chance to speak with people who work for national policy organisations, you kind of think you should take the opportunity to try and somehow make a contribution; to achieve impact.

My polite enquiry was met with a shrug and: “well, this is not something that’s on the policy agenda at the moment.”

Our research findings reflect the final recommendations of NICE public health guidance on prevention of cardiovascular disease, as well as the position of the British Heart Foundation, the World Cancer Research Fund, and the Scottish Public Health Minister. But it’s not on the policy agenda.

And that, I fear, may be that. We did some work that certainly could achieve impact. We even got the chance to speak to some people who might have been able to help us enact that impact. But it’s not on the policy agenda.

Serious question: what else can we do to achieve impact?