If you had asked me, before I began my PhD, to define research ethics, I’d have told you that ethics is about morals and values, about how and why we conceptualise, construct or differentiate right and wrong and about what we do with our judgments or perceptions of right and wrong – and that research ethics is thereby about how we promote the personhood and dignity and human rights of our subjects, and about how we know that what we’re doing isn’t going to harm them. I was idealistic, principled, and, fresh from a Religious Studies degree, prone to expressing my every academic thought in an 85-word sentence, and prone to using too many commas. These days, I’d define research ethics more succinctly, and less naively: it’s about placating the insurance companies by assuring your host institution that you won’t do anything that’d get them sued, and it’s about following the rules.
Within health academia, the contemporary drive towards research ethics developed primarily in response to the horrific atrocities committed during the clinical and epidemiological ‘research’ undertaken by Nazi doctors in the World War II concentration camps. In 1947, the Nuremberg Code was published, and in 1964, the Helsinki Declaration, most recently updated in 2008. Both of these documents provide clear standards and principles designed for researchers to interpret and apply with regard to their own contexts and situations, but never a how-to guide to manage every conceivable scenario. In conceptual terms, this is known as teleological ethics: advocating doing what is right, however this is to be achieved. But many people criticise teleological methods for their inherent lack of common-sense safeguards. For example, it’s great that the Helsinki Declaration tells us (Section A5) that we have a particular duty to those underrepresented in research, but how far can I go and what can I do to find the underrepresented of Cowgate?
Josef Mengele: one of the leading Nazi doctors in the medical research programme at Auschwitz
Within the UK academic community, research governance has appropriated responsibility for ethics, ensuring that every university, NHS Trust and Local Authority produces a detailed handbook of rules mandating what researchers should do in absolutely any situation they may encounter. In conceptual terms, they provide a deontological ethic: stipulating that the rules are followed, because rules are rules and the organisational insurance provider has set their premium on the understanding that the rules will be followed.
The problem, however, with such deontological minutiae is that they reduce research ethics to a vast set of forms to be completed and a prescribed sequence of actions to be performed. By depriving researchers of the capacity to think for themselves about what might be right or fair or appropriate in their own particular study, such handbooks prevent researchers from using their creativity to respond innovatively to the most vulnerable of participants.
Throughout the history of research, ethics has been an evolving, changing discipline, always responding to the new challenges it is posed. A few years back we were considering whether it was ethical to accept typewritten student essays (in case somebody else had written them), and a few years before that we were considering whether it was ethical to accept women into medical schools (because the academic pressure might disrupt their menstrual cycles). Today we’re considering gene therapy and social networking, and the lesson from the history of ethics is that the generation below us will see no problem whatsoever with mitochondrial gene transplants or Facebooking study participants – assuming, of course, that mitochondrial disease and social media still exist. But when all of our ‘ethics’ comes distilled in a university or NHS-approved directive of ordinances simply to follow, and when we know that we’ll never get our proposal agreed without doing exactly what we’re told, it can be hard to innovate, or even to think. For example, I know that I should anonymise all data (Northumbria University Research Ethics and Governance Handbook, p.20), but am I not permitted to make an exception for the participant who says she will consent to participation only if I agree to use her real name when quoting from our conversations in my PhD thesis?
Or, to put it more succinctly, we’re all spending too little time thinking about how we can do the very best for our research participants because we’re all wasting too much energy poring over the rulebooks and filling in the forms.
Am I right? And if so, what does it mean that I’m right? And if I’m wrong, what do you mean? Discuss.