
The time I didn’t get published in the BMJ

1.

I gave a rather fiery talk about patient engagement at a conference recently and posted the transcript and video here on my blog. I didn’t intend for anything specific to happen—I just like to post my work. Then, surprisingly, it took on a life of its own. It was shared widely on Twitter and discussed with passion, especially among patients and social scientists. It seemed to resonate with people so much that the BMJ, the esteemed British Medical Journal, contacted me about writing an essay!

I was so excited. I imagined the fame and fortune that would come of being published in the BMJ—all the conferences, the consulting engagements, the research papers, book chapters! I also imagined the fun blog post I would write, linking to the article and maybe sharing a humblebrag about how I couldn’t believe the BMJ had contacted little old me.

2.

This all came together with perfect serendipity. In the initial discussions, I was told of their keen efforts to engage patients in all kinds of ways—as reviewers, editors, advisors. And I knew I could produce something great. I’ve been writing essays for a decade or more. I completed graduate studies, wrote a book, edited a collection of essays on ethics, contributed a chapter to the same collection, and presented countless times at conferences, in graduate classes, and to professional groups. If ever a patient/caregiver could produce work fit for such a prestigious publication, surely it would be me.

Let me tell you, I laboured over that thing. I got lots of encouragement from a BMJ editor, with whom I exchanged emails, had a Skype call, and reviewed the arguments one by one. My assignment felt clear, if not exactly simple—keep the message essentially the same, but cut the essay in half and ensure that readers can follow along despite the massive edits. I knew it wasn’t perfect when I emailed it to the editor a couple of weeks later, but it was good enough for a first look.

When I received the essay back in late December, it had been commented on by three reviewers—the editor and two others. Ah, yes—peer review! I was excited to get comments, but I also knew to brace myself for criticism. I expected disagreement, but frankly I wasn’t prepared for what I experienced. The comments showed me that the reviewers—one especially—substantially misunderstood the subject of my essay. I attribute some of this to professional and perhaps cultural differences in how ‘patient engagement’ is defined in Canada and in the UK.

The reviewer seemed to think I was talking about PPI in research (Patient and Public Involvement—INVOLVE gives a nice overview). In the UK especially, PPI is an approach to research that also encompasses specific methodologies and frameworks. It was certainly not my intention to comment on this particular set of practices. This reviewer, a scholar of PPI, read my essay differently. The tone of the comments was huffy and condescending. They even provided a list of references, ostensibly so I could read up on PPI, since clearly I didn’t know what I was talking about.

But, whatever. I hear that peer review can be like that. I contacted the editor I’d been working with and expressed my concern about the misunderstandings. If I clarify this in my essay, I asked, will it lose relevance for the BMJ? If so, I should stop now. The editor was encouraging: don’t worry about it, carry on, this is important commentary and broadly applicable. I was invited to disregard comments I thought weren’t relevant or helpful and to explain my rationale for disregarding them in the cover letter.

I forged ahead. I spent another week reworking the essay, clarifying the misunderstandings and adding personal anecdotes to better ground my commentary in my experiences as a patient/caregiver. I submitted the essay mere hours before the deadline I’d promised to meet. Still flattered. Still excited.

3.

Two weeks later, I got The Email. Not from the editor I’d been working with, but from another with whom I’d only had brief contact previously.

There were some blunt formalities, followed by this:

“We felt the piece was rather long for what it said, that in places it seemed repetitive, and that it was a bit more anecdotal and vague than we would accept for an Essay in BMJ.”

It was then suggested the topic might be of interest to the Opinions editor, who was cc’d on the email.

Were these fair comments? Well, the talk on which the invited essay was based was indeed long and repetitive. That’s how lectures on complex ideas work. I had done my best to whittle it down and weed out the repetition, and I think I did a good job. The essay was anecdotal because, well, it was based on my experiences. My wide-ranging observations that were now criticized as vague had previously been deemed broadly applicable.

This was so confusing. They had worked with me to produce something rather specific, then rejected the essay for being exactly what they’d asked for. The casual wording of the rejection email made the critique hard to refute—they basically said thanks but no thanks.

4.

So there it was. The flattery I’d allowed myself to feel was now just embarrassing. What was I thinking, to imagine it would be that easy to get published in the BMJ? The sharp disappointment lasted a day or two but soon faded enough that I could start to think about other things, like what to do with this now-tainted essay. Still, I had a nagging thought: did I just experience the exact thing the essay was critiquing? Was I asked to contribute something meaningful, from a patient perspective, only to have it sidelined as “anecdotal”?

This experience has raised certain questions for me: what, and whose, knowledge is considered legitimate? And how is that determined?

I had given the BMJ what they’d asked for—an adaptation of an existing lecture, valued (presumably) on the basis of a unique patient perspective. This lecture, like most of what I write, was easily accessible to them; I publish everything here on my blog unless restricted by agreements. I obviously write from my experiences as a patient/caregiver first and foremost. Therefore, the thing I submitted was exactly what they should have anticipated. Yet, when put through the BMJ’s legitimization engine, it failed.

Here’s my speculation: The established process for determining legitimacy—peer review followed by opaque machinations of editorial discretion—showed that (my) patient experience and analysis was simply not good enough, not legitimate, and in fact should be more appropriately assigned to the patient ghetto called “Opinion.”

I shouldn’t be surprised. The mechanisms used to determine what and whose knowledge is legitimate, letting through only “real science” done by “real scientists,” are designed to filter out exactly what I produced. In this definition of “real science,” accounts of personal experience must be wrapped in accepted methods of qualitative research (which, by the way, the BMJ considers an “extremely low priority”). Quotes and anecdotes may be used, but only to animate or illustrate a particular data point from the “real research.” In this generally accepted definition of “real science,” something a person says, on its own, is merely an opinion.

The funny thing is I don’t dispute this. “Prove it,” I always say. My whole life I’ve believed that science = truth. And now, new ways of thinking suggest we ought to hear from patients to inform how we apply what we know and to possibly lead research in new directions. Which of course is fine—this shouldn’t be a challenging idea. But as “real science” tries to incorporate the voices of patients beyond merely surveying our opinions, while trying to maintain its stance as the producer of the one and only truth, the whole enterprise falls apart.

These efforts to include patients appear in all the forms I’ve been talking about lately—patients in advisory roles, patients as research partners, patients as co-designers of services and programs. Some might say we’re taking over! But I think nothing could be further from reality. As my talk pointed out, I think we’re invited to these roles because of what we represent, not because of what we accomplish. We might be helpful at times, but we have no more power than we ever had.

5.

I understand that the BMJ has an editorial process. No doubt they ingest a lot of material, put the good submissions through their review and editing process, then choose a select few for publishing. But let’s be clear: I didn’t knock on their door asking for them to please take my essay. I responded to their invitation in good faith, worked very hard to comply with their rules, then in the end wasn’t taken seriously—which is the exact critique I’ve been making of patient engagement efforts all along.

I keep mentioning that this whole episode was initiated by the BMJ because I think it’s an important detail. I often remind people that “patient engagement” is not something that patients do on their own. It’s an institutional endeavour. Granted, there are indeed engaged individuals who doggedly pursue what I think of as “patient justice”—their work is radical, disruptive, maybe even emancipatory. (For example: Zal Press at Patient Commando and “e-Patient Dave” deBronkart.) However, I think the vast majority of patient participants in “patient engagement” are doing what I did—responding to an institutional invitation. We therefore become enrolled in the institutions’ work, on their terms, for their purposes.

If I hadn’t felt so flattered I might have seen this coming. Of course they couldn’t take me seriously. A scientific journal lives and dies by its adherence to (at least the perception of) a rigorous process. Peer review is what confers legitimacy in research and scientific publishing. Say what you will about its validity as a mechanism for determining legitimacy (here’s a critique of peer review, and here’s another one)—it is indeed a well-established convention to which everyone tacitly adheres. Because of this, patient stories and viewpoints traditionally have not found a natural home in scientific journals, other than perhaps as narratives in case studies, or as opinion pieces.

But change is in the air. Institutions are now confronting the exciting but complicated idea that patient perspectives should actually form an integral part of expanding society’s understanding of medicine, health, and healthcare—not just as opinion but as a critical perch from which to pursue research, develop drugs and devices, provide diagnostic tools, deliver treatment and care, etc. Patient experience is knowledge. Patients are experts. That’s what they keep telling us, anyway.

I imagine this is what the BMJ was aiming for when they contacted me—to demonstrate their patient-engagement-ness. And because my message is essentially anti-establishment, they would get to be a bit edgy, too. Their earnest Patient Partnership campaign outlines all kinds of good intentions, but I can’t help but shake my head at the fuzzy language:

“We see partnering with patients, their carers, community support networks, and the public as an ethical imperative essential to improving the quality, safety, value, and sustainability of health systems.”

Cool. But who gets to define what kind of “partnering” is meaningful? Why is it an ethical imperative? In what way is it essential, and how do you know that partnering has improved anything?

6.

So now this essay has morphed into a version of that (rejected) essay. It can’t be helped, I suppose. This experience is a perfect opportunity to point out, once again, that nice people and good intentions alone won’t get us anywhere. As long as institutions (and publishers) continue to expect patient perspectives to meet the criteria of “real science,” our work will never be seen as legitimate. Patient voices, even those that now advocate from the inside of the institution, will continue to be relegated to the kids’ table at the dinner party.

And before you ask—yes, I’ve considered the obvious. Maybe my essay really was shit. But if they didn’t want a patient’s perspective based on personal experience, it doesn’t matter whether it was good or not. It was doomed from the start.

Jennifer

5 Comments

  1. Jen, I’m sorry that happened. You’re right, there seems to be a real mismatch between what they want and how they want it. I hope you get to use your essay. I would love to read it.

  2. Fascinating – thanks a million for sharing this weird experience! Two thoughts come to mind:
    First, a friend in university used to say (tongue in cheek) that there should be only one newspaper because there was only one truth!
    Second, it might be interesting to ask the BMJ whether the ‘peers’ in the peer review they touted were people in the roles of ‘patients’ or ‘parents’? Was this TRULY peer review?!

    (PS… I am fine with this being shared!)

    • Thanks for commenting, Peter! They have an open peer review policy, which means I know who the reviewers are. One is certainly a ‘patient peer’ who is also one of their editors, one is a PPI researcher (not sure about personal background), and the other might be described as an executive (has a PhD) for an online patient community, which runs as a business, not a charity. It’s an interesting question as to what constitutes ‘peer’ in this case. Goes back to my question of legitimacy – maybe patient reviewers aren’t legitimate either?

      Edit: Just looked up some more info – this particular PPI researcher could also be considered a ‘patient peer.’ So, not sure what to say. The review process was unpleasant but it’s not where things got held up. Maybe it’s that the patient people and the ‘real’ editorial people do not agree on what constitutes legitimacy.

  3. Resonates with so many of my experiences – I was even published recently as a co-author on lung cancer targeted population screening with two eminent leading clinicians, yet the criticism of it has been that we are biased! Despite the clinicians explaining exactly who we are and what our involvement was (which on my part has been entirely voluntary), I increasingly find that I am invited to take part in all manner of events/papers/committees to tick a box, but follow-up to anything suggested does not happen, or takes so long to even be acknowledged that I lose the will to live. I wonder if we should change the word to ‘people’, as many clinicians/researchers see us as samples, scans, tests, numbers, yet when something happens to one of their loved ones or their profession, it’s as if they expect a completely different service.
    As a lay member on an NHS board, I have repeatedly cited specific examples of poor patient care that I have witnessed or experienced personally, but this has been dismissed in discussions as ‘we can’t raise that with the Trust as it’s nothing more than hearsay’. Like you, I was trained to always cite examples when complaining about or complimenting anything; it seems as if health professionals expect a different kind of truth from their service users. We have to crack this one if outcomes are to improve for so many conditions. Communicating with patients in language/terminology they may understand would be a good place to start.

    • Hi Janette – so glad to know I’m not alone! I heard another patient say recently that the way her contribution is pushed aside is being told “they can’t get into the weeds.” I like your description of “different kind of truth” – apparently some things are truthier than others :)
