14 April 2014

Why Things Are This Way: The Excessive Use of Passive Voice in Science Writing

It has been some time since I last posted here on the McCulley/Cuppan blog. Just like some other things in my life, I simply got out of the habit. So, like my physical workouts, I am looking to make blog posts a more regular part of my life.

With that said, on to the question I want to consider in this post: why are most research documents that cross my computer screen written in the passive voice? Many, if not all, of the clinical research reports, much of the regulatory documentation, and, painfully, all of the clinical research protocols I see are written in the passive voice.

Why do I even bring this topic up? Well, some researchers in the academic community feel that the use of the passive voice (verbs that do not indicate who or what is doing the action) can lead to writing where the sources or agents of action are not clear. This is my big criticism of protocols, a class of documents centered on characterizing events and agents. 

These writing researchers also comment that repeated use of the passive voice results in texts that are “flat and tedious” to read. From my very subjective perspective, I have to agree with that last comment, as I have certainly seen my share of documents that fit this description.

It is worth noting here too that reading research shows most readers prefer active voice, especially readers for whom English is not their mother tongue.

The point of this blog post is not a grammar lesson (trust me, I’d never do that, as I almost fall asleep from boredom when someone mentions the word grammar). But just to be sure we are all on the same page about what I mean by ‘passive voice’ versus ‘active voice’, here is my explanation in a nutshell:
  • In the active voice, the agent performing the action is the grammatical subject of the sentence and the recipient of the action is the grammatical object.
  • The passive voice switches this around, making the recipient of the action the grammatical subject and the agent the object, if the agent is included at all.

Passive sentences that I read by the truckload contain phrases such as these:
  • The three arms in the study will be…..
  • Training on diary completion will be provided to patients…..

The worst abuse of the passive voice is the heavy use of state-of-being verbs combined with a past participle: (is, was, be, been) + (past participle), as in the following examples:
  • PDGF and its receptor (PDGFR) have been implicated in the pathobiology of pulmonary hypertension in animal studies……….
  • Wonderdrug has been shown to be an effective treatment in XX disease.
  • The molecular constructs most effective for PI3K/AKT/MTOR pathway inhibition were shown to be.......
  • Mortality at 24 weeks after first dose was to be ascertained for………
Unfortunately, it is far too common in protocol writing for the author to use the passive voice and fail to state explicitly who will perform the act. Often the all-important agent is missing from the discussion altogether, and the reader must infer who is responsible for a specified task. In many instances this will not be problematic, but in other cases it can cause confusion that leads to inquiries or missed tasks.

So back to the point of this discussion. I ask this question about the habitual use of the passive voice because many science journals, such as Nature, and most of the leading style guides, such as the Chicago Manual of Style, recommend the active voice over the passive. If the top-flight journals and the leading style guides recommend writing in the active voice, then why do people working in pharmaceutical and medical device companies still demand the passive voice as the default style?

When I ask people in the workshops I teach at various pharmaceutical and medical device companies why they slavishly write in the passive voice, the answers range from “I did not know that is the style I was using” to “This is the way science must be written.” Ignorance is never an acceptable excuse, but invoking the need to meet some mythical style standard is absurd. The slavish use of the passive voice in science writing is a self-maintained, mutually committed act drawn from a fairy tale. I use the term fairy tale here with careful consideration. Fairy tales are about imaginary worlds. Here too we have a situation where people imagine what readers want and prefer, and routinely invoke the imaginary in defense of their personal or organizational belief system about high-quality scientific writing.

From my perspective, the two leading reasons that the passive writing style is so broadly applied are:
  1. Many writers have only read documents written in this style and are formally and informally conditioned to replicate it. A prevalent form of conditioning is teachers in the sciences, at all levels of the academic system, who demand that their students write in the passive voice.
  2. The power of precedence—what we did previously was accepted or published; therefore it had to be good. So make this document look, taste and smell like the previous ones.
The bottom line is that writing in this style is a habit, and a bad habit at that.

I am working my way through some papers addressing the psychological mechanisms that form and maintain habits in work groups and organizations. Now I am looking for references related to the meaningful steps necessary to mitigate or eliminate these habits. When and if I find an effective elixir for this bad writing habit, I will let you know.

05 March 2013

Time to Reconsider How You Write for the Regulatory Reader

With the continuing expansion of on-screen tools for analyzing, manipulating, and using technical data, it is worthwhile to take a moment and consider the implications of how we think about documents and document design in 2013.

Let me use as the starting point for this discussion my position that in the world of regulatory writing it is clearly time to retire the classic notion of a document that has been around since the Irish monks hung out in European monasteries scribing the ancient texts in Latin on bound pages of vellum. So, stop thinking about and judging documents as something going from page 1 to n and constrained by the classic measurements of “Letter size” and “A4.” 

Electronic regulatory submissions that are compiled for viewing on screen using a tool like Global Submit Review must be characterized by definitions decidedly different from what worked for an Irish monk. Think three-dimensional.

Now you must think of documents in the manner of Suzanne Briet. In 1952, Briet created the following working definition for a document—“A document is the physical evidence that supports a fact.” I suggest her definition describes the mental model we must now apply to regulatory submission documents in 2013. Documents are now defined by the user, not by you. Why is that? Because of the tools readers now use to navigate electronic documents.

The culture of reading in the regulatory domain has really evolved over the past five years. As Christine Rosen states in her article “People of the Screen,” published in The New Atlantis:

“Every technology is both an expression of a culture and a potential transformer of it. In bestowing the power of uniformity, preservation, and replication, the printing press inaugurated an era of scholarly revision of existing knowledge. From scroll, to codex, to movable type, to digitization, reading has evolved and the culture has changed with it.”

It is important to keep in mind that research shows there is a distinct change in behavior when reading on-screen versus reading within the framework of “Letter size” and “A4.”

Some of the differences are relatively subtle and others are rather profound. For instance, the research of cognitive scientists like Mary C. Dyson, Andrew Dillon, and many others has looked closely at how physical text layout affects on-screen reading. Papers from Megan Fitzgibbons, Stuart Moulthrop, and others consider behavioral tendencies and strategies for searching large, complex documents for specific pieces of information and for using hypertext attributes (I'd call this the working application of Briet’s definition of a document).

At this point in time I have met few in the medical writing industry who ever consider how their narratives and tabular displays impact the on-screen user. They just keep on building document content like they always have. I have yet to hear a medical writer talk about how paragraph length and density of detail impact on-screen reading speed.

I have met fewer still who consider the implications of reading pathways through and between documents, even though on-screen readers now make considerable use of bookmarks, hyperlinks, and key-term searches. They just keep on publishing to only the third-level headers while the documents actually subordinate content to fifth- or sixth-level headers, and many writers toss in hyperlinks without much regard for the utility of the link to the end user. Keep in mind this quote from Megan Fitzgibbons:

“Literacy is a key component of the information seeker's side of the equation, because the abilities to locate, read, and evaluate texts are the basis of successful information gathering processes.”

The net-net I am trying to bring across here is this: to form effective principles of document design, information professionals must reconsider regulatory agents’ needs, attitudes, and ways of working with documents in the “electronic document age.” They must also understand the technical capabilities of the document interface tools in prevalent use by regulatory agents, so that they can design documents today that will meet the changing needs of the reader 12 to 36 months from now.
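
A small, concrete footnote to the point about bookmarks: before a document ever reaches the regulatory reader, a writer can check how deep the bookmark hierarchy of a published PDF actually goes. Here is a minimal sketch in Python, assuming the open-source pypdf library and a hypothetical file name, offered purely as an illustration rather than a recommended tool:

  # Minimal sketch: report the deepest bookmark (outline) level in a PDF.
  # Assumes the open-source pypdf library; the file name below is hypothetical.
  from pypdf import PdfReader

  def outline_depth(outline, level=1):
      # pypdf returns the outline as a nested structure: Destination objects are
      # bookmarks at the current level, and a nested list holds the children of
      # the bookmark that precedes it.
      deepest = 0
      for item in outline:
          if isinstance(item, list):
              deepest = max(deepest, outline_depth(item, level + 1))
          else:
              deepest = max(deepest, level)
      return deepest

  reader = PdfReader("clinical-study-report.pdf")
  print("Deepest bookmark level:", outline_depth(reader.outline))

If the script reports a depth of three while the document itself subordinates content to fifth- or sixth-level headers, the on-screen reader is navigating with an incomplete map.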

31 December 2012

Thoughts on the Knowledge-managing Medical Writer


I have written from time to time here and elsewhere regarding my vision for what should constitute a good working definition of the regulatory medical writing profession in the 2000s. I thought I’d end the year by sharing a few more attributes that make up my working definition. I also want to contrast this vision with my observations of what I see as the most common mind-set for the medical writing profession. I place these considerations before you to see whether they resonate or perhaps play but a very flat note. Comments and counterarguments are always appreciated.

I argue that to succeed in 2013 and the years forward, a regulatory medical writer must see their role as that of a knowledge manager, not that of a writer—that is, not a scribe of clinical narratives or descriptive text in various types of documents. To play off the work of Metz Bemer in her article “Technically It’s All Communication: Defining the Field of Technical Communication,” I too suggest that many aspects of knowledge management are indeed a sub-discipline of regulatory medical writing. So in this piece I will continually refer to the “knowledge-managing medical writer.”

My observations of many medical writers in many organizations during the past 10-12 years suggest that few see or operate in a role beyond “a scribe of clinical narratives or descriptive text in various types of documents.” They do not act in the manner I assign to the knowledge-managing medical writer. This older vision of a medical writer worked in the past, and it may still work in certain situations and with certain document genres today, but it will fall to the edges of the road as we move forward in time.

Over the years I have become acquainted with some individuals who really do recognize the importance of knowledge management and who represent greater value than “just a scribe.” I do not use the term “scribe” derisively, but in the context of how many clinical development or regulatory submission teams and organizations treat medical writing resources—they are an afterthought, brought in late in the development or strategy life cycle to attend to the matters of writing up the details. Some writers may be wrongly characterized by such a model, but then again many are correctly cast. I see the situation as similar to what played out in the Irish monasteries of the 10th century—monastic scribes working in the background, arduously producing manuscripts.

I argue that as part of knowledge management, regulatory medical writers at a minimum have a role in advising teams on the effective design of documents for the highly selective professional reader at regulatory agencies. I know there are some reading this piece who fully understand the implications of what reading research shows—and shows with ample evidence, I might add—that successful research reports and submission dossiers are predicated on more than just good study results and an adequate template (though a good template does make the task of writing significantly easier). A brief digression here: I do not consider whether a submission package is or is not approved to be the only parameter of success, but I am not going to describe my working definition of “successful documents” in this piece.

Okay, back to the point—most reading this piece will likely agree that successful documents routinely require more than just someone sitting at the computer who has an advanced degree in the life sciences, great attention to detail, good analytical skills, above-average MS Word skills, and a good command of the English prevalent in medical writing (by the way, I suggest these attributes make for a reasonable working definition of a scribe). My observations suggest that not many writers take on the role of advising their teams on effective document design, a role that I fully associate with the knowledge-managing medical writer.

I suggest “successful” clinical research reports require close, carefully orchestrated, and well-articulated design intentions, and regulatory submission documents demand significantly more consideration of such intentions within and across the “corpus” of the various Module 2 submission documents. Therefore, I argue that medical writers need to carry the “knowledge torch” regarding how to consider and then act out various design intentions in regulatory submission documents.

Now please keep in mind, by “design” I do not mean making a document look pretty or comply with a template. These are mechanical attributes of a document. By design, I mean how one builds and shapes arguments and how one creates division and hierarchy to convey meaning to the busy professional reader working through complex, technically demanding sets of data and information. These are among the semantical attributes of a document.

My observations suggest many writers retreat from such tasks. They are very comfortable staying in the background, operating in the manner of the monastic scribe rather than that of the knowledge-managing regulatory medical writer. They are most comfortable working to the model of “tell me what you want and I’ll write it” and the dictate of “here—write your document just like this one because it was approved by senior management.” I suggest this is an authoring approach that would appear quite comforting to an 11th-century Irish monk writing manuscripts in the monastery at Le Mont Saint-Michel.

So looking towards 2013 and years forward, I suggest that some of the things really good knowledge-managing regulatory medical writers will do are as follows.

  • These writers understand that the mantra of “well this is how we have always done it” is not a meaningful metric—especially since regulatory agents in public forums and private sessions talk about how they sometimes gasp and choke their way through data and documents in submission packages. Good regulatory medical writers clearly understand that they are writing for a decision-making reader and look to get their teams to understand the implication of errant document design decisions for this type of reader. These writers understand how to design documents to satisfy the question-based inquiry of the decision-making reader of regulatory submission documents.

  • Knowledge-managing regulatory medical writers truly understand the working definition of the term “concise” and look to influence their teams to recognize not only the importance of concise writing but the hallmarks as well. These writers know that writing style must vary by document genre and that you cannot simply take a “one shoe will fit all feet” approach to both style and level of detail as you move across various document genres. Their writing is marked by brevity of statement and is free from low-value elaboration and superfluous detail.

  • Knowledge-managing medical writers recognize there is a marked difference between descriptive text and expository text and assiduously avoid redundant representation of data in textual form. These writers understand that much descriptive text at best adds bulk and at worst adds noise to a document. They look to educate their teams to recognize that much of the traditional scientific writing style brings no added value to the domain of regulatory documentation, yet consumes considerable amounts of time and energy to produce and manage within their documents.


I end this piece with a working definition that I feel is a good fit for the knowledge-managing medical writer—“the knowledge-managing writer utilizes a range of strategies and practices with a team to identify, create, represent, and enable adoption of insights and experiences to foster effective work practices and create high-quality document products.”

05 September 2012

Unproductive Review Practices

This is another post from our archives, but is pertinent to the document assessment work we've been doing this summer:

In "Why the Focus on Review Practices?," my colleague Jessica Mahajan highlights the observation made in our McCulley/Cuppan consulting that reviewers, who are expected to work toward enhancing document quality to improve its effectiveness, tend to direct much of their review effort to mechanical and stylistic document elements (syntax, word choice, punctuation, grammar) at the expense of the intellectual work the document is supposed to do. One of my previous posts "How Do We Get People to Apply Improved Work Practices?" explores ways to motivate change when change would provide significant benefits to both individual and organization. In turn, I have a theory about why we continually see subject matter expertise for review applied to the task of copy-editing, and why that practice is so hard to change. The theory is built around how we:
  • Learn to write.
  • Learn to review.
  • Ask for review.

How We Learn to Write
Think about how you learned to write. If your experience was like that of the kids I visit in middle and high schools, then your teachers tried to encourage you to write in a context and with a purpose. Unfortunately, they likely ended up using rubrics that are all about structure, word usage, and typography. Such rubrics have little regard for how well the writing fulfills its purpose and satisfies readers' needs. A rubric I saw recently (I collect these things, and this one was typical) graded students on everything but content, and as long as the writing followed the specified form it got top marks (an A in this instance). A really interesting paper on black holes and the physics behind them (which may have been beyond some readers, but which worked really hard to make the ideas accessible to a varied audience) got a B because of errors in typography. More popular are the five or six equally weighted measures called writing traits. Students are given points for:
  1. Ideas and Content
  2. Organization
  3. Voice
  4. Word Choice
  5. Sentence Fluency
  6. Conventions
Just look at this: how can the assemblage of ideas and content bear value no greater than word choice and sentence fluency? When our ideas are given so little weight (~17% here), is it any wonder that people attend to form over function?

This is how we learn to write--texts we create are based on finished models that rarely tell us what makes them good models. Further, we are never given insight into the process of crafting and iteratively refining a text, that is, a model for what should be in place in a first-draft document versus a second draft. In most learning environments, documents are judged based on how well they adhere to rules constructed for talking about how language should work.

Unfortunately, this approach does not change when we get into higher education. Some courses in technical degree programs have a writing component. But if you test out of freshman composition (where the previous description is still pretty accurate), then at best you may get one required course in technical communication. This might be taught by a creative writing student who is mostly interested in finishing their MFA and thinks that learning to "write" a well-organized memo (form) should be one of the four major projects students prepare for the course. Because creative writing and technical writing don't have much to offer each other, right?

There are exceptions to this scenario, but unfortunately the above is probably a pretty good description of the rule. More to the point, grading is hard (especially when there are no good rules for anything other than grammar and punctuation), and most students are primarily interested in receiving top marks. So students simply want to know what they have to do to get the top mark. The model is a finished document that is good enough--in terms of content, organization, voice, word choice, sentence fluency, and conventions--to get a top mark. Likely the target given to the students addresses only five of the aforementioned six attributes; the one left out is content. So the students' focus and energy go into fulfilling those five attributes.
As an informative aside, when departments ask for help in training their people (students or employees), the most frequent initial and typical request is to "just help them with their grammar". This is despite the fact that we know that when we focus on grammar, the quality of writing, measured by what the writing does, goes down.
Learning to write in the workplace is slightly different. Here we're given a finished document and told to 'make it look like this'. The document is complete, but it bears no annotation or associated guidance to suggest what attributes make the document superior and worthy of the status of a 'model'. In a worst-case scenario, someone's internal dialog might go something like this: "I'll look pretty stupid if I ask my boss what it is about this document that makes it a good model--so I won't. I mean, it is obvious, right? And besides, I (choose all that apply: a. got A's in English, b. have a PhD, c. have written technical reports before, d. have all the data in the report) so... I must be okay, right?"

In our McCulley/Cuppan consulting, we constantly see a model used for constructing documents where new documents are based on old ones. Authors endeavor to make new reports as complete as possible before asking anyone to give them a look. I can recall several instances where authors were told to write until they had nothing else to say and then their supervisor would be ready to look at the report. This approach to writing--to model a preexisting document and to make it as complete as possible before bringing extra eyes in to help--sets up a workplace dynamic that sabotages the potential for productive change.

How We Learn to Review
How we learn to review follows the model of how we learn to write. In school, students construct papers that respond to prompts and are graded. We spend our time learning how to construct sentences that are grammatically correct, forgetting that people can get over a misspelled word or two, or a tense problem, if we have something to say. Often the only things teachers can use to distinguish one useful response to the prompt from another are the mechanical elements of a sentence. And they can't give everyone an A. That would be grade inflation, or worse!

Papers are returned to students with lots of blood (well, red ink, since lots of teachers still like those pens) that identifies misspelled words, grammar errors, and organization problems. In other words, the student's work is 'assessed', but not reviewed. And I can't blame the teachers--this is what they were trained to do, and helping students learn to communicate well via the written word involves a lot of reading (not fun or easy, I promise). Identifying the mechanical problems in a document is easiest and fastest, which is a consideration when you have thirty or more papers to read in an evening. I have a colleague teaching so many sections that she's got 115 papers to read at one sitting!
The problem is compounded by the fact that in competitive societies we're taught not to collaborate. Rare is the teacher who has students collaborating on projects or written work, though thankfully this is changing. We learn not to share our answers with others ('cause that's cheating). What we practice in school is what we bring to the workplace, supplemented with observations and suggestions from the people who review with us, which help us construct new models. In terms of document review, we start with the models we got from our teachers: fix the typos, suggest alternative wording, and massage the format.

Since our colleagues use this model too, we stick with it. In other words, we do what is familiar. We also have to do something during the review. In the absence of more specific instructions, we have to let people know we put reasonable effort into the review exercise. After all, review is an activity. One of the ways to measure the extent of our activity is to count up the total number of review remarks we have left on a document. The more remarks, the better job we did as a reviewer. So we turn our attention to verifying that the numbers in the report are accurate and make sure to 'dot the i, cross the t, and don't forget that comma'.

We are conditioned to be reactive reviewers--we respond to what is present in a document, not what is missing. We are conditioned to operate on the concept of a finished document, no matter where the document sits in the drafting process. Even with a first draft we start at page one and work straight through the document until we're finished--that is, finished with the document, out of time, or out of energy. We see this all the time in our assessment of review practice. There is a straight-line decline in the frequency of review remarks per page as you move through a document. We see Draft 1 report synopses and summaries overloaded with review remarks even though, in the body of the report, the Discussion Section is only 30% completed and there is no Conclusion Section yet.

Through conditioning in the workplace, we have no sense of strategic review. The prevalent strategy is simply to get the review done so we can 'get back to our day job'. We often have a reverential belief that all it takes to succeed with a scientific or technical report is to get the data right. That is, make sure the data are accurate. We are also conditioned to think that all you have to do is get the study design right--the rest does not matter much. That is, the report is merely a repository for data. So we are conditioned to discount the value of scientific reports because constructing well-written, clear, concise, and informative documents takes time away from our 'day job' of conducting science.

In other posts we've talked about the importance of review and the huge commitment organizations have made to review. You would think that, since it is so important, more time would be spent on training people to become more effective reviewers--particularly during their professional training. Yet we don't see this. We've not found a single academic program offering credentials in technical communication or medical writing that offers a course in review (as opposed to editing)--yet the complexity and difficulty of review would certainly warrant one.

Most reviewers learn to review on the job. How do we know? We've asked thousands of people working in various organizations covering a broad spectrum of disciplines, and we've read others who've asked. Further, a quick survey of the most popular books in Technical and Professional Communication and Medical Writing shows they devote little real estate to the topic of review. In a three-hundred-page text, we find less than 5% devoted to review. Yet review is certainly more than 5% of the process.

How We Ask for Review
When we analyze review practices and products for our clients we look at more than just the documents under review. We also assess how people communicate about review and the tools they use to facilitate review. Typically communication regarding the task of review is a simple request: "Please review this by such-and-such (a date)." We rarely find instructions from the author to help inform reviewers: "Please have a look at section x because I'm really having trouble explaining y." We'll post a longer description of this topic, but the point to be made here is that authors rarely help their reviewers with instructions/review requests that focus reviewers on what would help the document and authors advance their work.

Our assessment of review practices suggests that the collective review effort does little to improve a document's communication quality. It will likely improve the accuracy of the data, ensure compliance with a template, and produce sentences that are all grammatically correct. But the conveyance of messages and the logic of the arguments may remain murky or even suspect. Given everything I have said up to this point, why would you expect a different outcome?

The Theory
So here is the theory: Expensive subject matter experts are reduced to copy-editing because that is what they know best (they come into the professional box with plenty of conditioning from the academy), it is familiar, it is what everybody else does, and their organization hasn't offered them a better alternative. Further, the situation won't get any better because even if (when) they find a better alternative they're too busy to change (they have their day job to do and besides they have too many documents to review to be fettered by revising ways of working), and even if they wanted to change things, the organization's leadership wouldn't buy into it.

Fortunately, much can be done to really 'move the needle on the meter' and improve individual and organizational ways of working when it comes to the task of document review. I know this to be the case from the consulting and training work we do as we've helped a number of organizations improve review practices and document quality.

14 August 2012

Much can be Done to Improve Reviewing of Research Protocols


I have been away from the blog for most of the summer. It has not been an intended absence; rather, I have been caught up in a whole lot of work and personal obligations that kept me from posting here.

We have been doing several projects this summer related to the planning, writing, and reviewing of clinical research protocols. In this post I want to share some thoughts about review.

We believe that protocol reviewers need a more refined mental model for how to review drafts of their documents—how to concentrate on the big-picture rhetorical concerns and set aside edits and stylistic corrections during the early rounds of review, and then attend exclusively to these elements in the final rounds of review.

In work with several clients, we find that reviewers confuse the roles of reviewer and editor, and they also confuse the timing of when edits should be made. By this I mean we see many reviewers operating at the sentence and word level in the initial review of the protocol concept document. The protocol concept document is where you have to get all the big-picture details ironed out regarding study design and conduct, before you sweat the details at the paragraph and sentence level. Based on our observations and interviews, we suggest that better guidance is needed to define the roles of reviewers and editors and the time points at which each applies.

We recommend that performance criteria and subsequent measurement be enacted to help review teams better understand and appreciate the need to attend to the intellectual attributes of a planned research trial. When I ask people sitting in my workshops how many protocol amendments they average per study, the response is always very similar: a collective groan followed by the retort, "we average way too many." So it seems it is time to change methods. It is at this point in the conversation with clients that I invoke the definition of insanity commonly attributed to Albert Einstein: to continue to deploy the same methods, but expect different outcomes.

Our assessment of work practices at a number of pharmaceutical companies suggests that reviewer discipline is poor: reviewers fail to participate at all, or in any meaningful way, in early draft reviews, and they also extend reviews of certain protocol sections without really improving communication quality. When a protocol goes through six rounds of review, something is likely not right; when the Background section of the protocol is actively reviewed in all six rounds, something is certainly not right.

We recommend that, late in the protocol development process, guidance regarding the line-level review of text be enacted to ensure precision and consistency. We find huge inconsistencies and ambiguities with three heavily used modal verbs: may, should, and must. We also find that many protocols do a poor job of characterizing aspects of agency (for instance, who has responsibility for decisions or for ensuring patient data integrity). And we find that reviewers rarely consider the temporal frame of reference for information in a protocol: in a given paragraph the language may start in the present tense, move to the future tense, and then shift back to the present tense.
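
For teams that want to make this kind of line-level check more systematic, here is a minimal sketch in Python. It is purely illustrative: the regular expressions are crude heuristics, the sample paragraph is invented, and it is not a tool we use in our assessments. It simply flags modal verbs and mixed tense markers in a paragraph so a human reviewer knows where to look:

  # Minimal, illustrative sketch: flag modal verbs and mixed tense markers in a
  # protocol paragraph. The patterns below are crude heuristics, not a validated
  # linguistic analysis.
  import re
  from collections import Counter

  MODALS = re.compile(r"\b(may|should|must)\b", re.IGNORECASE)
  TENSE_MARKERS = {
      "future": re.compile(r"\bwill\b", re.IGNORECASE),
      "past": re.compile(r"\b(was|were|had)\b", re.IGNORECASE),
      "present": re.compile(r"\b(is|are|has|have)\b", re.IGNORECASE),
  }

  def flag_paragraph(paragraph):
      """Count modal verbs and simple tense markers, and flag paragraphs that mix tenses."""
      modals = Counter(match.lower() for match in MODALS.findall(paragraph))
      tenses = {name: len(rx.findall(paragraph)) for name, rx in TENSE_MARKERS.items()}
      mixed_tense = sum(1 for count in tenses.values() if count > 0) > 1
      return {"modals": dict(modals), "tenses": tenses, "mixed_tense": mixed_tense}

  # Invented sample paragraph for illustration only.
  sample = ("Subjects will complete the diary daily. The diary is reviewed at each visit. "
            "The investigator should report any deviations that were observed.")
  print(flag_paragraph(sample))

A flag like mixed_tense is, of course, only a prompt for a reviewer to look more closely; it says nothing about whether a shift in tense is justified in that particular paragraph.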

03 July 2012

Importance of Language and Writing Style in a Clinical Study Report

Here's one of our most popular posts from the archives:


How important are language and writing style in a clinical study report? I was recently asked this question by a medical writer working for one of my McCulley/Cuppan clients. The writer is dealing with a team that seems to obsess over every word in every draft and is looking for some help in how to address the situation.


Here is my response to the question:


You are asking about lexical and syntactical elements of writing (the third element of writing is grammatical).


Lexical pertains to the words (vocabulary) of a language. In the context of clinical research we need to talk about several applied lexicons of scientific phraseology that apply broadly to science and then narrowly to a specific therapeutic area. Admittedly, the most distinctive feature of any clinical study report is the application of specific scientific and technical prose. So language is very important in a CSR: it must avoid lexical ambiguity (this is why I so love statisticians and their demands for careful use of language when describing statistical observations) in order to allow the reader to derive the intended meaning.


My experience suggests that many people in Pharma think attention to syntactical elements (style) means they are either eliminating ambiguity or improving clarity of message. Rarely is this the case.


You have heard me say before that style does not matter in the type of writing represented in clinical study reports submitted to regulatory authorities in the US and elsewhere.

My position is supported by current discourse theory. Discourse theory states that, as a rule in scientific writing, meaning is largely derived from the precise use of key scientific words, not how these words are strung together. It is the key words that create the meta-level knowledge of the report. Varying style does little to aid or impede comprehension.


What happens is that people often chase and play around with the style of a document. Largely they are looking to manipulate an advanced set of discourse markers specific to clinical science writing, or some subset specific to a therapeutic discipline. Discourse markers are the word elements that string together the key scientific words and help signal transitions within and across sentences. These discourse markers are the elements that provide for style. There are macro markers (those indicating overall organization) and micro markers (those functioning as fillers, indicating links between sentences, etc.). Comprehension studies show that manipulating discourse markers--that is, messing with style--in most instances does not influence reader comprehension. It is worth noting that manipulation of macro markers does appear to have some impact on comprehension for non-native speakers of English (which is why it is worth using textual advance organizers to help with document readability).


So the net-net is: there is little fruit to be picked from messing with style in a clinical study report. Put review focus on the use and placement of key terms.


This is a bit of a non sequitur to the question, but it is a concept I’d like to share. To derive meaning from scientific text, readers rely on their prior knowledge and on cues provided by the key terms and data they encounter, or fail to find, in a sentence, paragraph, table, or section of a clinical study report. So what I’d really prefer to get people thinking about is the semantical elements of their documents. Semantics is fundamentally about encoding knowledge and about how you as an author enable the reader to process your representation of knowledge in a meaningful way. Semantics is about how much interpretive space you provide to the reader in a document by what you say and, equally important, by what you do not say. Of course, you cannot get to the point of thinking about semantics unless you see clinical study reports as something more than just a warehouse for data.



01 May 2012

Peer Review Revisited: It Really Needs to Change

Back in June of 2010 I made a post here on our blog that considered peer review to be a largely bankrupt way to screen and validate research worthy of publication. This post extends the discussion I started there.


An important point to consider is that an increasing number of studies and journal editors are suggesting that peer review is a failure as currently practiced in the life sciences.


As food for thought consider the following:
  1. This is old data, but worthy of attention. In a survey, only 8% of the members of Sigma Xi, the Scientific Research Society, agreed that peer review works well as currently applied (Chubin and Hackett, 1990).
  2. As a tool to filter the science worthy of publication, peer review may be blocking the flow of innovation (Horrobin, 2001). Horrobin spoke strongly when he suggested the peer review system is a non-validated charade. Here's a link to his article: http://www.nature.com/nbt/journal/v19/n12/full/nbt1201-1099.html
  3. Richard Smith's article "Classical peer review: an empty gun" cites the Drummond Rennie quote: "If peer review was a drug it would never be allowed onto the market because we have no convincing evidence of its benefits but a lot of evidence of its flaws." (Rennie is a deputy editor of the Journal of the American Medical Association and a supporting force behind the international congresses on peer review.)
  4. In late April 2012, Carl Zimmer reported in the New York Times that, according to a study of articles indexed in PubMed, the number of articles retracted from scientific journals increased from 3 in 2000 to 180 in 2009. Here is a link to his piece: "Sharp Rise in Retractions Prompts Calls for Reform"
  5. A couple more articles worth checking out:
    1. "What's wrong with peer review"
    2. "Peer review is f***ed up -- let's fix it"