Showing posts with label writing in the life sciences. Show all posts

05 March 2013

Time to Reconsider How You Write for the Regulatory Reader

With the continuing expansion of on-screen tools for analyzing, manipulating, and using technical data, it is worthwhile to take a moment and consider the implications of how we think about documents and document design in 2013.

Let me use as the starting point for this discussion my position that in the world of regulatory writing it is clearly time to retire the classic notion of a document—a notion that has been around since the Irish monks hung out in European monasteries scribing the ancient texts in Latin on bound pages of vellum. So, stop thinking about and judging documents as something going from page 1 to n and constrained by the classic measurements of “Letter size” and “A4.”

Electronic regulatory submissions that are compiled for viewing on screen using a tool like Global Submit Review must be characterized by definitions decidedly different from what worked for an Irish monk. Think three-dimensionally.

Now you must think of documents in the manner of Suzanne Briet. In 1952, Briet created the following working definition for a document—“A document is the physical evidence that supports a fact.” I suggest her definition describes the mental model we must now apply to regulatory submission documents in 2013. Documents are now defined by the user, not you. Why is it that way? Because of the tools they now use to navigate electronic documents.

The culture of reading in the regulatory domain has really evolved over the past five years. As Christine Rosen states in her article, People of the Screen, published in The New Atlantis:

“Every technology is both an expression of a culture and a potential transformer of it. In bestowing the power of uniformity, preservation, and replication, the printing press inaugurated an era of scholarly revision of existing knowledge. From scroll, to codex, to movable type, to digitization, reading has evolved and the culture has changed with it.”

It is important to keep in mind that research shows there is a distinct change in behavior when reading on-screen versus reading within the framework of “Letter size” and “A4.”

Some of the differences are relatively subtle and others are rather profound. For instance, the research of cognitive scientists like Mary C. Dyson, Andrew Dillon, and many others has looked closely at how physical text layout impacts on-screen reading. Papers from Megan Fitzgibbons, Stuart Moulthrop, and others consider behavioral tendencies and strategies for searching large, complex documents for specific pieces of information and the use of hypertext attributes (I'd call this the working application of Briet’s definition of a document).

At this point in time I have met few in the medical writing industry who ever consider how their narratives and tabular displays impact the on-screen user. They just keep on building document content like they always have. I have yet to hear a medical writer talk about how paragraph length and density of detail impact on-screen reading speed.

I have met fewer still who consider the implications of reading pathways through and between documents, as on-screen readers now make considerable use of bookmarks, hyperlinks, and key term searches. They just keep on publishing to only third-level headers while the documents actually subordinate content to the fifth- or sixth-level header, and many writers toss in hyperlinks without much regard for the utility of the link to the end-user. Keep in mind this quote from Megan Fitzgibbons:

“Literacy is a key component of the information seeker's side of the equation, because the abilities to locate, read, and evaluate texts are the basis of successful information gathering processes.”

The net-net I am trying to bring across here is that for information professionals to form effective principles of document design, they must reconsider regulatory agents’ needs, attitudes, and ways of working with documents in the “electronic document age.” Information professionals must also understand the technical capabilities of the document interface tools in prevalent use by regulatory agents, so that they can design documents today that will meet the changing needs of the reader 12 to 36 months from now.

03 February 2012

Need a New Mental Model for Regulatory Documents

Wow… I have been away from this blog a whole lot longer than intended. Those competing interests… like you all understand… are the bane of my existence.

For the past couple months I have been looking closely at how people think about the “vehicles” used to communicate with regulatory health agencies. I am using the word vehicle here because I am trying to divorce myself from the notion of document, in particular, the notion of “a document.” In the modern times of on-screen reading and linked files, what is a document anyhow? To me it is the entire corpus somebody may be able to access, not just one slice of that body.

In my consulting/training interactions at McCulley/Cuppan, I find that the majority of people I interface with in the client setting operate within the mindset of individual documents (some have even smaller boundaries and operate by document sections) that are stand-alone with well-defined boundaries (pages and page counts).

I want to argue that the vehicle of communication for regulatory submissions is not a document. It is the full and complete dossier submitted by the sponsor. Documents are just placeholders where I go to get a piece or pieces of information that help answer my questions. I want to argue that the regulatory reader does not see a dossier as a set of documents. Rather, they see a dossier as a corpus of information that they will use to answer questions and make decisions. The contents are just vehicles they peruse to get what they want.

Applying my working model means you stop seeing documents as “stand alone” and stop saying “this document has to tell a story.” I’d also like you to stop using the word document. That word has baggage I am trying to jettison. Instead I want people to view their work at least as “modules” and preferably as vehicles that help a user to answer very specific questions. Bottom line, a research report is just a part of the constellation that tells the stories. Note the plural form as we have many stories to tell in a dossier, not just one.

Applying my working model means you stop seeing your work as being like a novella—something to be read from page 1 to page n. Applying my working model means you see your body of work as something that is read in a coordinated manner defined by narrow rules of inclusion and exclusion. Applying my model means you stop seeing pages and sections and you start seeing concepts and topics.

My argument is that the selective professional reader at regulatory health agencies cares little about documents, sections, pages, and data tables. I am suggesting such readers care solely about making informed decisions and about where in the submission dossier they can find vehicles that answer their concept and topic questions.

06 November 2011

Why do we do some of the things we do in clinical study reports? Part 1

Over the past three months I have been doing a series of workshops focused on the authorship of clinical research reports for a couple of clients. This work has me posing questions about two very specific notions:
  1. Why do people still demand that a clinical research report intended for incorporation into regulatory submission packages must be written as “stand-alone” documentation?
  2. Why do we still organize clinical research reports by the long-standing convention of Introduction, Objectives, Methods, Results, Discussion, and Conclusion?

In this post let’s consider the notion of the “stand-alone” clinical study report.

The first question I have is why must these documents be stand-alone? The term “stand-alone” generally refers to a document or device that is self-contained and does not require any other document or device to fully function. I argue that a clinical study report submitted to a drug regulatory agency is part of a larger corpus, and as such the report should never be construed as a stand-alone document.

It is a given that most submissions made to drug regulatory agencies contain multiple research studies, so you will never have a “stand-alone” research study; an individual study is part of a corpus of clinical research that best operates collectively. Also, regulatory submissions have multiple sections, as mandated by guidance, that are topoi (places of argument) for different and often integrated attributes of clinical research drawn from this corpus. So clearly the clinical study report is not a stand-alone document in the instance of interpretation and argumentation.

Another point: a clinical study report incorporated into a regulatory submission will have numerous appendices, including the final version of the research protocol and the statistical analysis plan. So I struggle to understand the premise that says you must incorporate extensive elements of the protocol and stats plan into the body of the clinical study report in order to assure the report is a stand-alone document. I truly fail to see the merit of this premise.

My last point is this: why do so many feel it is necessary to write the Introduction sections of clinical study reports in the classic inductive form of rhetoric that moves from describing the disease condition, to the therapeutic void, to the chemical or biological construct of the drug under study, and to how this drug’s pharmacological action addresses that therapeutic void? The standard response is, “Well, we want to make this a stand-alone document.” Why? All the rhetorical moves you are trying to make in the Introduction will happen for sure elsewhere in the drug submission package, in the appropriate topos. So why waste your time here, at the low level of the clinical study report? In my way of thinking the study report Introduction is intended to convey only two points:
  1. Why are you doing the study? (What answers do you want at the end of the study that you do not have now?)
  2. What previous work has informed the design of this study?

12 October 2011

Notes on Managing Reviews Presentation and Discussions at DIA Clinical Forum

We had an interesting set of presentations and discussions yesterday at the DIA (Drug Information Association) Clinical Forum in Basel, Switzerland related to managing reviews. Here are my summary notes on the session.

I spoke along with Barry Drees from Trilogy Medical Writing and Rhys Stoneman from GlaxoSmithKline. We each covered aspects of improving review outcomes for clinical research documentation.

I addressed the topic that to effect change you must look at the root causes of poor review outcomes, not merely the symptoms of poor outcomes. Barry and Rhys talked about how their organizations--a contract writing house and a large multinational pharmaceutical company, respectively--look to improve review outcomes.

I talked about how the underlying root causes of most poor review outcomes fit into two boxes:

  1. No formal training in review methods
  2. Poor reviewer discipline

Note: Jessica will get my presentation slide set posted to the M/C web site.

I stressed that without training most people are left to inspect and edit documents and generally are not sensitive to how rhetorical situations vary with audience and document genre. I talked about how reviewer discipline is difficult to modify without making process changes and applying performance mandates.

Barry talked about methods he uses to influence the behavior of review teams. He had a great line on one of his slides addressing lengthy review cycles: "We do not have enough time or money to do it right, but there is always time and money to do it over."

Rhys talked about the steps taken to transform work practices and belief systems related to the authorship and review of clinical documents. We did a lot of support and training work at GSK on this project, so I have seen firsthand the changes. It is fair to use the descriptor, dramatic, to characterize how they have reformed work practices and outcomes.

Not surprisingly, the topic of greatest interest to the audience was issues related to reviewer discipline and what can be done to modify or blunt the impact of bad behavior.

28 June 2011

Difficulties Assessing the Value-Added of Professional Communicators

Here is a post originally published in 2008, but the topic is worth repeating.

As mentioned in previous installments of this discussion, we at McCulley/Cuppan believe that the role of the professional communicator can add considerable value to the research and development process. But we recognize that in order to justify the need (and the added expense) for this expanded role, managers have the daunting task of trying to quantify the value added of such professionals to the organization.
The professional communicator must add value to a company’s information processes and products in order to justify their presence within the pharmaceutical organization. Mead (1998) defines the concept of value quite simply: “Value can be defined as the benefit of an activity minus its cost.” However, to apply that concept of value to the role of a communicator or science writer is not so simple. The value of the communicator in the life science research enterprise is not easy to determine for one principal reason: what value does one place upon timely, efficient, and effective regulatory submission documents?
The problem is that communication in the life science industries does not lend itself to easy analysis against the traditional measures of professionalism that are routinely applied in other aspects of research or in other writing settings for that matter.
To measure the benefits of the professional communicator’s activity within the life science research organization, and in particular within a research project team, it is necessary to turn to the existing body of research illustrating the ways in which communicators provide project teams with valuable input and experience that enhance the overall quality, timeliness, and labor allocation to the tasks of research and reporting research. Quantitative and qualitative research methods, including case studies and surveys, offer data to demonstrate the significant effect professional communicators have on both organizational processes and products. However, little research and few publications directly address the roles and value of the professional writer within the life science research industries.
It is thus necessary for savvy managers to cast a wide net and look to other fields for relevance to the context of life science research. Managers should consider the task of writing in the engineering, aerospace, and computer industries, which share an intense document development environment similar to what we see in life science research. These industries are similar to the life sciences in that the document product is used by regulators or buyers to create an informed opinion of the company’s proposed product or service. The caveat we must offer is that there is a thin volume of literature assessing the value added by communicators to these organizations, so the existing body of case studies may not be sufficient. Managers may need to seek out others in their organization or industry who have made use of communicators on their project teams.
Professional communicators can contribute to the understanding of the value added by documenting their work and comparing their tasks and targets with company benchmarks. Communicators must document their tasks over the life of the project because it is not possible to assess value added simply by looking at the documents that writers produce. Jong (1997) points out the problems in the “inspection model of quality control,” a model that, when applied to documentation, focuses merely on the cost of writing and occasionally the cost of review. This model is “inherently vulnerable to error,” as significant costs and errors may be missed. Jong claims, “The best way to improve the quality of the output is to improve the quality of the input” (40). Improving quality of input suggests that researchers carefully consider how they present the logic of their interpretations, how they design information to satisfy readers’ needs, and how they represent the resolution of issues within the framework of their document or sets of documents. Communicators can facilitate this consideration and ensure that the documents are logical, complete, and meet readers’ needs. One way to demonstrate these skills to managers is by comparing “before-and-after” documents that show how the communicator improved the logic and readability of initial drafts or how a communicator involved from the beginning of a project can better convey the resolution of issues than a writer brought in at the end of the project.
The variety of roles for writing in the life science research environment makes it difficult to employ a simple model for calculating the value added of writing specialists where the principal output is an informational product. As Fisher (1998) states, “The profession of technical communication is difficult to define in scope” (186). Within the pharmaceutical and life science research industries, writers and communicators have very different roles in various enterprises. Some are primarily writers, others function principally as editors, some coordinate the compiling of documents for registration filings, some facilitate team-based document development, and some concentrate on knowledge management. The challenge is to understand how professional communicators can contribute to efficiently and effectively producing the desired outcome: a high quality document product that helps in the conveyance of knowledge or the advancement of work on drugs and medical devices. Professional communicators must be able to defend their roles to management. By providing “before-and-after” versions of the documents and recording their tasks and timelines (thus enabling managers to compare those with previous projects), professional communicators may be able to reverse the current trend of treating writing as a wholly separate task from research and may be able to start a new trend of utilizing professional communicators.
Works Cited
Fisher, J. “Defining the Role of a Technical Communicator in the Development of Information Systems.” IEEE Transactions on Professional Communication 41 (1998): 186-199.
Jong, S. “The Quality Revolution and Technical Communication.” Intercom 44 (1997): 39-41.
Mead, J. “Measuring the Value Added by Technical Documentation: A Review of Research and Practice.” Technical Communication, Third Quarter (1998): 353-379.


Originally published on our Knowledge Management blog

03 May 2011

A New Outlook is Needed Regarding Document Review

Companies need to become more evaluative and methodical regarding their own work practices.
The costs associated with planning, authoring, and reviewing research reports and regulatory submission documents are difficult to determine. If we consider the direct time and costs invested in authoring, reviewing, and publication preparation, then a conservative estimate of the cost of the final report will range from $50,000 to $200,000. If we add in the opportunity costs for time spent on endless draft versions versus other more productive professional work, then the costs start to skyrocket. When you add up the collective costs across a volume of documents generated in a year, well, now you are talking about some really big numbers.

Why in the life sciences are the gross inefficiencies of review practices tolerated? Is it the old "Out of sight, out of mind" approach? Then again, it may be comfort in a bias for action: "Hey we hit the deadline, so no worries, the end justifies the means."

Much can be done in terms of specific actions to shine a spotlight on the inefficiencies of review and encourage effective work practices:
  • Articulate and rely upon meaningful document quality standards and best-of-craft guidance for executing effective reviews. Emphasize shared standards over individual preferences.
  • Define the scope, purpose, audiences, and argumentative strategy for the document before drafting. In turn, use this early document planning to guide reviews. In other words, practice "Aim, ready, fire!" as the standard documentation method.
  • Define reviewer focus and responsibilities, acknowledging unique and strategic expertise. Involve specific reviewers for specific purposes. Inform all reviewers regarding roles and points of review focus and the differences between reviewer and editor. Problems of word choice, style preferences, transcription accuracy, and format should be handled only by the writer/editors and not made the objective, intentional or otherwise, of review.

We do not believe that changing non-productive practices is an easy matter, or companies would have already done so. We do believe that recognizing non-productive review practices should be an object of focus for more organizations. We understand that collaborating to develop complex documents with sound arguments involves difficult cognitive and social practices. If a company establishes the goal of producing quality documentation through efficient and effective review practices, it will find that it must do a lot of work to counter the ingrained tendencies of people to focus on low-level stylistic edits as opposed to high-level strategic and rhetorical concerns.

18 April 2011

How the Most Sophisticated Documentation Groups Operate

At the apex of our version of the Documentation Capability Maturity Model are the Level 6 "Optimizing" groups. These are very sophisticated writing groups that are continually looking at ways to enhance work practices and processes so as to better serve their customer needs.

At this level, the job descriptions for all the subject matter experts contain extensive descriptions regarding their roles and responsibilities in the development of high quality document products. Work performance is not merely judged on how well they execute the design and conduct of studies, but also on how effective their documents are in supporting organizational strategies and economic objectives.

At this level, the writing groups rely upon carefully defined document quality standards that reach well beyond style guides and template preferences. These groups articulate detailed guidance for executing effective strategic reviews. All understand the importance of working to shared standards over individual preferences. Reviewers authenticate and sign off on documents as meeting strategic intentions and communication quality standards. If problems arise downstream, then the reviewers are held culpable for the problems.

Documentation project management inside a Level 6 writing group tracks the amount of time, along with other parameters, that individuals apply to planning, authoring, and reviewing documents. Performance is always reviewed for "lessons learned" at the end of all major writing projects.

At this level, there is a clear commitment to the assessment of document usability for the target audience and even testing of document designs for certain types of documents (such as clinical study protocols) early in the document life cycle.

These writing groups take full advantage of authoring tools to assure information is effectively generated once and then repurposed to other documents as a drug or device asset moves forward in the development life cycle.

Lastly, these very sophisticated groups make time in their busy schedules for innovation both in terms of work practice and work tools.

I do not know of any groups in the pharma or med device industries who have the above mentioned attributes. Do you?

31 March 2011

Still More on What Sophisticated Writing Groups Do

Continuing the discussion from the last two posts regarding how we categorize the level of sophistication of writing groups, the next stop on the documentation capability sophistication chain is Level 5--Managed and sustainable.

The Level 5 writing organization applies a broad range of sustainable best-of-craft work practices.

At this level, the planning of business critical documents occurs in parallel with the planning of clinical research studies. Writing teams always deploy document prototyping techniques, such as populating sections of a clinical study report after the study protocol has been completed and planning the report's results sections once the statistical analysis plan is finalized. Level 5 writing teams can tell you how many pages will be in their study report even before LPLV because there has been so much planning. Level 5 writing teams always map arguments across the sections of a clinical study report.

Level 5 writing teams are aware of agile authoring techniques, but have not yet deployed these work practices. Level 5 writing teams clearly understand that repurposing information involves a whole lot more than merely cut and paste.

Level 5 writing groups clearly understand what strategic review means. The teams articulate and rely upon defined document quality standards and guidance for executing effective reviews. They always define reviewer roles and responsibilities, acknowledging unique and strategic expertise. They recognize that the problems of word choice, style preferences, transcription accuracy, and format should be passed on to the writer/editors and not made a focus of review.

Level 5 writing teams routinely solicit document user information and maintain databases to help them track and understand usability and readability statistics on all of their documents. At Level 5, teams engage in root cause analysis to ascertain why questions were received from regulatory agencies. Level 5 writing teams apply standards and measures to the task of document authorship and review that are well down the highway from the simple metrics of time and draft numbers. Level 5 writing teams always engage in a lessons-learned session at the end of each documentation project, and such sessions are not seen as merely an activity to be filed and forgotten. Process and practice are tweaked and refined for the next time.

21 March 2011

More on Sophistication of Writing Groups

In our McCulley/Cuppan version of a Documentation Capability Maturity Model, the fourth level is called Organized and Repeatable, as suggested by JoAnn Hackos in her various books. But perhaps a better term than repeatable is consistent, as at the fourth level the application of well-defined work practices is much more consistent across documentation projects. In these organizations, the majority of team members operate by the credo: we recognize some of our processes and work practices represent "best of craft" and we know they will get us through any crisis.

At this level, the writing group does keep project tracking data in a simple database. Unfortunately, most of the data only track time and draft iterations. These remain the only parameters used to create project milestones.

At this level there is some recognition that the role of the medical writer involves more than "just writing," and on some teams writers are seen as knowledge managers who are actively involved in team meetings well before database lock. However, credibility issues remain for the writing group in the broader organization, where writers are often seen as a necessary evil who only "just write" the reports.

A fourth-level writing organization routinely uses pre-writing planning and project kick-off meetings to shape team expectations. At this level, the writers are more aware of document design considerations that impact usability, but little effort goes into mounting discussions with teams about document design during pre-writing planning. This remains a discussion item for draft review.

Little attempt is made to formally collect information from document end-users about readability and usability of the documents submitted to them. Any information collected happens on a casual basis and is largely applied in ineffective ways.

Some of the belief statements found in the Level 4 Writing Group are as follows:

  • We are surprised and even sometimes mad at our document end-users when we get questions from them regarding information that was incorporated into submitted documents.
  • We recognize that we cannot just have meetings where we talk about the data, that we need to have meetings where we plan how and what we are going to say about the data in the reports, but this does not always happen.
  • We have good pre-writing planning and review tools, but it is a struggle to get the subject matter experts to actually use them.
  • Reviewers still spend too much time editing and not reviewing because they believe editing style and word choice help to make a document significantly better.
  • During the review process many reviewers still feel compelled to revisit sections already reviewed in an earlier draft version of the document.
  • Team members recognize best practice review calls for different roles and points of focus during the review process, but many still do not follow the guidance.
  • Sometimes we get stuck in our processes and still like to make all documents "look just like the last one that got approved."

16 March 2011

So How Sophisticated is Your Writing Group?

From time to time I have talked about the sophistication of writing groups in the pharma and medical device industries. My position is that for the most part, writing work practices in the life sciences are well removed from "best of craft" work practices.

In my authoring workshops, I offer the portrayal that most writing groups in pharma and medical device companies rate only a 2 or 3 for sophisticated work practices on a six-point scale. I argue that most are rudimentary at best in terms of sophistication.

This usually gets me a couple of the desired guffaws from the people in the room. I remind them that just because you are really, really sophisticated in the conduct of science does not mean you are equally sophisticated in the tasks associated with reporting on this science.

The six levels in our writing sophistication system are based on the parameters created by JoAnn Hackos. The scale of sophistication is as follows:

  1. Oblivious
  2. Ad-hoc
  3. Rudimentary
  4. Organized and repeatable
  5. Managed and sustainable
  6. Optimizing

We have created criteria for each level that differ from what Hackos did for the software world. Our criteria for Rudimentary, where we think most writing groups fall, are as follows:

  • We use style guides and templates for all of our documents and routinely make decisions on what to do based upon previous documents "approved" by senior management.
  • We always coordinate on design and basic messages and worry about writing style across documents in a development program so that we can assure consistency in terms of appearance, style, and common messages.
  • We make use of documentation project management to assign resources and ensure documentation projects meet timelines and budgets.
  • We recognize that documentation team performance varies across teams and we DO NOT know the performance factors having the greatest influence.
  • We DO NOT systematically track user feedback regarding readability and usability of our documents. 
The credo for rudimentary groups is: "We always follow our routines except when we panic."

The belief statements at this level would include the following:

  • We are supposed to develop information strategies for our reports before we write them, but we can never get the Subject Matter Experts to take the process seriously.
  • We have lots of meetings to talk about what the data means, but we rarely have a meeting to talk about how we will represent the data in our report and never talk ahead of time on how we will represent the implications for what we see or fail to see in the data.
  • We don't have time to talk about how we want to design arguments in our reports. We have more important things to do.
  • We have no idea how big a document will be until after it is written.
  • Just write everything you have to say and we'll fix it during review.
  • Anybody with the same professional training as I have will want to read a report in exactly the manner I choose to read it. 

So where does your group stack up?

04 January 2011

Importance of language and writing style in a clinical study report

How important is language and writing style in a clinical study report?  I was recently asked this question by a medical writer working for one of my McCulley/Cuppan clients. The writer is dealing with a team that seems to obsess over every word in every draft and the writer is looking for some help in how to address the situation.


Here is my response to the question:


You are asking about the lexical and syntactical elements of writing (the third element of writing is grammatical). 


Lexical pertains to the words (vocabulary) of a language. In the context of clinical research, we need to talk about several applied lexicons of scientific phraseology that apply broadly to science and then narrowly to a specific therapeutic area. Admittedly, the most distinctive feature of any clinical study report is its specific scientific and technical prose. So, language is very important in a CSR to avoid lexical ambiguity (why I so love statisticians and their demands for careful use of language when describing statistical observations) and to allow the reader to derive the intended meaning.


My experience suggests that many people in Pharma think attention to syntactical elements (style) means they are either eliminating ambiguity or improving clarity of message. Rarely is this the case.


You have heard me say before that style does not matter in the type of writing represented in clinical study reports submitted to regulatory authorities in the US and elsewhere.

My position is supported by current discourse theory. Discourse theory states that, as a rule in scientific writing, meaning is largely derived from the precise use of key scientific words, not how these words are strung together. It is the key words that create the meta-level knowledge of the report. Varying style does little to aid or impede comprehension.


What happens is that people often chase and play around with the style of a document. Largely they are manipulating an advanced set of discourse markers specific to clinical science writing, or some subset specific to a therapeutic discipline. Discourse markers are the word elements that string together the key scientific words and help signal transitions within and across sentences. These markers are the elements that provide for style. There are macro markers (those indicating overall organization) and micro markers (functioning as fillers, indicating links between sentences, etc.). Comprehension studies show that manipulating discourse markers--that is, messing with style--in most instances does not influence reader comprehension. It is worth noting that manipulation of macro markers appears to have some impact on comprehension for non-native speakers of English (which is why it is worth using textual advance organizers to help with document readability).


So the net-net is: there is little fruit to be picked from messing with style in a clinical study report. Put review focus on the use and placement of key terms.


This is a bit of a non sequitur to the question, but a concept I’d like to share. To derive meaning from scientific text, readers rely on their prior knowledge and on cues provided by the key terms and data they encounter, or fail to find, in a sentence, paragraph, table, or section of a clinical study report. So what I’d really prefer to get people thinking about is the semantic elements of their documents. Semantics is fundamentally about encoding knowledge and how you as an author enable the reader to process your representation of knowledge in a meaningful way. Semantics is about how much interpretive space you provide to the reader by what you say and, equally important, by what you do not say. Of course, you cannot get to the point of thinking about semantics unless you see clinical study reports as something more than just a warehouse for data.



22 December 2010

Not knowing when good is good enough in writing regulatory documentation has a huge cost


We do not talk much on this blog about the use of language or the application of terms in science writing. The principal reason is that much of what we see in regulatory submission documents is genuinely “good enough.” However, others do not necessarily see it that way. I want to share with you how discussions in review roundtables can end up focused at really absurd levels of detail with a misapplied sense of establishing quality communication. 

In our consulting work, we try to be disciplined during our document reviews and only comment on language when it truly obscures or alters meaning. Being grammatically perfect in regulatory submission documents is a nice notion, but in practice consumes way too much time and organizational energy and will yield little in terms of outcomes.

We share this point with people all the time...but at times the advice goes unheeded and, even worse, at times people just do not know when to move on and address the really big concerns in their documents.

A case in point is a long-winded discussion I observed in a review meeting over the use of the term “very critical.” In a medical sense, “critical” describes a patient’s condition as having unstable and abnormal vital signs and other unfavorable indicators. In theory, the meaning of critical is a black-or-white proposition without gradation: something is either critical or it is not. Therefore there should be no adverbs, like “very,” in front of the term “critical” to connote a measurable degree of criticality. In this roundtable review the team got caught up in a 30-minute discussion that involved only two people arguing whether to use the term “very critical” or change it to “critical.”

Being pragmatic, I’d have to say: “Guys, what are you thinking? You hold a team hostage for 30 minutes to argue over grammatical accuracy? To argue over something that will not matter when and if read by a regulatory reviewer?” There were 10 professionals sitting in the room, and 8 did nothing for 30 minutes. The cost of salaries alone is argument enough to say “Forget about it, let’s move on…we cannot afford to argue over such insignificant detail.” When we add in the opportunity cost (what these 10 people collectively could have been doing with their time), then for sure you have to make the argument.

This episode gets played out time and time again in review sessions all over the pharma and medical device industries and is the reason why I am steadfast in my position that the vast majority of people involved in authorship and review do not know the answer to the question “How do you know when good is good enough?” The end result is that inordinate amounts of time are applied at the wrong level of detail in reports and submission documents.

30 November 2010

More thoughts on the limited sophistication of documentation practices in the life sciences

I mentioned in this post that I consider the documentation practices for creation of regulatory submission documents in most pharma and medical device enterprises to be rather unsophisticated. My position is largely driven by comparing observations of documentation practices against descriptions of varying levels of documentation maturity we have developed. Our descriptors have their roots in the work presented by JoAnn Hackos in her book Managing Your Documentation Projects.

Like Hackos, our documentation practices maturity model is a six-point scale ranging from Level 0 (Oblivious) to Level 5 (Optimizing). My observations suggest that the vast majority of regulatory writing falls into the 2.5 range, between Rudimentary and Organized/Repeatable.

The Rudimentary documentation organization is one where the vast majority of effort goes to ensuring documentation consistency. All documents are generated using well-characterized templates. Strategic review of documents is largely absent, as energy is applied to ensuring structural (that is, grammar and format) accuracy and consistency. Work practices are highly individualized, and there is little meaningful estimating of document size or timelines.

The belief statements of organizations working at the rudimentary level include the following:
  • All writers manage their own projects
  • We would like to know more about our reading audience, but nobody takes the time to learn more…so we “suppose” what they want in our documents
  • Our users are just like us…I am a subject matter expert, so every other person educated like me will read documents just like I do
  • We talk about what the data means, but we rarely talk about how to represent this meaning in our documents until we are into roundtable reviews
  • We have little concern with how many rounds of review go into creating the final version of a document…we strongly endorse the credo “the end justifies the means”
  • Planning document content before actually writing a report is just busy work
  • We routinely reverse engineer document development timelines from the stated publication deadline and not from the scale and scope of the intended document
  • We care what the customer thinks of our documents, but we do not use any organizationally applied quality standards. Standards are principally driven by teams and their senior reviewers
  • File and forget–we do not take time to collectively reflect on documentation work practices
The belief statements of organizations working at the organized and repeatable level include the following:
  • We have begun to study our document users but see little value in a concerted effort to collect information on how well our documents “satisfice” their needs
  • We are surprised or even indignant when we get questions from our regulatory user looking for information that we included in our submission package
  • The quality of our document project management is inconsistent, but we are okay with that because that is reality and each project is unique
  • We do not see meaningful metrics beyond time for documentation projects…if we meet the deadline, then the ways of working had to be good
  • Nobody knows what others are doing in the process of review…the only way they know is via what may be discussed during a round table review
  • We believe in our described “ways of working” until faced with unexpected situations, then we panic and call for “all hands on deck”
  • We can easily get caught up in “process” at the expense of “product”
Now let’s contrast the above with the belief statements of organizations working at the optimized level, which include the following:
  • We always engage in collaborative pre-writing planning of documents to make sure we fulfill the strategic purpose of any given document
  • We are thoroughly committed to understanding our document users and we work to systematically collect information from them
  • We know how big a document will be even before we write it
  • We always do end-of-project analysis to collect lessons learned and then disseminate this information across the organization
  • We maintain a database of critical documentation work practice parameters and benchmark all documentation projects
  • We recognize that accuracy and consistency are just the start of ensuring quality…we have expanded focus to quality of argument and document usability
  • We are learning how to be innovative and not let the process control us

Originally published on our Knowledge Management blog

    13 November 2010

    Designing the architecture of the argument in development reports

    Kirk Livingston, a teacher and a medical writer working largely in the medical device industry, as well as a fellow blogger at LivingstonContent, shared this comment on my previous post regarding poor rhetorical shaping of arguments in research reports.
    There’s a lot of work involved with producing solid, well-reasoned conclusions. Can it even be accomplished as an “authoring team” or is it the work of an individual? Recent research about medical device companies in Minnesota suggests communication teams are chronically understaffed. So–who has time to come to the right conclusions? Thanks for the thoughtful post.
    I agree there is considerable work that goes into producing solid, well-reasoned conclusions. I am certain the work can indeed be accomplished by an authoring team. The caveat is that it can be accomplished only if the team engages in truly collaborative authorship practices and makes use of pre-writing planning tools to help shape the argument.

    I am not so sure that writing teams are chronically understaffed. I think the real issue here is the limits of interest and skill that team members may have towards the task of writing. As I reflect on 17 years of work associated with the authorship of regulatory documentation, I am convinced adding numbers to the equation will have little bearing on the rhetorical qualities of any given document. Larger writing teams will likely yield only emotional comfort--the notion of safety in numbers.

    Producing high quality documents is a function of knowing what you want the document to do for you, having a sense of where arguments must be played out in the document, and knowing what writing tools to use to get true collaboration and sharpen everyone's focus on the objectives you want the document to support.

    Producing high quality documents in the forum of pharmaceutical and medical device research requires understanding how to build out the red thread of logic in a research report. In pre-writing planning, it starts with something as simple as a table to be filled in by the authoring team. The table has three columns:

    Primary & Secondary Objectives | Conclusions | Key Data

    You then have one row in the table for each objective.

    The team's task is then to build out conclusions about the achievement of each objective and the data that warrant each conclusion. This simple but powerful writing tool lends considerable shape to the architecture of the argument that must be represented within a report.
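As a minimal sketch of that planning table in code (the objectives, conclusions, and data citations below are invented for illustration, not drawn from any real report):

```python
# Hypothetical pre-writing planning table: one row per study objective.
# The authoring team completes every cell before drafting begins.
planning_table = [
    {
        "objective": "Primary: demonstrate efficacy on endpoint X at week 12",
        "conclusion": "Efficacy demonstrated versus placebo",
        "key_data": "Primary analysis table: treatment difference favoring drug",
    },
    {
        "objective": "Secondary: characterize safety and tolerability",
        "conclusion": "No new safety signals observed",
        "key_data": "Adverse event summary tables; discontinuation rates",
    },
]

# Render as the three-column table described above.
print("Primary & Secondary Objectives | Conclusions | Key Data")
for row in planning_table:
    print(" | ".join([row["objective"], row["conclusion"], row["key_data"]]))
```

A blank cell in any row is a signal to the team that the argument for that objective is not yet built.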


    Originally published on our Knowledge Management blog

    24 September 2010

    6 Questions Regarding Medical Writing as a Profession

    A follow-up to some earlier posts regarding medical writing as a profession. I have six questions I am pondering about the notion that medical writing is a profession. I welcome your thoughts on these questions.
    1. Why should medical writers attend to professionalization issues? Why not?
    2. Why is professional status necessary or desirable? Why not?
    3. If you want to use the term professional, then who should set the standards and minimum qualifications for use of the term "professional"? How should the standards be established? Does an AMWA or EMWA certification make you a professional?
    4. What can be learned from other professions that have achieved a professional identity?
    5. Is seeking professional status merely a form of elitism, where the desire is to control knowledge and restrict access, as some observers suggest?
    6. What theories/practices inform medical writers as authorities and thus warrant professional status?

    Originally published on our Knowledge Management blog

      31 March 2010

      More On Why Do So Many Feel Complex Language Is Needed To Have Good Scientific Writing

      I made a post a few weeks back addressing the topic of why some authors in the life sciences feel compelled to construct documents that are dense and difficult to read. Here's a link to that post. The post generated quite a bit of discussion in a medical writing discussion group I belong to on LinkedIn. The discussion there has motivated me to do some further investigation on this topic. I want to share with you some interesting pieces of information that I hope impact how you think about what constitutes "good writing."

      I came across a series of papers by J. Scott Armstrong, who is at the Wharton School of Business at the University of Pennsylvania. In these papers, Armstrong considers the broad proposition that academic communications should enhance knowledge and that researchers should therefore invest energy in developing understandable ways to present their findings.

      In his paper, Unintelligible Management Research and Academic Prestige, Armstrong explores the notion that if the goal of successful communication is to share information with others and, if science places a premium on successful communication, then, all other things being equal, journals should prefer articles that are clearly written to those that are not. Armstrong concluded from the studies presented in this paper that clear communication of one's research is not the norm in the prestigious journals he examined, nor is it widely appreciated in his small test of the academic community at three universities.

      In another paper, Research on Scientific Journals: Implications for Editors and Authors, Armstrong states the following: "A review of editorial policies of leading journals and of research relevant to scientific journals revealed conflicts between 'science' and 'scientists'. Owing to these conflicts, papers are often weak on objectivity and replicability. Furthermore, papers often fall short on importance, competence, intelligibility, or efficiency."

      In yet another paper, Barriers to Scientific Contributions: The Author’s Formula, published in the journal Behavioral and Brain Sciences, Armstrong, with tongue in cheek, describes a set of rules that authors can use to increase the likelihood and speed of acceptance of their manuscripts. Authors should: (1) not pick an important problem, (2) not challenge existing beliefs, (3) not obtain surprising results, (4) not use simple methods, (5) not provide full disclosure, and (6) not write clearly.


      Originally published on our Knowledge Management blog

      26 August 2009

      What’s Wrong with PowerPoint as a Document Authoring Tool?

      In our McCulley/Cuppan consulting work we recently had a new client invite us to work with an authoring team on applying best practice to the planning, writing, and reviewing of a regulatory submission document. The document was going to be a significant piece of work, requiring over 500 pages and involving multiple authors across several scientific disciplines.

      At the first meeting, the project leader announced that she wanted to continue the use of PowerPoint (PPT) as the document planning tool. Her reasoning was in part that she and other team members had already invested considerable time and effort in generating a 540+ slide deck representing the data and messages to go into the regulatory document, and that PPT was a very familiar tool from extensive use in developing presentations.

      I have to say we were mildly surprised by the demand to use PPT as the primary tool to plan and outline such a large, complex document. We have encountered other organizations using PPT for document planning purposes, but never on such a large scale. On the surface, the choice of PPT as the tool to produce initial draft documents seems reasonable. It is familiar to many, provides an authoring environment that produces output that can appear on screen as an outline, can be commented on in oral or remote review, and can be easily augmented and updated. All of these apparent benefits would support an argument for using PPT as an outlining tool to plan any and all documents. However, the use of this tool does not readily scale to developing large, complex technical documents.

      Christina Haas, Karen Schriver, Thomas Huckin, Edward Tufte, and others tell us much about how readers interact with and read texts. From this collective body of work we have learned some things that can help us produce texts efficiently and in a way that readers perceive as very high quality. The prime method is to use tools that enable the design and review of texts in the way you expect your readers to engage in reading and analyzing the text.

      Successful collaborative authoring is significantly rooted in careful and thorough front-end planning. Choices of authoring tools are among the critical aspects of the document planning process, as tool choices impact (enabling or constraining) every other aspect of the planning and documentation processes. As authors and managers of authors, it is incumbent upon all of us to choose tools that accommodate our desired set of outcomes. Authors and managers must be cognizant of authoring tools that accommodate not only themselves and their ways of working, but others as well.

      It is our position that use of PPT for document planning negatively impacts all potential collaborative authoring and review outcomes. Our claim assumes that the goal of the work is to generate an effective document, economically produced, that meets or exceeds end-user expectations.

      I have outlined here the key advantages and disadvantages of using PPT to plan and document a multi-year, complex pharmaceutical development program.

      PPT Advantages
      • PPT is a habituated format; it’s familiar.
      PPT Disadvantages
      • presentations constrain data reporting rather than facilitate collaborative/interpretive processes (see my previous blog post on PowerPoint presentations).
      • format creates/maintains a huge but fragmented vision of the process and product, impacting output (see Schriver and Tufte).
      • PPT does not scale well to large documents as it limits information organization and searching is cumbersome, impacting review and the authors' ability to migrate material from PPT into a document-based format.
      • PPT presentations do not accommodate major revisions/reorganization; impacting logic, content, and organization of the ultimate document product (see Schriver).
      • the output decreases in clarity as the number of slides increases, impacting author/reviewer interpretation (see Tufte).
      • PPT output is not a document; conversion to MS Word format is inefficient, time-consuming, and expensive.
      The point in the choice of PPT as an authoring tool seems clear: familiarity wins, despite the fact that, from a business, work, or quality perspective, the disadvantages of PPT clearly outweigh this single advantage. Eschewing proven document authoring platforms for the familiar may have unintended consequences and bear a high tariff.

      Originally published on our Knowledge Management blog

      02 August 2009

      The McCulley/Cuppan Standards Development Process We Use with Our Clients

      As I mentioned in a previous post, in our McCulley/Cuppan consulting work we find the prevalent standards used to determine the “success” of a document are largely driven by simple measures of accuracy, plus a passel of “home-brewed” concepts for document characteristics that are largely idiosyncratic ideas about what matters to the reader.

      When you have 10 people reviewing a document, you will end up with at least 12 opinions about its quality (the incongruous number is intentional, as a reviewer sometimes offers more than one opinion, and those opinions often conflict) and ways of describing quality that are all over the map. People use different terms to describe quality, and if they actually use the same term, it is highly unlikely that they will use the same definition for it. So the first problem faced in the review process is the vocabulary used to describe quality attributes in a document.

      When writers and reviewers compose or edit text, they continually make decisions that concern semantics—the meaning their words convey—and syntax—the way the words are arranged and other structural elements of the document. However, writers and reviewers often base these decisions on assumptions that have not been tested with technically oriented adult readers or with complex, data-rich technical documents. Worse yet, many assumptions have never been tested at all to determine their validity. Thus, writers and reviewers order actions that may not in fact have the expected effects on a reader's performance (we know this is certainly true with one very important reading audience for pharmaceutical and medical device companies—the regulatory agency, like FDA). So the second problem faced in the review process is to understand which document elements have a meaningful impact on semantics and where to focus time and attention on the syntactical elements of a document.

      The first thing we do with a client is an examination of the terms used formally (such as in guidance documents) and informally (such as review comments in documents) to describe quality. This will give us a sense of how the organization views quality and how sophisticated they may be in trying to create a common platform that describes document quality for the organization.

      The second thing we do is to provide clients with the terms McCulley/Cuppan uses to describe document quality and why the concepts underlying these terms are extremely important to help determine document communication quality. A very important consideration is that the terms should speak the user’s language, with words, phrases and concepts familiar to the user, rather than system-oriented terms.

      We spend a huge amount of time talking about semantics. The term "semantics" refers to the study of how language conveys meaning. Using a broad definition of semantics, we help our clients learn how to focus on different features of a document: word choice, the position of information within sections, paragraphs, and the document as a whole, the relative importance of ideas, and the visual representation of data.

      Then we work with a client team to create and vet working definitions for the various quality standards.

      We then roll out the standards in a workshop setting and show people how the standards are applied to the types of documents they have and will produce.


      Originally published on our Knowledge Management blog

      16 July 2009

      How Do You Measure Communication Quality?

      One of the truisms we see in our McCulley/Cuppan consulting work is that rounds of document review tend to continue until the point when the document must be sent somewhere. That's why we say that in the pharmaceutical industry, the opportunities for making changes to a document are virtually limitless. The problem driving this situation is that most people involved in the authoring and reviewing process do not have good markers to inform them of the overall communication quality of a document. So without good markers, they are left to use really poor ones to measure document quality. Markers like: grammatical soundness; how many people have reviewed the document; how many rounds of review; and how many comments leveled on the text and data in the document. Unfortunately, grammatical soundness correlates only weakly with the communication quality of a document, and the other three markers do not correlate with it at all.

      To paraphrase Steve Jong in his paper You Get What You Measure—So Measure Quality (you can read it here): "if you don't measure it, you'll never get it." This is so true of document communication quality. In order to measure communication quality you have to employ meaningful markers. We find our clients typically employ only two markers that are useful: accuracy and compliance. Unfortunately, neither of these does much to measure the quality of argument, soundness of logic, or overall usability of a document for the end user. There are some useful markers to consider for measuring these document attributes. More on these markers in my next post.


      Originally published on our Knowledge Management blog

      18 May 2009

      Why the Focus on Review Practices?

      Recently, I was involved in an interesting project at a Top-Ten pharmaceutical company. The project entailed assessing the prevalent review practices of people working within one of the R&D groups. What I examined was the complete review record (from first draft to final draft) for various research reports produced within this group. The assessment involved a quantitative and a qualitative analysis of review performance. Following are some of my thoughts and observations.

      As anyone who’s familiar with this blog knows, improving document review practices is of great concern to us at McCulley/Cuppan. Why? Why do we, and our clients, keep coming back to the topic of review performance? The following observations on a recent consulting project provide some insight as to why review is, or needs to be, a central focus for improving knowledge propagation and dissemination.

      In this project we analyzed the effectiveness of a team in moving from conceptualizing to finalizing a document. We did this through an extensive analysis of their review commentary generated through different stages of document development. The findings were consistent with what we’ve seen over the past years of assessing review practices for other clients: some good, some bad, and some ugly.

      Bottom-line--we found considerable room for improvement.

      Following are some examples of what happens when resources and tools are misapplied during the review process.

      Senior Management as Spell-Checker? On this project, looking at four business-critical documents, we found that throughout multiple drafts of each document (even up through the final draft) there were edits for word choice, punctuation, verb tense, and spelling made by senior management, including the group vice president. Let me repeat that--the vice president of the research group focused on making spelling edits. Why is senior management focusing on basic edits to structure? That is one expensive copy editor. Should a senior official in a group be bogged down at the line level making edits? Is that the best use of their time, talent, and insights? I think not. If that is their focus, then who is responsible for keeping the arguments presented in the documents strategic and logical? This is a common practice; perhaps there is some thought that tweaking grammar improves the rhetorical and semantic structure of a document. Rather, I think it is merely that these are easy elements to fix compared with considering how well a document fulfills the intended logic and strategy.

      Simultaneous review What happens when you send a document to multiple people at the same time with the same review instructions (which are often merely "please review")? You get massive duplication of edits--to the tune of hundreds of same or similar edits per document. On top of that, the authors of these four documents had to deal with a variety of syntactical or lexical edits (structure and word choice) made to the same piece of text, but with slight variations. Whose edit do you choose? A common practice we find is that the edit made by the individual with the higher pay grade tends to trump all other recommendations.

      Chaos reigns supreme When a document is reviewed by upwards of 20 people through multiple drafts (and I mean multiple--like 5-8 rounds of review!!) and they receive little guidance or control as to what they may and may not do in the review process, then chaos often reigns supreme. We find that work is constantly revisited, with everyone making continuous edits throughout the document--this is why we say the opportunities to revise a document are virtually limitless. A case in point: we looked at one document that moved through eight rounds of review. Yes, you read that correctly--eight rounds of review. In tracking review comments for just one section of this document (yes, the following numbers reflect review comments for only one section), we found the following review performance:

      Draft 1--14 comments; Draft 2--55 comments/119 edits; Draft 3--97 comments/765 edits; Draft 4--42 comments/578 edits; Draft 5--37 comments/423 edits; Draft 6--15 comments/98 edits; Draft 7--37 comments/153 edits; and Draft 8--99 comments/272 edits
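To put the scale of that churn in a single pair of numbers, here is a quick tally of the counts reported above (a sketch only; draft 1 logged no separate edit count, so it is entered as zero edits):

```python
# Review commentary per draft for one report section: (comments, edits),
# taken directly from the figures reported above.
drafts = {
    1: (14, 0),  2: (55, 119), 3: (97, 765), 4: (42, 578),
    5: (37, 423), 6: (15, 98),  7: (37, 153), 8: (99, 272),
}

total_comments = sum(c for c, _ in drafts.values())
total_edits = sum(e for _, e in drafts.values())
print(f"{total_comments} comments and {total_edits} edits across 8 drafts")
```

That is 396 comments and 2,408 tracked edits for a single section of a single document.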

      Clearly on this project, the team had problems with establishing what they wanted to accomplish within this particular report section and how to establish when good is good enough.

      I know some readers of this post may say--"oh my gosh, that kind of performance would never happen with our document reviews." Keep in mind, I mentioned at the start of this post that such outcomes are all too common.


      Originally published on our Knowledge Management blog