08 December 2011

Why do we still organize clinical research reports by the IMRAD convention?

Here is Part 2 of the questions I posed in my previous blog post.

I have been thinking about this question for some time now. I remain curious as to why we still organize clinical research reports by the long-standing convention of Introduction, Objectives, Methods, Results, Discussion, and Conclusion: the IMRAD form of organization. The year 1665 is often cited as the origin of what is commonly referred to as the scientific paper. It was not until the second half of the 1800s that these documents moved in the direction of "theory – experiment – discussion." Then, starting in the early 1950s, the IMRAD structure became the predominant norm for the structure of a scientific paper.

The supposed reason for the IMRAD structure is that this organizational approach facilitates literature review, allowing readers to navigate articles more quickly to locate material relevant to their purpose. It has been suggested that the IMRAD structure effectively supports a reordering that eliminates unnecessary detail and allows the reader to access a well-ordered, noise-free presentation of the relevant and significant information. I can see how this argument may apply to research manuscripts published in journals, but it certainly does not apply to the kinds of study reports submitted to regulatory health authorities.

Unfortunately, the supposedly neat order of the IMRAD-arrayed report rarely corresponds to how regulatory readers actually use documents to help them make decisions. That is, decisions about how study results support or fail to support broadly or narrowly defined questions and arguments. People are surprised to learn that health authority review agents rarely read study reports. They certainly engage with study reports and make use of the content. But read these documents cover to cover? No, that is not an approach normally taken by the health authority review agent.

Frankly, I cannot blame them. After all, the IMRAD form of organization does little to help them get answers to their questions or support their approach to making decisions.

Over the past 50 years, the idealized sequence of the IMRAD structure has on occasion been criticized for being too rigid and simplistic. Perhaps the most famous castigation came from Peter Medawar in the early 1960s. He criticized the IMRAD design for not giving a realistic representation of the thought processes of the writing scientist. And then in the mid-1980s, A. G. Gross challenged the notion of the IMRAD structure as well, in a paper titled "The form of an experimental paper: A realization of the myth of induction."

I am right there with them in questioning the merits of this approach, at least when it comes to study reports submitted to health authorities.

I believe the reason for rigid adherence to the IMRAD approach for research document organization is the precedence of the past, not that it affords the most effective structure for the reader to get what they want from a document.

As for me—I'd prefer a Question & Answer approach—something along the lines of organizing the report by study objectives, with the study methods appropriately subordinated at the back of the report. Give me a report organized as follows: Objective—Results—Discussion—Assignment of Significance. Each section of the report deals with only one study objective, and information is presented in the sequence I have suggested. The last section of the report would then be Conclusions, where you integrate and contextualize the study findings to answer the really big question: So what do all these findings mean?

So what do you think is the most effective form of organization for a regulatory clinical study report?

06 November 2011

Why do we do some of the things we do in clinical study reports? Part 1

Over the past three months I have been doing a series of workshops focused on the authorship of clinical research reports for a couple of clients. This work has me posing questions about two very specific notions:
  1. Why do people still demand that a clinical research report intended for incorporation into regulatory submission packages must be written as “stand-alone” documentation?
  2. Why do we still organize clinical research reports by the long-standing convention of Introduction, Objectives, Methods, Results, Discussion, and Conclusion?

In this post let’s consider the notion of the “stand-alone” clinical study report.

The first question I have is: why must these documents be stand-alone? The term "stand-alone" generally refers to a document or device that is self-contained and does not require any other document or device to fully function. I argue that a clinical study report submitted to a drug regulatory agency is part of a larger corpus, and as such the report should never be construed as a stand-alone document.

It is a given that most submissions made to drug regulatory agencies contain multiple research studies, so you will never have a "stand-alone" research study. An individual study is part of a corpus of clinical research that operates best collectively. Regulatory submissions also have multiple sections, mandated by guidance, that are topoi (places of argument) for different and often integrated attributes of clinical research drawn from this corpus. So clearly the clinical study report is not a stand-alone document when it comes to interpretation and argumentation.

Another point: a clinical study report incorporated into a regulatory submission will have numerous appendices, including the final version of the research protocol and the statistical analysis plan. So I struggle to understand the premise that says you must incorporate extensive elements of the protocol and stats plan into the body of the clinical study report in order to assure the report is a stand-alone document. I truly fail to see the merit of this premise.

My last point is this: why do so many feel it is necessary to write the Introduction sections of clinical study reports in the classic inductive form of rhetoric that moves from describing the disease condition, to the therapeutic void, to the chemical or biological construct of the drug under study and how this drug's pharmacological action addresses that void? The standard response is, "Well, we want to make this a stand-alone document." Why? All the rhetorical moves you are trying to make in the Introduction will certainly happen elsewhere in the drug submission package, in the appropriate topos. So why waste your time here, at the low level of the clinical study report? In my way of thinking, the study report Introduction is intended to convey only two points:
  1. Why are you doing the study? (What answers do you want at the end of the study that you do not have now?)
  2. What previous work has informed the design of this study?

21 October 2011

Designing Regulatory Submission Documents for Decision Making

I have made blog posts from time to time talking about users of the various document genres represented in communication about life science research. I believe that for many, my portrayal of how people engage with documents presents a different picture than the one they hold in their imagination. At least I hope they are designing documents to some mental model, however inaccurate, as that would help explain why so many documents I examine are rather ineffectual communication vehicles. Unfortunately, I believe most documents are generated according to the model of precedence--"what did we do last time?" I think this is generally the only question consistently asked in the life sciences regarding document design.

In this post I want to talk about designing documents to satisfy readers who must make decisions, like a regulatory health agency reviewer. These reviewers must decide whether a drug, biologic, or device can be marketed as desired by the sponsoring company.

There is a nice body of work in the cognitive sciences that describes the mindset and reading style of the selective professional reader who is reading documents in order to make a decision or a set of decisions. It is clear from this research that these readers are not empty cups waiting to be filled up with whatever you want to send their way in your documents. Not only is the professional reader very selective in what they will read, they are also very critical. That is, they read against the information you have submitted in your documents, looking for insufficiencies and weaknesses.

Remember that the selective professional reader working in the health regulatory agencies is a very sophisticated reader, and in many instances they know what type of information they need right after they review the requested marketing claims for the drug or the medical device.

Here is the part that many people miss in creating their documents. No matter how logical and detailed your information may be, if you have not correctly anticipated what the selective professional reader needs in terms of information and the necessary level of detail, then you have failed. It is that simple.

Research shows clearly that decision-making is a function of schema-based cognitive processes deployed by the selective professional reader. The process is well understood now, and the research suggests that all documents to be used in a decision-making environment should be designed to support a schema-based reading style rather than the approach of reporting information or summarizing findings.

Schema activation is at the heart of decision making. Schemata provide the interpretive framework for a reader to pass judgement on data and written discourse. In the process of reviewing drug and device submissions, the selective professional regulatory reader will make use of many, many schemata during the course of the decision-making process. Each schema they deploy will have multiple slots for information. They will want to fill every slot, and they search documents and databases looking for the details associated with the various slots in a particular schema. For example, a specific schema can be characterized by this question: "Will the indicated drug dose lead to an undesirable toxicity profile in the elderly, underweight, and hepatically compromised patient?" They will then examine the documents presented to find all the information they desire in making the decision whether the drug label language regarding dosing is acceptable or needs to be modified. And they read looking only for what they believe are the necessary details. In essence, they read to the virtual exclusion of anything else within the document(s).
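
To make the slot-filling behavior concrete, here is a minimal sketch in Python that models a reviewer schema as a simple data structure. The slot names, dossier sections, and matching rule are all hypothetical illustrations, not a description of any actual agency tool or process:

```python
# Illustrative model of a reviewer schema: a question plus named slots,
# each of which must be filled with evidence found in the submission.
# All slot names, section titles, and text are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Schema:
    question: str
    slots: dict = field(default_factory=dict)   # slot name -> evidence or None

    def unfilled(self):
        return [name for name, value in self.slots.items() if value is None]

dosing_schema = Schema(
    question=("Will the indicated dose lead to undesirable toxicity in the "
              "elderly, underweight, hepatically compromised patient?"),
    slots={"hepatic impairment": None, "elderly subgroup": None,
           "low body weight": None},
)

# A toy dossier: section title -> content.
dossier = {
    "PK in hepatic impairment": "AUC doubled in Child-Pugh B subjects...",
    "Study design": "Randomized, double-blind, placebo-controlled...",
    "Safety in elderly subgroup": "No increase in Grade 3+ events at >= 65 y...",
}

# The reader scans only for material that fills an open slot; everything
# else in the document is effectively invisible to them.
for title, text in dossier.items():
    for slot in dosing_schema.unfilled():
        if slot in title.lower():
            dosing_schema.slots[slot] = text

print("Slots still unfilled:", dosing_schema.unfilled())
# -> ['low body weight']  (the reader keeps searching, or raises a query)
```

The point of the sketch is the reading behavior it encodes: content that does not map to an open slot in the active schema never gets read.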

This is why we talk about the selective professional reader scanning documents for very specific information, nowadays aided by sophisticated search tools to navigate through documents to locate specific pieces of information and data.

Much of what the selective professional reader at a regulatory agency does is further characterized as an attribute-based search. The attributes are often compared against known standards, but they may also be judged against subjective standards. The attributes under consideration are largely tied to the product label claims the sponsor wants to make.

So the net-net here is that life science research professionals need to reconsider what they believe to be acceptable document design parameters. It is essential to build out effective working models for how the selective professional reader using your documents makes decisions and what schemata are deployed in this process, and then design documents that support these specific reading methods.

Presentation: Why Most Document Reviews are Not Really Reviews

View Greg's presentation slide set from the recent DIA Clinical Forum:

12 October 2011

Notes on Managing Reviews Presentation and Discussions at DIA Clinical Forum

We had an interesting set of presentations and discussions yesterday at the DIA (Drug Information Association) Clinical Forum in Basel, Switzerland related to managing reviews. Here are my summary notes on the session.

I spoke along with Barry Drees from Trilogy Medical Writing and Rhys Stoneman from GlaxoSmithKline. We each covered aspects of improving review outcomes for clinical research documentation.

I addressed the point that to effect change you must look at the root causes of poor review outcomes, not merely the symptoms of poor outcomes. Barry and Rhys talked about how their organizations--a contract writing house and a large multinational pharmaceutical company, respectively--look to improve review outcomes.

I talked about how the underlying root causes of most poor review outcomes fit into two boxes:

  1. No formal training in review methods
  2. Poor reviewer discipline

Note: Jessica will get my presentation slide set posted to the M/C web site.

I stressed that without training, most people are left to inspect and edit documents and generally are not sensitive to how rhetorical situations vary with audience and document genre. I also talked about how reviewer discipline is difficult to modify without making process changes and applying performance mandates.

Barry talked about methods he uses to influence the behavior of review teams. He had a great line on one of his slides addressing lengthy review cycles: "We do not have enough time or money to do it right, but there is always time and money to do it over."

Rhys talked about the steps taken to transform work practices and belief systems related to the authorship and review of clinical documents. We did a lot of support and training work at GSK on this project, so I have seen the changes firsthand. It is fair to use the descriptor "dramatic" to characterize how they have reformed work practices and outcomes.

Not surprisingly, the topic of greatest interest to the audience was reviewer discipline and what can be done to modify or blunt the impact of bad behavior.

10 September 2011

Why Do Life Science Research Groups Fail to Improve Review Practices?

I've been away from posting entries here for much of the summer, as I had quite a bit going on with both McCulley/Cuppan projects and some personal writing projects. One of those writing projects is a collaboration with my long-standing colleague at the University of Delaware, Stephen Bernhardt. We worked together on a paper I was looking to submit to the Journal of Business and Technical Communication (JBTC), summarizing our examination at two large pharmaceutical companies of review practices applied to clinical research reports written to support new drug registration submissions.

The paper is titled: Missed Opportunities in the Review and Revision of Clinical Study Reports. It is scheduled to be published next April in JBTC.  

The paper summarizes findings derived from formal interviews, examinations of review efforts on succeeding drafts of various clinical study reports, observations of roundtable discussions of draft reports, and our formal assessments of document quality between early draft and final report versions.

The key finding is that document review was too focused on low-level edits as opposed to global revisions that would improve the arguments and address audience concerns. We also found that time-consuming document reviews did not lead to demonstrable improvement in report quality, with evident and important problems left unattended from draft to draft. The interviews showed most reviewers felt that over 80% of their review effort attended to intellectual versus structural aspects of the document, whereas our assessment of those review efforts showed that nearly 75% of review effort in fact attended to numerical and grammatical accuracy and other structural aspects of the documents. Further, the interviews showed that very few individuals had any meaningful insight into what other reviewers attempted to accomplish during the course of their reviews.

Reviews generated voluminous remarks and edits, but the vast majority of both in-text edits and marginal comments addressed data integrity, simple reordering of information, and low-level features of style and language. Few remarks in any round of review addressed construction, completeness, or representation of arguments, logic trails linking purpose and objectives to discussions and conclusions, resolution of difficult issues, or study design rationales that would satisfy a skeptical regulatory reader.

The volume and type of review remarks were relatively similar between early drafts and late drafts. Reviewers spread similar remarks in similar proportions throughout the documents, whether the review was of an early or late draft.

Reviews did not significantly improve communication quality, as measured by assessment of initial draft compared to final report version. The review effort apparently had little impact on the communication quality of the final version of the reports we examined.

Troublesome issues regarding soundness or completeness of evidence, irregularities in study conduct, and interpretation of data frequently went unaddressed in final reports.

A theme throughout the literature is that document review frequently brings into play conflicting or competing purposes. We clearly saw this too in our examinations.

We have now done extensive assessment of review performance in eight pharmaceutical companies. The problems mentioned above are clearly widespread.

Clearly there is a need for companies to become more evaluative and methodical regarding their own work practices, in part because the costs associated with planning, authoring, and reviewing individual research reports are substantial. If we consider the time and costs of authoring, review, and publication preparation, we estimate that a final clinical research report might range in cost from $50,000 to well over $200,000. If we add in the opportunity costs for time spent in inefficient review efforts (most companies stipulate in their SOPs that reviews are to be completed in two rounds, yet few ever attain that standard) and for having to respond to regulatory agency inquiries occasioned by the poor communication quality of a research report, then the costs for a final version clinical research report quickly swell, and the cost-per-page of the final report approaches $2,500/page. Numbers I find incredible.
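
To show how the per-page figure falls out, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption consistent with the ranges above, not a measured figure from our assessments:

```python
# Back-of-the-envelope sketch of fully loaded report costs.
# All figures are illustrative assumptions, not measured values.

direct_cost = 200_000        # authoring, review, publication prep (upper estimate)
extra_review_rounds = 2      # rounds beyond the two stipulated in most SOPs
reviewers_per_round = 8
hours_per_reviewer = 10
loaded_hourly_rate = 150     # assumed fully loaded cost per professional hour

opportunity_cost = (extra_review_rounds * reviewers_per_round
                    * hours_per_reviewer * loaded_hourly_rate)   # = 24,000

total_cost = direct_cost + opportunity_cost   # 224,000
pages = 90                                    # assumed length of the final report

print(f"${total_cost:,} total -> ${total_cost / pages:,.0f} per page")
# $224,000 total -> $2,489 per page -- right around the $2,500/page mark
```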

Even more astounding is the continued tolerance for such wholesale inefficiencies within these companies. The big question is why. Why do these life science organizations tolerate such costs? I recognize that changing non-productive, conditioned, inefficient practices is not an easy matter and that a company must do a lot of work to counter poor reviewer discipline and the ingrained tendencies of review teams to focus on low-level stylistic edits as opposed to high-level rhetorical concerns. However, other industries have placed a premium on good review performance and invested the effort to change practice and behavior to achieve the desired outcomes. 

28 July 2011

When is good regulatory writing "good enough"?

Here's another blog post worth repeating (originally published in 2010):

We do not talk much on this blog about the use of language or the application of terms in science writing. The principal reason is that much of what we see in regulatory submission documents is genuinely "good enough." However, others do not necessarily see it that way. I want to share with you how discussions in review roundtables can end up focused at really absurd levels of detail with a misapplied sense of establishing quality communication.

In our consulting work, we try to be disciplined during our document reviews and only comment on language when it truly obscures or alters meaning. Being grammatically perfect in regulatory submission documents is a nice notion, but in practice consumes way too much time and organizational energy and will yield little in terms of outcomes.

We share this point with people all the time...but at times the advice goes unheeded, and even worse...at times people just do not know when to move on and address the really big concerns in their documents.

A case in point is a situation I observed regarding a long-winded discussion in a review meeting over the use of the term "very critical." The term "critical" in a medical sense means: of a patient's condition, having unstable and abnormal vital signs and other unfavorable indicators. In theory, the meaning of critical is a black-or-white proposition without qualifications regarding gradation. Something is either critical or it is not. Therefore there should be no adverbs like "very" in front of the term "critical" to connote a measurable degree of criticality. In this roundtable review the team got caught up in a 30-minute discussion that involved only two people arguing whether to use the term "very critical" or change it to "critical."

Being pragmatic, I'd have to say: "Guys, what are you thinking? You hold a team hostage for 30 minutes to argue over grammatical accuracy? To argue over something that will not matter if and when it is read by a regulatory reviewer?" There were 10 professionals sitting in the room, and 8 did nothing for 30 minutes. The cost of salaries alone is enough of an argument to say, "Forget about it, let's move on...we cannot afford to argue over such insignificant detail." When we add in the opportunity cost (what these 10 people collectively could have been doing with their time), then for sure you have to make the argument.
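
For the sake of that argument, here is the salary arithmetic as a quick sketch; the fully loaded hourly rate is an assumption chosen purely for illustration:

```python
# The 30-minute "very critical" debate, in dollars.
# The hourly rate is an assumption for illustration only.

professionals = 10
idle = 8                    # people contributing nothing to the debate
meeting_hours = 0.5
loaded_rate = 150           # assumed fully loaded cost per person-hour, USD

direct_cost = professionals * meeting_hours * loaded_rate   # 10 * 0.5 * 150 = $750
idle_cost = idle * meeting_hours * loaded_rate              # 8 * 0.5 * 150 = $600

print(f"Salary cost of the debate: ${direct_cost:,.0f} "
      f"(${idle_cost:,.0f} of it spent on people sitting silent)")
# And that is before counting the opportunity cost of what those ten
# people could have accomplished in the same half hour.
```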

The above episode gets played out time and time again in review sessions all over the pharma and medical device industries, and it is the reason why I am steadfast in my position that the vast majority of people involved in authorship and review do not know the answer to the question "How do you know when good is good enough?" The end result is that inordinate amounts of time are applied at the wrong level of detail in reports and submission documents.

11 July 2011

New Mobile Site Launched

We've added a mobile version of our McCulley/Cuppan website. The mobile site will offer regularly updated tips and information on upcoming events (check there for details on Greg's upcoming DIA presentation), as well as details on our background and services.

You should be automatically redirected to our mobile site when visiting our full McCulley-Cuppan.com website on your phone's browser, but if not, the address for the mobile site is mcculley-cuppan.mynetworksolutions.mobi.

28 June 2011

Difficulties Assessing the Value-Added of Professional Communicators

Here is a post originally published in 2008, but the topic is worth repeating.

As mentioned in previous installments of this discussion, we at McCulley/Cuppan believe that the professional communicator can add considerable value to the research and development process. But we recognize that in order to justify the need (and the added expense) for this expanded role, managers have the daunting task of trying to quantify the value added by such professionals to the organization.

The professional communicator must add value to a company's information processes and products in order to justify their presence within the pharmaceutical organization. Mead (1998) defines the concept of value quite simply: "Value can be defined as the benefit of an activity minus its cost." However, to apply that concept of value to the role of a communicator or science writer is not so simple. The value of the communicator in the life science research enterprise is not easy to determine for one principal reason: what value does one place upon a timely, efficient, and effective regulatory submission document?

The problem is that communication in the life science industries does not lend itself to easy analysis against the traditional measures of professionalism that are routinely applied in other aspects of research, or in other writing settings for that matter.

To measure the benefits of the professional communicator's activity within the life science research organization, and in particular within a research project team, it is necessary to turn to the existing body of research illustrating the ways in which communicators provide project teams with valuable input and experience that enhance the overall quality, timeliness, and labor allocation of the tasks of research and reporting on research. Quantitative and qualitative research methods, including case studies and surveys, offer data to demonstrate the significant effect professional communicators have on both organizational processes and products. However, little research and few publications directly address the roles and value of the professional writer within the life science research industries.

It is thus necessary for savvy managers to cast a wide net and look to other fields for relevance to the context of life science research. Managers should consider the task of writing in the engineering, aerospace, and computer industries, industries that share an intense document development environment similar to what we see in life science research. These industries are similar to the life sciences in that the document product is used by regulators or buyers to form an informed opinion of the company's proposed product or service. The caveat we must offer is that there is a thin volume of literature assessing the value added by communicators to these organizations, so the existing body of case studies may not be sufficient. Managers may need to seek out others in their organization or industry who have made use of communicators on their project teams.

Professional communicators can contribute to the understanding of the value added by documenting their work and comparing their tasks and targets with company benchmarks. Communicators must document their tasks over the life of the project because it is not possible to assess value added simply by looking at the documents that writers produce. Jong (1997) points out the problems in the "inspection model of quality control," a model that, when applied to documentation, focuses merely on the cost of writing and occasionally the cost of review. This model is "inherently vulnerable to error," as significant costs and errors may be missed. Jong claims, "The best way to improve the quality of the output is to improve the quality of the input" (40). Improving quality of input suggests that researchers carefully consider how they present the logic of their interpretations, how they design information to satisfy readers' needs, and how they represent the resolution of issues within the framework of their document or sets of documents. Communicators can facilitate this consideration and ensure that the documents are logical, complete, and meet readers' needs. One way to demonstrate these skills to managers is by comparing "before-and-after" documents that show how the communicator improved the logic and readability of initial drafts, or how a communicator involved from the beginning of a project can better convey the resolution of issues than a writer brought in at the end of the project.

The variety of roles for writing in the life science research environment makes it difficult to employ a simple model for calculating the value added of writing specialists where the principal output is an informational product. As Fisher (1998) states, "The profession of technical communication is difficult to define in scope" (186). Within the pharmaceutical and life science research industries, writers and communicators have very different roles in various enterprises. Some are primarily writers, others function principally as editors, some coordinate the compiling of documents for registration filings, some facilitate team-based document development, and some concentrate on knowledge management. The challenge is to understand how professional communicators can contribute to efficiently and effectively producing the desired outcome: a high quality document product that helps in the conveyance of knowledge or the advancement of work on drugs and medical devices. Professional communicators must be able to defend their roles to management. By providing "before-and-after" versions of documents and recording their tasks and timelines (thus enabling managers to compare those with previous projects), professional communicators may be able to reverse the current trend of treating writing as a wholly separate task from research and may be able to start a new trend of utilizing professional communicators.
Works Cited
Fisher, J. "Defining the Role of a Technical Communicator in the Development of Information Systems." IEEE Transactions on Professional Communication 41 (1998): 186-199.
Jong, S. “The Quality Revolution and Technical Communication.” Intercom 44 (1997): 39-41.
Mead, J. “Measuring the Value Added by Technical Documentation: A Review of Research and Practice.” Technical Communication, Third Quarter (1998): 353-379.


Originally published on our Knowledge Management blog

03 May 2011

A New Outlook is Needed Regarding Document Review

Companies need to become more evaluative and methodical regarding their own work practices.

The costs associated with planning, authoring, and reviewing research reports and regulatory submission documents are difficult to determine. If we consider the direct time and costs invested in authoring, reviewing, and publication preparation, then a conservative estimate on the cost of a final report will range from $50,000 to $200,000. If we add in the opportunity costs for time spent on endless draft versions versus other more productive professional work, then the costs start to skyrocket. When you add up the collective costs across the volume of documents generated in a year, well, now you are talking about some really big numbers.

Why in the life sciences are the gross inefficiencies of review practices tolerated? Is it the old "Out of sight, out of mind" approach? Then again, it may be comfort in a bias for action: "Hey we hit the deadline, so no worries, the end justifies the means."

Much can be done in terms of specific actions to shine a spotlight on the inefficiencies of review and encourage effective work practices:
  • Articulate and rely upon meaningful document quality standards and best-of-craft guidance for executing effective reviews. Emphasize shared standards over individual preferences.
  • Define the scope, purpose, audiences, and argumentative strategy for the document before drafting. In turn, use this early document planning to guide reviews. In other words, practice "Aim, ready, fire!" as the standard documentation method.
  • Define reviewer focus and responsibilities, acknowledging unique and strategic expertise. Involve specific reviewers for specific purposes. Inform all reviewers regarding roles and points of review focus and the differences between reviewer and editor. Problems of word choice, style preferences, transcription accuracy, and format should be handled only by the writer/editors and not made the object, intentional or otherwise, of review.

We do not believe that changing non-productive practices is an easy matter, or companies would have already done so. We do believe that recognizing non-productive review practices should be an object of focus for more organizations. We understand that collaborating to develop complex documents with sound arguments involves difficult cognitive and social practices. If a company establishes the goal of producing quality documentation through efficient and effective review practices, it will find that it must do a lot of work to counter the ingrained tendencies of people to focus on low-level stylistic edits as opposed to high-level strategic and rhetorical concerns.

18 April 2011

How the Most Sophisticated Documentation Groups Operate

At the apex of our version of the Documentation Capability Maturity Model are the Level 6 "Optimizing" groups. These are very sophisticated writing groups that are continually looking at ways to enhance work practices and processes so as to better serve their customer needs.

At this level, the job descriptions for all the subject matter experts contain extensive descriptions of their roles and responsibilities in the development of high quality document products. Work performance is not merely judged on how well they execute the design and conduct of studies, but also on how effective their documents are in supporting organizational strategies and economic objectives.

At this level, the writing groups rely upon carefully defined document quality standards that reach well beyond style guides and template preferences. These groups articulate detailed guidance for executing effective strategic reviews. All understand the importance of working to shared standards over individual preferences. Reviewers authenticate and sign off on documents as meeting strategic intentions and communication quality standards. If problems arise downstream, then the reviewers are held culpable for the problems.

Documentation project management inside a Level 6 writing group tracks the amount of time, along with other parameters, applied by individuals to planning, authoring, and reviewing documents. Performance is always reviewed for "lessons learned" at the end of all major writing projects.

At this level, there is a clear commitment to the assessment of document usability for the target audience, and even testing of document designs for certain types of documents (such as clinical study protocols) early in the document life cycle.

These writing groups take full advantage of authoring tools to assure information is effectively generated once and then repurposed to other documents as a drug or device asset moves forward in the development life cycle.

Lastly, these very sophisticated groups make time in their busy schedules for innovation both in terms of work practice and work tools.

I do not know of any groups in the pharma or med device industries who have the above mentioned attributes. Do you?

31 March 2011

Still More on What Sophisticated Writing Groups Do

Continuing on the discussion of the last two posts regarding how we categorize the level of sophistication of writing groups. The next stop on the documentation capability sophistication chain is Level 5--Managed and sustainable.

The Level 5 writing organization applies a broad range of sustainable best-of-craft work practices.

At this level, the planning of business critical documents occurs in parallel with the planning of clinical research studies. Writing teams always deploy document prototyping techniques, such as populating sections of a clinical study report after the study protocol has been completed and planning the report results sections once the statistical analysis plan is finalized. Level 5 writing teams can tell you how many pages will be in their study report even before LPLV (last patient, last visit) because there has been so much planning. Level 5 writing teams always map arguments across the sections of a clinical study report.

Level 5 writing teams are aware of agile authoring techniques, but have not yet deployed these work practices. Level 5 writing teams clearly understand that repurposing information involves a whole lot more than merely cut and paste.

Level 5 writing groups clearly understand what strategic review means. The teams articulate and rely upon defined document quality standards and guidance for executing effective reviews. They always define reviewer roles and responsibilities, acknowledging unique and strategic expertise. They recognize that the problems of word choice, style preferences, transcription accuracy, and format should be passed on to the writer/editors and not made a focus of review.

Level 5 writing teams routinely solicit document user information and maintain databases to help them track and understand usability and readability statistics on all of their documents. At Level 5, teams engage in root cause analysis to ascertain why questions were received from regulatory agencies. Level 5 writing teams apply standards and measures to the task of document authorship and review that are well down the highway from the simple metrics of time and draft numbers. Level 5 writing teams always engage in a lessons-learned session at the end of each documentation project, and such sessions are not seen as merely an activity to be filed and forgotten. Process and practice are tweaked and refined for the next time.

21 March 2011

More on Sophistication of Writing Groups

In our McCulley/Cuppan version of a Documentation Capability Maturity Model, the fourth level is called Organized and Repeatable, as suggested by JoAnn Hackos in her various books. But perhaps a better term to use in place of repeatable is consistent, as at the fourth level the application of well-defined work practices is much more consistent across documentation projects. In these organizations, the majority of team members operate by the credo: "We recognize some of our processes and work practices represent 'best of craft' and we know they will get us through any crisis."

At this level, the writing group does keep project tracking data in a simple database. Unfortunately, most of the data only tracks time and draft iterations. These remain the only parameters used to create project milestones.

At this level there is some recognition that the role of the medical writer involves more than "just writing," and on some teams writers are seen as knowledge managers who are actively involved in team meetings well before database lock. However, there remain credibility issues for the writing group in the broader organization, where writers are often seen as a necessary evil who only "just write" the reports.

A fourth level writing organization routinely uses pre-writing planning and project kick-off meetings to shape team expectations. At this level, the writers are more aware of document design considerations that impact usability, but little effort goes into mounting discussions with teams about document design during the pre-writing planning. This remains a discussion item for draft review.

Little attempt is made to formally collect information from document end-users about readability and usability of the documents submitted to them. Any information collected happens on a casual basis and is largely applied in ineffective ways.

Some of the belief statements found in the Level 4 Writing Group are as follows:

  • We are surprised and even sometimes mad at our document end-users when we get questions from them regarding information that was incorporated into submitted documents.
  • We recognize that we cannot just have meetings where we talk about the data, that we need to have meetings where we plan how and what we are going to say about the data in the reports, but this does not always happen.
  • We have good pre-writing planning and review tools, but it is a struggle to get the subject matter experts to actually use them.
  • Reviewers still spend too much time editing and not reviewing because they believe editing style and word choice help to make a document significantly better.
  • During the review process many reviewers still feel compelled to revisit sections already reviewed in an earlier draft version of the document.
  • Team members recognize best practice review calls for different roles and points of focus during the review process, but many still do not follow the guidance.
  • Sometimes we get stuck in our processes and still like to make all documents "look just like the last one that got approved."

16 March 2011

So How Sophisticated is Your Writing Group?

From time to time I have talked about the sophistication of writing groups in the pharma and medical device industries. My position is that for the most part, writing work practices in the life sciences are well removed from "best of craft" work practices.

In my authoring workshops, I offer the portrayal that most writing groups in pharma and medical device companies rate only a 2 or 3 for sophisticated work practices on a six point scale. I argue that most are rudimentary at best in terms of sophistication.

This usually gets me a couple of the desired guffaws from the people in the room. I remind them that just because you are really, really sophisticated in the conduct of science does not mean you are equally sophisticated in the tasks associated with reporting on this science.

The six levels in our writing sophistication system are based on the parameters created by JoAnn Hackos. The scale of sophistication is as follows:

  1. Oblivious
  2. Ad-hoc
  3. Rudimentary
  4. Organized and repeatable
  5. Managed and sustainable
  6. Optimizing

We have created criteria for each level that differs from what Hackos did for the software world. Our criteria for Rudimentary, where we think most writing groups fall, is characterized as follows:

  • We use style guides and templates for all of our documents and routinely make decisions on what to do based upon previous documents "approved" by senior management.
  • We always coordinate on design and basic messages and worry about writing style across documents in a development program so that we can assure consistency in terms of appearance, style, and common messages.
  • We make use of documentation project management to assign resources and ensure documentation projects meet timelines and budgets.
  • We recognize that documentation team performance varies across teams and we DO NOT know the performance factors having the greatest influence.
  • We DO NOT systematically track user feedback regarding readability and usability of our documents. 

The credo for rudimentary groups is: "We always follow our routines except when we panic."

The belief statements at this level would include the following:

  • We are supposed to develop information strategies for our reports before we write them, but we can never get the Subject Matter Experts to take the process seriously.
  • We have lots of meetings to talk about what the data means, but we rarely have a meeting to talk about how we will represent the data in our report and never talk ahead of time on how we will represent the implications for what we see or fail to see in the data.
  • We don't have time to talk about how we want to design arguments in our reports. We have more important things to do.
  • We have no idea how big a document will be until after it is written.
  • Just write everything you have to say and we'll fix it during review.
  • Anybody with similar professional training to mine will want to read a report in exactly the manner I choose to read it.

So where does your group stack up?

11 March 2011

More on Review--Surveys Show Performance Has a Long Way to Go

In a poll on the McCulley/Cuppan website we posed the question: "How satisfied are you with the review performance in your organization?"

The response revealed what we've consistently seen in our consulting work at McCulley/Cuppan: 87% of respondents saw room for improvement in the review performance of their organization.

Though this was an informal poll with 38 respondents, 23 of the respondents (60%) were either Unsatisfied or Very Unsatisfied, and only 3 respondents were Very Satisfied.

Here's the breakdown:
  • 3 (8%) Very Satisfied
  • 2 (5%) Satisfied
  • 10 (26%) Somewhat Satisfied
  • 11 (29%) Unsatisfied
  • 12 (32%) Very Unsatisfied
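
Here is a quick sanity check, in Python, of how the 87% figure above falls out of this breakdown; the only assumption is that "room for improvement" covers everyone below the Satisfied level:

```python
# Poll counts from the breakdown above (38 respondents in total).
responses = {
    "Very Satisfied": 3,
    "Satisfied": 2,
    "Somewhat Satisfied": 10,
    "Unsatisfied": 11,
    "Very Unsatisfied": 12,
}
total = sum(responses.values())                                       # 38

# Assumption: "room for improvement" = everyone below Satisfied.
room = total - responses["Very Satisfied"] - responses["Satisfied"]   # 33
print(f"{room}/{total} = {room / total:.0%}")                         # 33/38 = 87%
```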

In a survey at a pharma company we asked 139 individuals to identify their personal successes and frustrations with review performance as experienced in that company. The results were quite telling:
  • Listed frustrations outnumbered successes 2.5 to 1
  • 13.5% of respondents cited poor reviewer discipline as their greatest frustration
  • 19.5% of respondents cited collaborative pre-writing planning of key messages as the root cause of successful document review projects.

02 March 2011

Visualizing Argumentation

I want to share with you a book that is definitely worth exploring: Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. Here's a link to the book reference page on Amazon.

The point of these tools is to support user decision-making with visual prompts that summarize the "pro" and "con" arguments on any given topic. In our McCulley/Cuppan consulting we've been huge advocates of this type of approach for years now. If you intend to make a regulatory submission document message-focused and issue-driven, then you have to create carefully crafted arguments.

Constructing arguments, and at the same time understanding them, is not easy, especially when working in a collaborative environment. A good argument in any research or regulatory report is a structure of messages linked in inferential or evidential relationships that supports your conclusions. Getting all the pieces and underlying propositions pulled together is not an easy task. Hence the use of visualization tools.

Visualization of arguments is well known in the research community as the most effective means to help foster understanding and improve critical thinking. The concept of argument mapping goes back to J. H. Wigmore, and mapping remains a routine authoring tool in the legal community. I am suggesting it needs to become a more common tool in the pharmaceutical and medical device writing communities.

So back to the book: it describes an interesting software-based approach that is light years ahead of the tabular approach we have used for years.

22 February 2011

Be a Reader to Be an Effective Reviewer

When I first started working at McCulley/Cuppan, Greg gave me the task of reviewing a clinical study report, not only as a way to gauge my reviewing skills, but as a starting place for lessons on judging document quality using the McCulley/Cuppan Document Standards. The first instruction he gave me was "Read before you edit."

Read before you edit.

Sounds simple, but when asked to review a document, most people start off by looking for mistakes. I remember sitting there, trying to pay attention to the document when all I could see was a misspelled word. All too often we see ourselves as that grade school teacher with the red pen in hand, correcting verb tense and punctuation errors. We don't see the document at the meta-level, so we miss the message and logic of the document.

Think about how you read a book, an article, or this blog. You read for meaning. If I leave a TyPo or too, you still understand that this post is about editing and reviewing documents. Now consider your habits when someone asks you to look at a document. Even if they say they want you to make sure that a paragraph makes logical sense, your eye probably catches all the typos and then proceeds to analyze the logic.

The truth is that inspection is easier than reviewing and is a quick way to show your contribution to a document. A way to say: Look, see all these good edits I made? The reality is that lots of time is wasted, especially on first or second drafts, fixing nit-picky mistakes in grammar and punctuation in sentences that will be cut from the document before the third draft. If you read a document (or section of a document) once all the way through, you will avoid re-reading sentences again and again just to get the meaning, and you will avoid the need to revert your changes back to the original sentence.

During the last few months, I've been assessing a number of documents to judge reviewer performance as part of my work at McCulley/Cuppan. What I see all too often is that a reviewer will correct a sentence, changing a word or two, only to go back and change the sentence again within that same review session or in a subsequent draft. It is apparent that many reviewers read to fix grammatical and stylistic errors. They are reading and analyzing each word individually versus parsing the sentence or paragraph for meaning. So a given word might seem to be the wrong choice at first glance, but if the reviewer were to read the entire paragraph, then the author's chosen word may be seen as appropriate. Style is a much overrated, overwrought topic during reviews. This is especially important when you read documents written by multiple people, where writing style and word usage are likely to vary widely.

But whether you are inspecting (editing for punctuation and grammar) or reviewing (assessing for purpose, logic, and content), reading a document in its entirety before making revisions places you closer to how the reader is going to engage with the document. In the assessments I have done, I have seen so many reviewers miss the forest because they were busily engaged inspecting branches on trees.

15 February 2011

Editing When You Should Be Reviewing Costs Serious Money

The idealized model of document project management describes an iterative process of information planning, content specification, authorship, review, and publishing. Working to develop best-of-craft work practices aligned with this model can help organizations meet their goals efficiently and at lower costs.

Most organizations in the life sciences do not carefully consider their true costs to produce a document. When all the hidden costs associated with review are added in, the cost-per-page to produce a final version document becomes significant. If you add in opportunity costs for staff time dedicated to additional rounds of review (even if a document meets a specified deadline for the final version), then the cost-per-page skyrockets. Many organizations choose to ignore this point, which is why in my presentations on strategic review I have a slide with the Great Pyramids of Giza and the caption: "We can do anything we want as long as we have a 24/7 schedule and an expendable supply of labor."

The goal for review is to improve document communication quality—not grammatical accuracy or data integrity, which are checked via inspection. To be most effective and efficient, reviews need to be strategic and orchestrated. This means the primary focus of review by subject matter experts should be on the intended messages and arguments, testing the document to ensure it minimizes the prospects for a qualified reader to construct alternative meanings. These tasks are not accomplished if you are editing a document for grammar and style.

Our systematic analysis of review practices at numerous companies strongly suggests that while significant time and energy is expended on document review, the collective effort is generally over-sized in relation to the improvements made in the communication quality of the documents. That is, there are more people and more rounds of review than should be needed to get to a final version document. Another way to look at it is that reviews do not really move the communication quality meter very far given how much time and resources are thrown at the task.

Our analysis suggests that review performance is hampered by poorly defined expectations for what the team wants to accomplish in a document, that reviewers do not apply systematic means of analysis to their review (meaning team reviews are largely ad-hoc feeding frenzies), and that reviewers stray from attending to the messages and arguments of a document, instead attending to matters truly related to publishing standards. In other words, reviewers stop being reviewers and become editors. Editing when you should be reviewing comes with serious costs.

04 February 2011

Build Documents Following Logic of What the Reader is Trying to "Do"

I ended my previous post stating that documentation teams have to recognize their documents must be built following the underlying logic of what the reader is trying to "do" with the document. By this I mean there are different design choices for document content depending on what the reader is intending to accomplish through accessing the document.

If your reader is making decisions versus merely being informed about a body of work, then this greatly impacts document content, and it is a major difference between regulatory medical writing and the medical writing associated with the development of manuscripts.

For instance, if you are creating a summary-level regulatory submission document, the logic of what this document is to "do" for the user will dictate design decisions. A summary-level document is not intended to report results; it is intended to synthesize study findings so as to build the information base for the therapeutic activity and safety of a drug or device in various patient populations. The summary documents are meant to address issues associated with development work and candidly represent limits of current understanding. It is in the summary documents that one must argue that observed differences in the data (or lack of differences) are determined reliably and provide a low probability of alternative results upon further study. In creating summary documents, authoring teams must recognize that successful arguments need to establish warranted claims that respond to the known or theorized questions or concerns of the reader.

A second example: if you are writing a risk management plan, like a RiskMAP for FDA, then the document must be designed to clearly demonstrate the logic underlying the risk assessment and the quantitative and qualitative assignment of risk. The document must clearly lay out the strategy to monitor and characterize the nature, frequency, and severity of the risks associated with the use of a product. Ultimately the document has to characterize why the proposed plan and the applied tools are the most appropriate to minimize a health outcome risk potential on a go-forward basis.

My final example: if you are writing a clinical research protocol, then you must recognize the logic of this document is not only to demonstrate study design and when assessments will be made in or out of the clinic. It is essential the document also be seen as a vehicle to represent potential benefit in relation to risk (what an institutional review board is looking to assess when they consider the merits of a research study). Also, a protocol must clearly establish the agents and actors engaged in the conduct of the research trial and the assignment of decision-making to the actors (all users of the document need to know this information). In my review of protocols, I routinely find these documents obscure the actors (who will do what) to varying degrees of opacity and often leave the assignment of decision-making implied rather than explicit. In such instances, the document is clearly not written reflecting the underlying logic of what the document is intended to "do" in the hands of its users.

02 February 2011

Regulatory Writing Must Be "Fit for Function" Not Perfect

Documents for the medical regulatory audience do not have to be perfect. Rather, they have to be "fit for the intended function."

I suggest to you that the majority of people associated with drug and device development do not understand or appreciate this concept. I also suggest to you that the majority do not understand that you apply different document designs to different document genres. Which in turn means you have different standards for different genres. Fit for function means you do not work to one standard.

Fit for function means a document is built and judged by standards that are determined by audience and the task or role the document will take when in the hands or on the computer screen of the specified audience.

A few weeks back I was working with a development team that judges all of their work by one standard. No matter what they are writing, the team judges their handiwork by their personal standards for a research manuscript. This one standard is applied to clinical study reports, protocols, and investigator brochures. I know this to be the case because review comments are often prefaced with statements like: "Well, when I was writing manuscripts we always did ... in the Discussion Section. We need to do that in this report." Or: "I think the reviewers at the agency would find this very interesting to read. So let's expand the discussion here." And: "You have a hyphen break at the end of the line in this paragraph. We cannot have mistakes like this in our documents."

Fit for function places focus squarely on the elements that matter, that is, the underlying logic of what the document is to convey or enable the reader to do. So in the world of drug and device development, this means authoring teams have to move away from the notion that when creating documents for regulatory readers, they are either simply reporting data or writing for an audience interested in knowledge acquisition. Working to either of those standards guarantees that your documents will not be fit for function. Rather, documents must be built following the underlying logic of what the reader is trying to do with the document.

More on how we characterize the logic of document genres in my next post.

    12 January 2011

    Something to consider: using intranets as a Knowledge Management tool to support collaborative pre-writing planning

    Jakob Nielsen is a name some of you may recognize. He is a self-appointed guru on all matters regarding the usability of web sites and web-mediated work tools. You can find much of his work summarized on his web site, Useit.

    Last week Nielsen came out with his list of the top 10 best-designed business intranets for 2011. Here's the link.

    I noted with interest Nielsen's comment: "If there's anything that has been overused, abused, and hyped beyond the level of cliché, it's 'knowledge management.' Thus, it might be better to say that many of this year's winners were strong in 'managing knowledge' on their intranets."

    Managing knowledge. This is a concept I see drug and medical device teams struggle with all the time. One reason for the struggle is the slow uptake of tools that can help foster an effective work environment and change deeply ingrained cultural practices.

    I believe intranets, such as wikis, are great tools that should be deployed at the project team level. These platforms become incredibly valuable from the moment a clinical development plan is written until well after a dossier is submitted to the regulatory authorities.

    While it is true that knowledge management is not a technology issue, effort must still be spent on providing a suitable environment to facilitate knowledge capture and sharing.

    I am suggesting the use of team-specific intranets as a way to promote cultural change in an organization, both at the level of knowledge-sharing activities and in shaping broader work behaviors. In most companies where I train or consult, little has really changed since the 1960s in how people approach the planning and authoring of documents. Using an intranet, such as a wiki, can quickly get a team applying best practices in pre-writing planning.

    09 January 2011

    Minimal Time and Effort Should Be Applied to the Creation of the Clinical Study Report Synopsis



    How much time and effort should be applied to the creation of the clinical study report synopsis?


    This is another question I am asked on a regular basis and a line of discussion that repeatedly comes up when I am working with clients to help streamline work practices. I usually draw slack jaws and incredulous stares as I give my answer:


    "The amount of time should be minimal, involve no more than three people, the level of effort better be next to nothing, and the time should be no more than an hour to create and a whole lot less time to review."


    My reasoning is very simple and straightforward: "You apply time and effort to the development of a product in relation to the product's strategic value. The value of the CSR Synopsis to the regulatory reader is virtually zero."


    Think about it. The CSR Synopsis affords little utility for what reviewers are looking to accomplish when they choose to enter the framework (that is, the document) of an individual clinical study. If a regulatory reviewer wants a “snapshot” of a study, they will likely take a contextualized snapshot at a higher level of the drug submission dossier, that is, from the documents in Module 2. They do not enter the framework of the study report to get generalized or summarized information. They are at the Module 5 level, embedding themselves in a clinical study report, because they are seeking answers to narrowly defined questions.


    At McCulley/Cuppan we have queried regulatory reviewers about how they "use" a study report synopsis. Their responses support the premise I have laid out for you in the above paragraph.


    So this gets us back to the question of time and effort. Why generate a study report synopsis with every draft? Why allow the full team to look at the document? Talk about wasting time and energy.


    In our assessments of review practices at pharmaceutical and medical device companies, we see the same review pattern played out time and time again. The study report synopsis is generated with the first draft, and in the review process it consistently garners the attention of the full review team (many of whom never make it all the way through the results sections during the course of their reviews). The same thing happens with each subsequent draft.


    Given that the synopsis is but a summary of the body of work presented in the study report, it should not be generated until that body of work is completed and signed off as "good to go." The synopsis does not even warrant review; it warrants critique. Critique is a comparative read, a reading to ensure that the synopsis accurately and appropriately portrays the sum total of the key details of the study. The critique process requires at best two people and certainly no more than three, all of them subject matter experts drawn from the key clinical disciplines represented in the research study.

    04 January 2011

    Importance of language and writing style in a clinical study report

    How important are language and writing style in a clinical study report? I was recently asked this question by a medical writer working for one of my McCulley/Cuppan clients. The writer is dealing with a team that seems to obsess over every word in every draft and is looking for some help in how to address the situation.


    Here is my response to the question:


    You are asking about the lexical and syntactical elements of writing (the third element of writing is grammatical).


    Lexical pertains to the words (vocabulary) of a language. In the context of clinical research, we need to talk about several applied lexicons of scientific phraseology: one that applies broadly to science and others that apply narrowly to a specific therapeutic area. The most distinctive feature of any clinical study report is, admittedly, its specific scientific and technical prose. So language is very important in a CSR: precise use of key terms avoids lexical ambiguity (which is why I so love statisticians and their demands for careful use of language when describing statistical observations) and allows the reader to derive the intended meaning.


    My experience suggests that many people in Pharma think attention to syntactical elements (style) means they are either eliminating ambiguity or improving clarity of message. Rarely is this the case.


    You have heard me say before that style does not matter in the type of writing represented in clinical study reports submitted to regulatory authorities in the US and elsewhere.

    My position is supported by current discourse theory, which holds that, as a rule in scientific writing, meaning is largely derived from the precise use of key scientific words, not from how these words are strung together. It is the key words that create the meta-level knowledge of the report. Varying style does little to aid or impede comprehension.


    What happens is that people often chase and play around with the style of a document. Largely they are looking to manipulate an advanced set of discourse markers specific to clinical science writing, or some subset specific to a therapeutic discipline. Discourse markers are the word elements that string together the key scientific words and help signal transitions within and across sentences; they are the elements that provide for style. There are macro markers (those indicating overall organization) and micro markers (functioning as fillers, indicating links between sentences, etc.). Comprehension studies show that manipulating discourse markers, that is, messing with style, in most instances does not influence reader comprehension. It is worth noting that manipulation of macro markers does appear to have some impact on comprehension for non-native speakers of English (which is why it is worth using textual advance organizers to help with document readability).


    So the net-net is: there is little fruit to be picked from messing with style in a clinical study report. Put review focus on the use and placement of key terms.


    This is a bit of a non sequitur to the question, but it is a concept I’d like to share. To derive meaning from scientific text, readers rely on their prior knowledge and on cues provided by the key terms and data they encounter, or fail to find, in a sentence, paragraph, table, or section of a clinical study report. So what I’d really prefer to get people thinking about is the semantic elements of their documents. Semantics is fundamentally about encoding knowledge: how you as an author enable the reader to process your representation of knowledge in a meaningful way. Semantics is about how much interpretive space you provide to the reader by what you say and, equally important, by what you do not say. Of course, you cannot get to the point of thinking about semantics unless you see clinical study reports as something more than just a warehouse for data.