31 December 2012

Thoughts on the Knowledge-managing Medical Writer


I have written from time to time here and elsewhere regarding my vision for what should constitute a good working definition of the regulatory medical writing profession in the 2000s. I thought I’d end the year by sharing a few more attributes that make up my working definition. I also want to contrast this vision with my observations of what I see as the most common mind-set for the medical writing profession. I place these considerations before you to see whether they resonate or perhaps play but a very flat note. Comments and counterarguments are always appreciated.

I argue that to succeed in 2013 and the years forward, a regulatory medical writer must see their role as a knowledge manager, not as a writer—that is, not as a scribe of clinical narratives or descriptive text in various types of documents. To play off the work of Metz Bemer in her article "Technically It's All Communication: Defining the Field of Technical Communication," I suggest that many aspects of knowledge management are indeed a sub-discipline of regulatory medical writing. So in this piece I will refer throughout to the "knowledge-managing medical writer."

My observations during the past 10-12 years of many medical writers in many organizations suggest few see or operate in a role beyond "a scribe of clinical narratives or descriptive text in various types of documents." They do not act in the manner I assign to the knowledge-managing medical writer. This older vision of a medical writer worked in the past, and it may work in certain situations and with certain document genres today, but it will fall to the edges of the road as we move forward.

Over the years I have become acquainted with some individuals who really do recognize the importance of knowledge management and of representing greater value than "just a scribe." I do not use the term "scribe" derisively, but in the context of how many clinical development or regulatory submission teams and organizations treat medical writing resources—as an afterthought brought in late in the development or strategy life cycle to attend to the matter of writing up the details. Some writers may be wrongly characterized by such a model, but then again many are correctly cast. I see the situation as similar to what played out in the Irish monasteries of the 10th century: monastic scribes working in the background, arduously producing manuscripts.

I argue that as part of knowledge management, regulatory medical writers at a minimum have a role in advising teams on the effective design of documents for the highly selective professional reader at regulatory agencies. I know there are some reading this piece who fully understand the implications of what reading research shows—and shows with ample evidence, I might add—that successful research reports and submission dossiers are predicated on more than just good study results and an adequate template (though a good template does make the task of writing significantly easier). A brief digression here: I do not consider whether a submission package is or is not approved to be the only parameter of success, but I am not going to lay out my working definition of "successful documents" in this piece.

Okay, back to the point—most reading this piece will likely agree that successful documents routinely require more than just someone sitting at the computer who has an advanced degree in the life sciences, great attention to detail, good analytical skills, above-average MS Word skills, and a good command of the English prevalent in medical writing (by the way, I suggest these attributes make for a reasonable working definition of a scribe). My observations suggest that not many writers take on the role of advising their teams on effective document design, a role that I fully associate with the knowledge-managing medical writer.

I suggest "successful" clinical research reports require carefully orchestrated and well-articulated design intentions, and that regulatory submission documents demand significantly more consideration of such intentions within and across the "corpus" of Module 2 submission documents. Therefore, I argue that medical writers need to carry the "knowledge torch" regarding how to consider and then act on various design intentions in regulatory submission documents.

Now please keep in mind, by “design” I do not mean making a document look pretty or to comply with a template. These are mechanical attributes of a document. By design, I mean how one builds and shapes arguments and how one creates division and hierarchy to convey meaning to the busy professional reader in complex, technically demanding sets of data and information. These are among the semantical attributes of a document.

My observations suggest many writers retreat from such tasks. They are very comfortable staying in the background, operating in the manner of the monastic scribe rather than that of the knowledge-managing regulatory medical writer. They are most comfortable working to the model of "tell me what you want and I'll write it" and the dictate of "here—write your document just like this one because it was approved by senior management." I suggest this is an authoring approach that would seem quite comforting to an 11th-century Irish monk producing manuscripts in the monastery at Mont Saint-Michel.

So looking towards 2013 and years forward, I suggest that some of the things really good knowledge-managing regulatory medical writers will do are as follows.

  • These writers understand that the mantra of "well, this is how we have always done it" is not a meaningful metric—especially since regulatory agency staff, in public forums and private sessions, talk about how they sometimes gasp and choke their way through the data and documents in submission packages. Good regulatory medical writers clearly understand that they are writing for a decision-making reader and work to get their teams to understand the implications of errant document design decisions for this type of reader. These writers understand how to design documents to satisfy the question-based inquiry of the decision-making reader of regulatory submission documents.

  • Knowledge-managing regulatory medical writers truly understand the working definition of the term "concise" and look to influence their teams to recognize not only the importance of concise writing but its hallmarks as well. These writers know that writing style must vary by document genre and that you cannot take a "one shoe will fit all feet" approach to style and level of detail as you move across genres. Their writing is marked by brevity of statement and is free from low-value elaboration and superfluous detail.

  • Knowledge-managing medical writers recognize there is a marked difference between descriptive text and expository text and assiduously avoid redundant representation of data in textual form. These writers understand that much descriptive text at best adds bulk and at worst adds noise to a document. They look to educate their teams to recognize that much of traditional scientific writing style brings no added value to the domain of regulatory documentation, yet consumes considerable time and energy to produce and manage.


I end this piece with a working definition that I feel is a good fit for the knowledge-managing medical writer—"the knowledge-managing writer utilizes a range of strategies and practices with a team to identify, create, represent, and enable the adoption of insights and experiences that foster effective work practices and high-quality document products."

05 September 2012

Unproductive Review Practices

This is another post from our archives, but is pertinent to the document assessment work we've been doing this summer:

In "Why the Focus on Review Practices?," my colleague Jessica Mahajan highlights an observation from our McCulley/Cuppan consulting: reviewers, who are expected to work toward enhancing a document's quality and effectiveness, tend to direct much of their review effort to mechanical and stylistic elements (syntax, word choice, punctuation, grammar) at the expense of the intellectual work the document is supposed to do. One of my previous posts, "How Do We Get People to Apply Improved Work Practices?," explores ways to motivate change when change would provide significant benefits to both the individual and the organization. In turn, I have a theory about why we continually see subject matter expertise applied to the task of copy-editing during review, and why that practice is so hard to change. The theory is built around how we:
  • Learn to write.
  • Learn to review.
  • Ask for review.

How We Learn to Write
Think about how you learned to write. If your experience was like that of the kids I visit in middle and high schools, then your teachers tried to encourage you to write in a context and with a purpose. Unfortunately, they likely ended up using rubrics that are all about structure, word usage, and typography, with little regard for how well the writing fulfilled its purpose and satisfied readers' needs. A rubric I saw recently (I collect these things, and this one was typical) graded students on everything but content; as long as the writing followed the specified form, it got top marks (an A in this instance). A really interesting paper on black holes and the physics behind them (the physics may have been beyond some readers, but the writer worked hard to make the ideas accessible to a varied audience) got a B because of errors in typography. More popular are the 5 or 6 equally weighted measures called writing traits. Students are given points for:
  1. Ideas and Content
  2. Organization
  3. Voice
  4. Word Choice
  5. Sentence Fluency
  6. Conventions
Just look at this: how can the assemblage of ideas and content bear no greater value than word choice or sentence fluency? When our ideas are given so little weight (~17% here), is it any wonder that people attend to form over function?

This is how we learn to write--the texts we create are based on finished models that rarely tell us what makes them good models. Further, we are never given insight into the process of crafting and iteratively refining a text, that is, a model for what should be in place in a first-draft document versus a second draft. In most learning environments, documents are judged on how well they adhere to rules constructed for talking about how language should work.

Unfortunately, this approach does not change when we get into higher education. Some courses in technical degree programs have a writing component. But if you test out of freshman composition (where the previous description is still pretty accurate) then at best you may get one required course in technical communication. This might be taught by a creative writing student who is mostly interested in finishing their MFA and thinks that learning to "write" a well-organized memo (form) should be one of the four major projects students will prepare for the course. Because creative writing and technical writing don't have much to offer each other, right?

There are exceptions to this scenario, but unfortunately the above is probably a pretty good description of the rule. More to the point, grading is hard (especially when there are no good rules for anything other than grammar and punctuation), and most students are primarily interested in receiving top marks. So students simply want to know what they have to do to get the top mark. The model is a finished document that is good enough--in terms of content, organization, voice, word choice, sentence fluency, and conventions--to get a top mark. Likely the target given to the students addresses only five of the aforementioned six attributes. The one left out is content. So student focus and energy go into fulfilling the other five attributes.
As an informative aside, when departments ask for help in training their people (students or employees), the most frequent initial request is to "just help them with their grammar." This despite the fact that we know that when we focus on grammar, the quality of the writing, as measured by what the writing does, goes down.
Learning to write in the workplace is slightly different. Here we're given a finished document and told to 'make it look like this'. The document is complete, but it bears no annotation or associated guidance to suggest which attributes make it superior and worthy of the status of a 'model'. In a worst-case scenario, someone's internal dialog might go something like this: "I'll look pretty stupid if I ask my boss what it is about this document that makes it a good model--so I won't. I mean, it is obvious, right? And besides, I (choose all that apply: a. got A's in English, b. have a PhD, c. have written technical reports before, d. have all the data in the report) so... I must be okay, right?"

In our McCulley/Cuppan consulting, we constantly see a model used for constructing documents where new documents are based on old ones. Authors endeavor to make new reports as complete as possible before asking anyone to give them a look. I can recall several instances where authors were told to write until they had nothing else to say and then their supervisor would be ready to look at the report. This approach to writing--to model a preexisting document and to make it as complete as possible before bringing extra eyes in to help--sets up a workplace dynamic that sabotages the potential for productive change.

How We Learn to Review
How we learn to review follows the model of how we learn to write. In school, students construct papers that respond to prompts and are graded. We spend our time learning how to construct sentences that are grammatically correct, forgetting that people can get over a misspelled word or two, or a tense problem, if we have something to say. Often the only things teachers can use to distinguish one useful response to the prompt from another are the mechanical elements of a sentence. And they can't give everyone an A. That would be grade inflation, or worse!

Papers are returned to students with lots of blood (well, red ink, since lots of teachers still like those pens) identifying misspelled words, grammar errors, and organization problems. In other words, the students' work is 'assessed', but not reviewed. And I can't blame the teachers--this is what they were trained to do, and helping students learn to communicate well via the written word involves a lot of reading (not fun or easy, I promise). Identifying the mechanical problems in a document is easiest and fastest, which is a real consideration when you have thirty or more papers to read in an evening. I have a colleague teaching so many sections that she's got 115 papers to read at one sitting!
The problem is compounded by the fact that in competitive societies we're taught not to collaborate. Rare is the teacher who has students collaborating on projects or written work, though thankfully, this is changing. We learn not to share our answers with others ('cause that's cheating). What we practice in school is what we bring to the workplace, supplemented with observations and suggestions from the people who review with us, which help us construct new models. In terms of document review, we start with the models we got from our teachers: fix the typos, suggest alternative wording, and massage the format.

Since our colleagues use this model too, we stick with it. In other words, we do what is familiar. We also have to do something during the review. In the absence of more specific instructions, we have to let people know we put reasonable effort into the review exercise. After all, review is an activity, and one way to measure the extent of our activity is to count up the total number of review remarks we have left on a document. The more remarks, the better job we did as a reviewer. So we turn our attention to verifying that the numbers in the report are accurate, and we make sure to 'dot the i, cross the t, and don't forget that comma'.

We are conditioned to be reactive reviewers--we respond to what is present in a document, not what is missing. We are conditioned to operate on the concept of a finished document, no matter where the document sits in the drafting process. Even with a first draft we start at page one and work straight through the document until we're finished--that is, finished with the document, out of time, or out of energy. We see this all the time in our assessments of review practice: there is a straight-line decline in the frequency of review remarks per page as you move through the document. We see Draft 1 report synopses and summaries overloaded with review remarks even though the Discussion section in the body of the report is only 30% complete and there is no Conclusion section yet.

Through conditioning in the workplace, we have no sense of strategic review. The prevalent strategy is to simply get the review done so we can 'get back to our day job'. We often have a reverential belief that all it takes to succeed with a scientific or technical report is to just get the data right. That is, make sure the data are accurate. We are also conditioned to think that all you have to do is get the study design right--the rest does not mean too much. That is, the report is merely a repository for data. So we are conditioned to discount the value of scientific reports because constructing well-written, clear, concise, and informative documents takes time away from our 'day job' of conducting science.

In other posts we've talked about the importance of review and the huge commitment organizations have made to review. You would think that, since it is so important, more time would be spent on training people to become more effective reviewers--particularly during their professional training. Yet we don't see this. We've not found a single academic program offering credentials in technical communication or medical writing that offers a course in review (as opposed to editing)--yet the complexity and difficulty of review would certainly warrant one.

Most reviewers learn to review on the job. How do we know? We've asked thousands of people working in organizations covering a broad spectrum of disciplines, and we've read others who've asked. Further, a quick survey of the most popular books in Technical and Professional Communication and Medical Writing shows they devote little real estate to the topic of review. In a three-hundred-page text, we find less than 5% devoted to review. Yet review is certainly more than 5% of the process.

How We Ask for Review
When we analyze review practices and products for our clients we look at more than just the documents under review. We also assess how people communicate about review and the tools they use to facilitate review. Typically communication regarding the task of review is a simple request: "Please review this by such-and-such (a date)." We rarely find instructions from the author to help inform reviewers: "Please have a look at section x because I'm really having trouble explaining y." We'll post a longer description of this topic, but the point to be made here is that authors rarely help their reviewers with instructions/review requests that focus reviewers on what would help the document and authors advance their work.

Our assessment of review practices suggests that the collective review effort does little to improve a document's communication quality. It will likely improve the accuracy of the data, ensure compliance with a template, and produce sentences that are all grammatically correct. But the conveyance of messages and the logic of arguments may remain murky or even suspect. Given everything I have said up to this point, why would you expect a different outcome?

The Theory
So here is the theory: Expensive subject matter experts are reduced to copy-editing because that is what they know best (they come into the professional box with plenty of conditioning from the academy), it is familiar, it is what everybody else does, and their organization hasn't offered them a better alternative. Further, the situation won't get any better because even if (when) they find a better alternative they're too busy to change (they have their day job to do and besides they have too many documents to review to be fettered by revising ways of working), and even if they wanted to change things, the organization's leadership wouldn't buy into it.

Fortunately, much can be done to really 'move the needle on the meter' and improve individual and organizational ways of working when it comes to the task of document review. I know this to be the case from the consulting and training work we do as we've helped a number of organizations improve review practices and document quality.

14 August 2012

Much can be Done to Improve Reviewing of Research Protocols


I have been away from the blog for most of the summer. It was not an intended absence; rather, I have been caught up in a whole lot of work and personal obligations that kept me from posting here.

We have been doing several projects this summer related to the planning, writing, and reviewing of clinical research protocols. In this post I want to share some thoughts about review.

We believe that protocol reviewers need a more refined mental model for how to review drafts of their documents—how to concentrate on big-picture rhetorical concerns and set aside edits and stylistic corrections during the early rounds of review, and then attend exclusively to these elements in the final rounds.

In work with several clients, we find that reviewers confuse the roles of reviewer and editor, and they also confuse the timing of when edits should be made. By this I mean we see many reviewers operating at the sentence and word level in the initial review of the protocol concept document. The protocol concept document is where you have to get all the big-picture details ironed out regarding study design and conduct, before you sweat the details at the paragraph and sentence level. Based on our observations and interviews, we suggest that better guidance is needed to define the roles of reviewers and editors and the time points at which each applies.

We recommend that performance criteria and subsequent measurement be enacted to help review teams better understand and appreciate the need to attend to the intellectual attributes of a planned research trial. When I ask people sitting in my workshops how many protocol amendments they average per study, the response is always very similar: a collective groan followed by the retort, "we average way too many." So it seems it is time to change methods. It is at this point in the conversation with clients that I invoke the definition of insanity commonly attributed to Albert Einstein: continuing to deploy the same methods while expecting different outcomes.

Our assessment of work practices at a number of pharmaceutical companies suggests that reviewer discipline is poor: reviewers fail to participate at all, or in meaningful ways, in early draft reviews, and they extend reviews of certain protocol sections without really improving communication quality. When a protocol goes through six rounds of review, something is likely not right; when the Background section is actively reviewed in all six rounds, something is certainly not right.

We recommend that late in the protocol development process, guidance regarding the line-level review of text be enacted to ensure precision and consistency. We find huge inconsistencies and ambiguities with three heavily used modal verbs: may, should, and must. We also find that many protocols do a poor job of characterizing aspects of agency: for instance, who has responsibility for decisions or for ensuring the integrity of patient data. We also find that reviewers rarely consider the temporal frame of reference for information in a protocol. We find huge inconsistencies where, in a given paragraph, the language may start in the present tense, move to the future tense, and then shift back to the present tense.

03 July 2012

Importance of Language and Writing Style in a Clinical Study Report

Here's one of our most popular posts from the archives:


How important is language and writing style in a clinical study report?  I was recently asked this question by a medical writer working for one of my McCulley/Cuppan clients. The writer is dealing with a team that seems to obsess over every word in every draft and the writer is looking for some help in how to address the situation.


Here is my response to the question:


You are asking about the lexical and syntactical elements of writing (the third element of writing is grammatical).


Lexical pertains to the words (vocabulary) of a language. In the context of clinical research we need to talk about several applied lexicons of scientific phraseology that apply broadly to science and then narrowly to a specific therapeutic area. Admittedly, the most distinctive feature of any clinical study report is its use of specific scientific and technical prose. So language is very important in a CSR: it must avoid lexical ambiguity (this is why I so love statisticians and their demands for careful use of language when describing statistical observations) in order to allow the reader to derive the intended meaning.


My experience suggests that many people in Pharma think attention to syntactical elements (style) means they are either eliminating ambiguity or improving clarity of message. Rarely is this the case.


You have heard me say before that style does not matter in the type of writing represented in clinical study reports submitted to regulatory authorities in the US and elsewhere.

My position is supported by current discourse theory. Discourse theory states that, as a rule in scientific writing, meaning is largely derived from the precise use of key scientific words, not from how those words are strung together. It is the key words that create the meta-level knowledge of the report. Varying style does little to aid or impede comprehension.


What happens is that people often chase and play around with the style of a document. Largely they are looking to manipulate an advanced set of discourse markers specific to clinical science writing, or some subset specific to a therapeutic discipline. Discourse markers are the word elements that string together the key scientific words and help signal transitions within and across sentences. These discourse markers are the elements that provide for style. There are macro markers (those indicating overall organization) and micro markers (those functioning as fillers, indicating links between sentences, etc.). Comprehension studies show that manipulating discourse markers--that is, messing with style--in most instances does not influence reader comprehension. It is worth noting that manipulation of macro markers does appear to have some impact on comprehension for non-native speakers of English (which is why it is worth using textual advance organizers to help with document readability).


So the net-net is: there is little fruit to be picked from messing with style in a clinical study report. Put review focus on the use and placement of key terms.


This is a bit of a non sequitur to the question, but a concept I'd like to share. To derive meaning from scientific text, readers rely on their prior knowledge and on cues provided by the key terms and data they encounter, or fail to find, in a sentence, paragraph, table, or section of a clinical study report. So what I'd really prefer to get people thinking about is the semantical elements of their documents. Semantics is fundamentally about encoding knowledge and how you as an author enable the reader to process your representation of knowledge in a meaningful way. Semantics is about how much interpretive space you provide to the reader by what you say and, equally important, by what you do not say. Of course, you cannot get to the point of thinking about semantics unless you see clinical study reports as something more than just a warehouse for data.



01 May 2012

Peer Review Revisited: It Really Needs to Change

Back in June of 2010 I made a post here on our blog arguing that peer review, as currently practiced, is a largely bankrupt way to screen and validate research worthy of publication. This post extends the discussion I started there.


An important point to consider is that an increasing number of studies and journal editors are suggesting that peer review is a failure as currently practiced in the life sciences.


As food for thought consider the following:
  1. This is old data, but worthy of attention: in a survey, only 8% of the members of Sigma Xi, the Scientific Research Society, agreed that peer review works well as currently applied (Chubin and Hackett, 1990).
  2. As a tool to filter out science worthy of publishing, peer review may be blocking the flow of innovation (Horrobin, 2001). Horrobin spoke strongly when he suggested the peer review system is a non-validated charade. Here's a link to his article: http://www.nature.com/nbt/journal/v19/n12/full/nbt1201-1099.html
  3. Richard Smith's article: "Classical peer review: an empty gun" cites the Drummond Rennie quote: "If peer review was a drug it would never be allowed onto the market because we have no convincing evidence of its benefits but a lot of evidence of its flaws." (Rennie is a deputy editor of the Journal of the American Medical Association and a supporting force behind the international congresses of peer review.)
  4. In late April 2012 Carl Zimmer reported in the New York Times that, according to a study of PubMed data, the number of articles retracted from scientific journals increased from 3 in 2000 to 180 in 2009. Here is a link to his piece: "Sharp Rise in Retractions Prompts Calls for Reform"
  5. A couple more articles worth checking out
    1. "What's wrong with peer review"
    2. "Peer review is f***ed up -- let's fix it"

08 April 2012

Article Published: Missed Opportunities in the Review and Revision of Clinical Study Reports

It has been some time since the last blog post. I am not taking time off from posting. Rather, my workload and travel schedule have created a hectic environment in which I have found it difficult to carve out time to create and upload useful posts.

This post is an unabashed personal plug for an article I coauthored with my colleague Stephen Bernhardt. The paper, "Missed Opportunities in the Review and Revision of Clinical Study Reports," appears in the April issue of the Journal of Business and Technical Communication.

In the paper we look to further the understanding of review practices as applied to large and complex technical documentation. Specifically, the paper describes several case studies where we examined closely the review efforts of teams in the process of finalizing clinical study reports.  Our title suggests we see considerable room for improvement of review practices.

We see our work as an extension of other studies of review practices in professional settings. 
  • Paradis, Dobrin, and Miller (1985) first established document review as a critical site of contentious interaction in a research and development environment (Exxon). These authors contrasted document review from the opposing perspectives of managers and employees, detailing the ways that individuals often worked without shared purpose or expectations, resulting in frustrating differences of perception, feelings of resentment, and the need to substantially rework documents during successive reviews. 
  • Van der Geest and van Gemert (1997) pinpointed a general sense of frustration with review processes in several Dutch companies, noting that “both writers and reviewers find reviewing the most cumbersome stage in the process of text production, given that many parties are involved in it and that it is loaded with different expectations” (p. 445).
  • Henry (2000), working with data gathered by many interns at various professional sites, found that reviews were “fraught with second guessing” and required “interpretations of organizational culture to the ends of adequately and appropriately delivering discursive products” (p. 65).
  • And several studies (Henry, 2000; Katz, 1998; Paradis et al., 1985) discuss the activity of “document cycling,” an activity in which documents pass through multiple and sometimes conflicting reviews, as various reviewers weigh in with commentary.

A theme throughout the literature is that document review frequently brings into play conflicting or competing purposes. In addition to improving a document, review sometimes functions to evaluate worker performance (Couture & Rymer, 1991) or to discipline individuals (Henry, 2000, p. 81). Thus, interpersonal dynamics are frequently in play. Katz (1998), in particular, highlighted how review processes can function positively as a way to socialize new workers, helping them learn how to both write and work successfully within the local culture. Henry (2000) shared this concern for how interns come to understand the ways that organizations perform work.

The position we take in this paper is that changing nonproductive, conditioned, inefficient practices is NOT an easy matter, or companies would have already done so. We suggest that recognizing nonproductive review practices and understanding the causes for such practices should be an object of focus for more organizations. We understand that collaborating to develop complex documents with sound arguments involves difficult cognitive and social practices. But if a company establishes the goal of producing quality documentation through efficient and effective review practices, it will find that it must do a lot of work to counter the ingrained tendencies of review teams to focus on low-level stylistic edits as opposed to high-level rhetorical concerns.

27 February 2012

Inefficient Meeting Practices Cost Money

As I read the meeting manifesto by Al Pittampalli, Read This Before Our Next Meeting (now available to read for free from Amazon), I noticed many similarities between what Pittampalli writes and what Greg has been advising clients for years. Mainly, that there is a high cost to inefficient and ineffective work practices.

Anyone working in corporate America will be able to relate to the book, and, hopefully, learn from it. For us at McCulley/Cuppan, the focus is, as always, on how those in biomedical R&D can improve work practices.

Below are a few highlights from the book that relate to what we consider best practices (bolded text is quoted from the book).


The Modern Meeting moves fast and ends on schedule.
"Traditional meetings seem to go on forever, with no end in sight." How often have you felt this way? And how often do these meetings accomplish the intended purpose of the meeting, if there was a clearly stated purpose?

What we've seen over the years is that review meetings, those scheduled to discuss a document in person, last for hours, going from page 1 to page n, with much of the time wasted on word choice and often leaving the author feeling overwhelmed. When the time limit is up on this type of meeting, more meetings are scheduled that follow the same pattern.

The Modern Meeting limits the number of attendees.
More often than not, executives are involved in these document review meetings, even in meetings focused on editing rather than strategic review, along with anyone who ever had anything to do with the project. How many man-hours are wasted sitting in a meeting instead of working on discovery or development? As Greg mentioned in his post "Editing When You Should be Reviewing Costs Serious Money," "when all the hidden costs associated with review are added in, the cost-per-page to produce a final version document becomes significant."

Meetings should be scheduled for a set amount of time, a time limit that is short enough to prevent repetitive attacks on one word or phrase (nitpicking), but long enough to actually allow for the goal of the meeting to be accomplished. Only the people who are directly responsible for a decision or must act on that decision should be in the meeting. Just because a person provided a line of text to the document does not mean that they should attend the meeting. If a person feels they must be included, provide them with a meeting outline and/or a meeting recap.

The Modern Meeting rejects the unprepared. 
Often meetings are scheduled to review a draft of a document, but the meeting ends with only a few pages of that 100-page document having been marked up. Then more meetings are scheduled and the cycle continues.

Meetings should have a clearly defined purpose. If a meeting has a clearly declared purpose, the leader of the meeting should be able to provide a list of items to accomplish and the time allotted for each action item. This helps the scheduler of the meeting weed out people whose presence is unnecessary. It also ensures everyone is prepared for the meeting. According to Pittampalli, if you aren't prepared, you shouldn't attend.

The Modern Meeting produces committed action plans.
At the end of a well-organized meeting, there should be a committed action plan, not just a deadline. Too often with our clients we've seen meetings that produce reams of notes for the authors of the document with a deadline that keeps moving as more notes are piled into the author's inbox.

If meetings, and work practices in general, are efficient and focused on the contributions of only those directly involved, there is less opportunity for circular arguments and nitpicking, and fewer contradictory comments for the author to wade through. Plus there is more time to actually work.


For more tips on improving meetings, follow Al Pittampalli on Twitter at @Pittampalli or view his blog.

07 February 2012

More on What is a Document?

So what is a document?

In response to my last blog post, I have been asked by several individuals—"so then what is a document?"

My short answer—"I do not know for sure."

Now for the long answer.

The widely accepted definition of a document is a textual record. This definition served us well in the past. But now, with digital records, semiotics, and information retrieval tools, I am not sure the definition meets the needs of how we communicate in 2012.

As early as the 1930s, Paul Otlet, an information scientist of considerable renown, suggested that the definition of documents also include digital images and even three-dimensional objects. I am not prepared to toss all the elements Otlet describes into the mix. But I am prepared to suggest that documents are organized physical evidence, and as such the organization transcends the classic definition of a document, for this vehicle is a less relevant communication medium in 2012 than it was in 1982. I do not have a preferred term (I wish I did), but I do suggest we attempt to move away from the term document, as it suggests a domain for organized physical evidence that does not match the reality of the digital age.

Suzanne Briet suggested a definition some time ago that a document is evidence in support of a fact. I rather like this notion. She makes the point that documents should not be viewed as being concerned with texts, but with access to the evidence. I suggest this is the essence of all regulatory writing that I talk about often in this Blog. If one considers the models in place for electronic drug submissions, thinking in the classic terms of 8.5 x 11 and A4 is really not very useful.

Rather, it is better to be thinking in terms of taxonomies of information or perhaps even semiotics. Semiotics is the study of signs, indication, designation, signification, and communication, and it is closely related to the field of linguistics. I look at semiotics as a valid attribute for this discussion because the life sciences are driven by numbers, and what are numbers but signs and the significance of those signs?

Then there is Michael Buckland who talks about how a key characteristic of “information-as-knowledge” is that it is intangible: one cannot touch it or measure it in any direct way. Knowledge, belief, and opinion are personal, subjective, and conceptual. Therefore, to communicate them, they have to be expressed, described, or represented in some physical way, as a signal or communication.

What we are really talking about happening in regulatory submission packages is the conveyance of knowledge. This conveyance often transcends the boundaries of a traditional text, that is, a document as it is generally defined. The Briet notion of "evidence in support of a fact" works well as a definition of a document, especially if we change the quote to read "evidence in support of a claim."


03 February 2012

Need a New Mental Model for Regulatory Documents

Wow... I have been away from this blog a whole lot longer than intended. Those competing interests... as you all understand... are the bane of my existence.

For the past couple months I have been looking closely at how people think about the “vehicles” used to communicate with regulatory health agencies. I am using the word vehicle here because I am trying to divorce myself from the notion of document, in particular, the notion of “a document.” In the modern times of on-screen reading and linked files, what is a document anyhow? To me it is the entire corpus somebody may be able to access, not just one slice of that body.

In my consulting/training interactions at McCulley/Cuppan, I find that the majority of people I interface with in the client setting operate within the mind-set of individual documents (some have even smaller boundaries and operate by document sections) that stand alone with well-defined boundaries (pages and page counts).

I want to argue that the vehicle of communication for regulatory submissions is not a document. It is the full and complete dossier submitted by the sponsor. Documents are just placeholders where I go to get a piece or pieces of information that help answer my questions. I want to argue that the regulatory reader does not see a dossier as a set of documents. Rather, they see a dossier as a corpus of information that they will use to answer questions and make decisions. The contents are just vehicles they peruse to get what they want.

Applying my working model means you stop seeing documents as "stand alone" and stop saying "this document has to tell a story." I'd also like you to stop using the word document; that word has baggage I am trying to jettison. Instead I want people to view their work at least as "modules" and preferably as vehicles that help a user answer very specific questions. Bottom line: a research report is just a part of the constellation that tells the stories. Note the plural, as we have many stories to tell in a dossier, not just one.

Applying my working model means you stop seeing your work as being like a novella, something to be read from page 1 to page n. Applying my working model means you see your body of work as something that is read in a coordinated manner, governed by narrowly defined rules of inclusion and exclusion. Applying my model means you stop seeing pages and sections and you start seeing concepts and topics.

My argument is that the selective professional reader at regulatory health agencies cares little about documents, sections, pages, and data tables. I am suggesting such readers care solely about making informed decisions and about where in the submission dossier they can find vehicles that answer their concept and topic questions.