30 December 2009

Designing Documents for the Regulatory Reader

Over the years I have heard numerous presentations and read articles by regulators at FDA suggesting that the documents submitted by pharmaceutical and medical device companies are not designed to support the reader tasks of the regulatory reviewer. Considering these comments, I’d suggest the core problem is that these documents are written from the perspective of the writers, with little consideration given to what a reader is trying “to do” with them.

Here are some generalizations about regulatory readers and corresponding document design actions:
  1. Regulatory readers work under a spotlight, in agencies subject to governmental scrutiny and high levels of public criticism. Therefore, it is important to write discussions that bring into focus the troublesome study issues of the development program. Highlight the issues as questions the regulators are likely to ask; provide responses to the questions and appropriate rationales for the responses; and then explicitly point to the corresponding data that underlie or support your response.

  2. Regulatory readers may be in the same broad field as the industry scientists who developed the drug or device, but they likely will not have the same level of specialization. Therefore, explain key decisions regarding development programs, study designs and methods of analysis. It is important to provide scientific and regulatory context to help support scientific expectations and reasoning.

  3. Each regulatory reader brings specialized expertise to a team evaluation, where multiple reviewers work with different sections of the marketing application for a drug or a device and arrive at various perspectives on the filing. Therefore, authors must assume global use of their documents, with multiple readers taking multiple perspectives. Authors must provide entry context, that is, orient readers with the purpose and main messages at the beginning of each main section.

  4. Regulatory readers are swimming in a sea of documentation, so make it easy for them to navigate your document. Carefully consider how discussions will be sequenced within and across sections of reports as well as summary documents. Assume your regulatory reader applies a coordinated review to your document. Design pages for easy skimming and scanning, using informative headers and deep subsection structuring.

  5. Each regulatory reader has individual ways of working—individualized practices, work habits, and standards that have evolved as they have become expert reviewers. A common point is that they all read “against the text” by looking for weaknesses in the conduct of science, logic of arguments, and unsupported positions. This means authors must be aware that the regulatory reader is largely looking for what is not there in the document. It is therefore important to design documents assuming multiple points of entry for the reader. Critical information may need to be summarized in several locations to ensure the reader does not miss critical arguments. Repetition of information must be carefully managed so as not to become redundant.

Originally published on our Knowledge Management blog

01 October 2009

Make workplace training effective

I’ve been facilitating quite a few workshops lately related to the concept of strategic review. At each session numerous people have commented to me that they found the workshop to be among the best training experiences of their professional lives. One person commented, “too bad the rest of the training I experience cannot be as practical and applicable to my work world.” I’ve heard comments like this time and again over the years. It is unfortunate that so much money and time is invested in “training” that yields little to nothing in terms of outcomes, an outcome readily avoidable when you design training around how adults actually learn.

Research shows that adults in the workplace learn best in interactive, problem-based, applied situations that incorporate learning through simulations, case studies, and participant presentations. Adult learners need to see that professional development and their day-to-day activities are related and relevant. New skills and concepts must demonstrate practical value and allow people to alter established work practices. In our training sessions we also center discussions on client documents, workplace situations, and real-life case studies. Adults want to be the origin of their own learning and will resist learning activities they believe are an attack on their competence. Thus, professional development needs to give participants some control over the what, who, how, why, when, and where of their learning.


Originally published on our Knowledge Management blog

26 August 2009

What’s Wrong with PowerPoint as a Document Authoring Tool?

In our McCulley/Cuppan consulting work we recently had a new client invite us to work with an authoring team on applying best practice to the planning, writing, and reviewing of a regulatory submission document. The document was going to be a significant piece of work, requiring over 500 pages and involving multiple authors across several scientific disciplines.

At the first meeting, the project leader announced that she wanted to continue using PowerPoint (PPT) as the document planning tool. Her reasoning was in part that she and other team members had already invested considerable time and effort in generating a 540+ slide deck representing the data and messages to go into the regulatory document, and that PPT was a very familiar tool from extensive use in developing presentations.

I have to say we were mildly surprised by the demand to use PPT as the primary tool to plan and outline such a large, complex document. We have encountered other organizations using PPT for document planning purposes, but never on such a large scale. On the surface, the choice of PPT as the tool to produce initial draft documents seems reasonable. It is familiar to many, provides an authoring environment that produces output that can appear on screen as an outline, can be commented on in oral or remote review, and can be easily augmented and updated. All of these apparent benefits would support an argument for using PPT as an outlining tool to plan any and all documents. However, the use of this tool does not readily scale to developing large, complex technical documents.

Christine Haas, Karen Schriver, Thomas Huckin, Edward Tufte, and others tell us much about how readers interact with and read texts. From this collective body of work we have learned things that help us produce texts efficiently and in a way readers perceive as very high quality. The prime method is to use tools that enable you to design and review texts in the same way you expect your readers to engage in reading and analyzing them.

Successful collaborative authoring is significantly rooted in careful and thorough front-end planning. The choice of authoring tools is among the critical aspects of the document planning process, as tool choices impact (enabling or constraining) every other aspect of the planning and documentation processes. As authors and managers of authors, it is incumbent upon all of us to choose tools that accommodate our desired set of outcomes. Authors and managers must choose authoring tools that accommodate not only themselves and their ways of working, but others as well.

It is our position that use of PPT for document planning negatively impacts all potential collaborative authoring and review outcomes. Our claim assumes that the goal of the work is to generate an effective document, economically produced, that meets or exceeds end-user expectations.

I have outlined here the key advantages and disadvantages of using PPT to plan and facilitate documentation of a multi-year, complex pharmaceutical development program.

PPT Advantages
  • PPT use is habituated; it’s familiar.
PPT Disadvantages
  • presentations constrain data reporting rather than facilitate collaborative/interpretive processes (see my previous blog post on PowerPoint presentations).
  • format creates/maintains a huge but fragmented vision of the process and product, impacting output (see Schriver and Tufte).
  • PPT does not scale well to large documents: it limits information organization, and searching is cumbersome, impacting review and the authors' ability to migrate material from PPT into a document-based format.
  • PPT presentations do not accommodate major revisions/reorganization, impacting logic, content, and organization of the ultimate document product (see Schriver).
  • the output decreases in clarity as the number of slides increases, impacting author/reviewer interpretation (see Tufte).
  • PPT output is not a document; conversion to MS Word format is inefficient, time-consuming, and expensive.
The point seems clear: in choosing PPT as an authoring tool, familiarity wins, despite the fact that, from a business, work, or quality perspective, the disadvantages of PPT clearly outweigh this single advantage. Eschewing proven document authoring platforms for the familiar may have unintended consequences and bear a high tariff.

Originally published on our Knowledge Management blog

24 August 2009

How Do You Know When Good is Good Enough?

How do you know when good is good enough for any document you may produce?

I ask this question in every workshop I facilitate. Generally the response is a head nod followed by the comment "Yes, that is the question... I wish I had the answer." There is the rub. We rarely sit back and consider what attributes need to be in place to have a high quality communication product.

Often we will work on a document until we run out of time (I am convinced the only marker used by the majority of people authoring documents in the pharmaceutical or medical device world is time). We do this because we have not defined what document quality "is."

Defining document communication quality means developing expectations or standards of quality. Standards can be applied at the level of an individual, team, or an organization. Defined standards or definitions of quality are prerequisites for measuring quality. If standards don’t exist, they must be designed.

Standards are explicit statements of expected quality. They may apply to writing and reviewing practice as well as to the document product. In terms of writing, document standards set expectations for how a particular document will convey to the user what is to be known or done. In essence, the standards establish the parameters that ensure a document achieves the desired results.

Originally published on our Knowledge Management blog

02 August 2009

The McCulley/Cuppan Standards Development Process We Use with Our Clients

As I mentioned in a previous post, in our McCulley/Cuppan consulting work we find the prevalent standards used to determine the “success” of a document are largely driven by simple measures of accuracy, plus a passel of “home-brewed” document characteristics that reflect idiosyncratic ideas about what matters to the reader.

When you have 10 people reviewing a document, you will end up with at least 12 opinions about its quality (the incongruous number is intentional, as a reviewer sometimes offers more than one opinion, and those opinions often conflict) and ways of describing quality that are all over the map. People use different terms to describe quality, and even if they use the same term, it is highly unlikely they will use the same definition for it. So the first problem faced in the review process is the vocabulary used to describe quality attributes in a document.

When writers and reviewers compose or edit text, they continually make decisions that concern semantics—the meaning their words convey—and syntax—the way the words are arranged and other structural elements of the document. However, writers and reviewers often base these decisions on assumptions that have not been tested with technically oriented adult readers or with complex, data-rich, technical documents. Worse yet, many assumptions have never been tested at all to determine their validity. Thus, there are actions ordered by writers and reviewers that may not in fact have the expected effects on a reader's performance (we know this is certainly true with one very important reading audience for pharmaceutical and medical device companies—the regulatory agency, like FDA). So the second problem faced in the review process is to understand what document elements have a meaningful impact on semantics and where to focus time and attention on the syntactical elements of a document.

The first thing we do with a client is examine the terms used formally (such as in guidance documents) and informally (such as in review comments on documents) to describe quality. This gives us a sense of how the organization views quality and how sophisticated it may be in trying to create a common platform for describing document quality.

The second thing we do is to provide clients with the terms McCulley/Cuppan uses to describe document quality and why the concepts underlying these terms are extremely important to help determine document communication quality. A very important consideration is that the terms should speak the user’s language, with words, phrases and concepts familiar to the user, rather than system-oriented terms.

We spend a huge amount of time talking about semantics. The term "semantics" refers to the study of how language conveys meaning. Using a broad definition of semantics, we help our clients learn how to focus on different features of a document: word choice; the position of information in sections, in paragraphs, and in the document as a whole; idea importance; and the visual representation of data.

Then we work with a client team to create and vet working definitions for the various quality standards.

We then roll out the standards in a workshop setting and show people how the standards are applied to the types of documents they have and will produce.


Originally published on our Knowledge Management blog

16 July 2009

How Do You Measure Communication Quality?

One of the truisms we see in our McCulley/Cuppan consulting work is that rounds of document review tend to go until the point when the document must be sent somewhere. That's why we say that in the pharmaceutical industry, the opportunities for making changes to a document are virtually limitless. The problem driving this situation is that most people involved with the authoring and reviewing process do not have good markers to inform them of the overall communication quality of a document. So without good markers they are left to utilize really poor ones: grammatical soundness; how many people have reviewed the document; how many rounds of review; and how many comments were leveled on the text and data in the document. Unfortunately, these markers have little correlation (in the case of grammatical soundness) and, for the other three, no correlation whatsoever with the communication quality of a document.

To paraphrase Steve Jong in his paper You Get What You Measure—So Measure Quality: "if you don't measure it, you'll never get it." This is so true of document communication quality. In order to measure communication quality you have to employ meaningful markers. We find our clients typically employ only two markers that are useful: accuracy and compliance. Unfortunately, neither of these does much to measure the quality of argument, soundness of logic, or overall usability of a document for the end-user. There are some useful markers to consider for measuring these document attributes. More on these markers in my next post.


Originally published on our Knowledge Management blog

11 June 2009

Unproductive Review Practices: Why They're Still Around Even Though People Know Better

In "Why the Focus on Review Practices?," my colleague Jessica Mahajan highlights the observation made in our McCulley/Cuppan consulting that reviewers, who are expected to work toward enhancing document quality to improve its effectiveness, tend to direct much of their review effort to mechanical and stylistic document elements (syntax, word choice, punctuation, grammar) at the expense of the intellectual work the document is supposed to do. One of my previous posts "How Do We Get People to Apply Improved Work Practices?" explores ways to motivate change when change would provide significant benefits to both individual and organization. In turn, I have a theory about why we continually see subject matter expertise for review applied to the task of copy-editing, and why that practice is so hard to change. The theory is built around how we:
  • Learn to write.
  • Learn to review.
  • Ask for review.

How We Learn to Write
Think about how you learned to write. If your experience was like that of kids I visit in middle and high schools, then your teachers tried to encourage you to write in a context and with a purpose. Unfortunately, they likely ended up using rubrics that are all about structure, word usage, and typography, rubrics with little regard for how well writing fulfilled its purpose and satisfied readers' needs. A rubric I saw recently (I collect these things, and this one was typical) graded students on everything but content, and as long as the writing followed the specified form it got top marks (an A in this instance). A really interesting paper on black holes and the physics behind them (which may have been beyond some readers, but worked really hard to make the ideas accessible to a varied audience) got a B because of errors in typography. More popular are the 5 or 6 equally weighted measures called writing traits. Students are given points for:
  1. Ideas and Content
  2. Organization
  3. Voice
  4. Word Choice
  5. Sentence Fluency
  6. Conventions
Just look at this: how can the assemblage of ideas and content bear value no greater than word choice and sentence fluency? When our ideas are given so little weight (~17% here), is it any wonder that people attend to form over function?

This is how we learn to write--texts we create are based on finished models that rarely tell us what makes them good models. Further, we are never given insight into the process of crafting and iteratively refining a text, that is, a model for what should be in place in a first-draft document versus a second draft. In most learning environments, documents are judged based on how well they adhere to rules constructed for talking about how language should work.

Unfortunately, this approach does not change when we get into higher education. Some courses in technical degree programs have a writing component. But if you test out of freshman composition (where the previous description is still pretty accurate) then at best you may get one required course in technical communication. This might be taught by a creative writing student who is mostly interested in finishing their MFA and thinks that learning to "write" a well-organized memo (form) should be one of the four major projects students will prepare for the course. Because creative writing and technical writing don't have much to offer each other, right?

There are exceptions to this scenario, but unfortunately the above is probably a pretty good description of the rule. More to the point, grading is hard (especially when there are not good rules for anything other than grammar and punctuation), and most students are primarily interested in receiving top marks. So students simply want to know what they have to do to get the top mark. The model is a finished document that is good enough--in terms of content, organization, voice, word choice, sentence fluency, and conventions--to get a top mark. Likely the target given to the students addresses only five of the aforementioned six attributes. The one left out is content. So the student focus and energy goes into fulfilling these five attributes.
As an informative aside, when departments ask for help in training their people (students or employees), the most frequent initial request is to "just help them with their grammar". This despite the fact that we know that when we focus on grammar, the quality of writing, measured by what the writing does, goes down.
Learning to write in the workplace is slightly different. Here we're given a finished document and told to 'make it look like this'. The document is complete, but bears no annotation or associated guidance to suggest what attributes make it superior and worthy of the status of a 'model'. In a worst case scenario, someone's internal dialog might go something like this: "I'll look pretty stupid if I ask my boss what it is about this document that makes it a good model--so I won't. I mean, it is obvious, right? And besides, I (choose all that apply: a. got A's in English, b. have a PhD, c. have written technical reports before, d. have all the data in the report) so... I must be okay, right?"

In our McCulley/Cuppan consulting, we constantly see a model used for constructing documents where new documents are based on old ones. Authors endeavor to make new reports as complete as possible before asking anyone to give them a look. I can recall several instances where authors were told to write until they had nothing else to say and then their supervisor would be ready to look at the report. This approach to writing--to model a preexisting document and to make it as complete as possible before bringing extra eyes in to help--sets up a workplace dynamic that sabotages the potential for productive change.

How We Learn to Review
How we learn to review follows the model of how we learn to write. In school, students construct papers that respond to prompts and are graded. We spend our time learning how to construct sentences that are grammatically correct, forgetting that people can get over a misspelled word or two or a tense problem if we have something to say. Often the only thing teachers can use to distinguish one useful response to the prompt from another is the mechanical elements of a sentence. And they can't give everyone an A. That would be grade inflation, or worse!

Papers are returned to students with lots of blood (well, red ink, since lots of teachers still like those pens) identifying misspelled words, grammar errors, and organization problems. In other words, the student's work is 'assessed', but not reviewed. And I can't blame the teachers--this is what they were trained to do, and helping students effectively learn to communicate well via the written word involves a lot of reading (not fun or easy, I promise). Identifying the mechanical problems in a document is easiest and fastest, which is a consideration when you have thirty or more papers to read in an evening. I have a colleague teaching so many sections that she's got 115 papers to read at one sitting!
The problem is compounded by the fact that in competitive societies we're taught not to collaborate. Rare is the teacher who has students collaborating on projects or written work, though thankfully, this is changing. We learn not to share our answers with others ('cause that's cheating). What we practice in school is what we bring to the workplace, supplemented with observations and suggestions from people who review with us and help us construct new models. In terms of document review, we start with the models we got from our teachers: fix the typos, suggest alternative wording, and massage the format.

Since our colleagues use this model too, we stick with it. In other words, we do what is familiar. We also have to do something during the review. In the absence of more specific instructions we have to let people know we put reasonable effort into the review exercise. After all, review is an activity. One way to measure the extent of our activity is to count up the total number of review remarks we have left on a document. The more remarks, the better job we did as a reviewer. So we turn our attention to verifying that the numbers in the report are accurate and make sure to 'dot the i, cross the t, and not forget that comma'.

We are conditioned to be reactive reviewers--we respond to what is present in a document, not what is missing. We are conditioned to operate on the concept of a finished document, no matter where the document sits in the drafting process. Even with a first draft we start at page one and work straight through the document until we're finished--that is, finished with the document, out of time, or out of energy. We see this all the time in our assessment of review practice. There is a straight-line decline in the frequency of review remarks per page as you move through a document. We see Draft 1 report synopses and summaries overloaded with review remarks even though, in the body of the report, the Discussion section is only 30% complete and there is no Conclusion section yet.

Through conditioning in the workplace, we have no sense of strategic review. The prevalent strategy is to simply get the review done so we can 'get back to our day job'. We often have a reverential belief that all it takes to succeed with a scientific or technical report is to get the data right--that is, make sure the data are accurate. We are also conditioned to think that all you have to do is get the study design right and the rest does not matter much; that is, the report is merely a repository for data. So we are conditioned to discount the value of scientific reports, because constructing well-written, clear, concise, and informative documents takes time away from our 'day job' of conducting science.

In other posts we've talked about the importance of review and the huge commitment organizations have made to review. You would think that, since it is so important, more time would be spent on training people to become more effective reviewers--particularly during their professional training. Yet we don't see this. We've not found a single academic program offering credentials in technical communication or medical writing that offers a course in review (as opposed to editing)--yet the complexity and difficulty of review would certainly warrant one.

Most reviewers learn to review on the job. How do we know? We've asked thousands of people working in various organizations covering a broad spectrum of disciplines, and we've read others who've asked. Further, a quick survey of the most popular books in Technical and Professional Communication and Medical Writing shows they devote little real estate to the topic of review. In a three-hundred-page text, we find less than 5% devoted to review. Yet review is certainly more than 5% of the process.

How We Ask for Review
When we analyze review practices and products for our clients we look at more than just the documents under review. We also assess how people communicate about review and the tools they use to facilitate review. Typically communication regarding the task of review is a simple request: "Please review this by such-and-such (a date)." We rarely find instructions from the author to help inform reviewers: "Please have a look at section x because I'm really having trouble explaining y." We'll post a longer description of this topic, but the point to be made here is that authors rarely help their reviewers with instructions/review requests that focus reviewers on what would help the document and authors advance their work.

Our assessment of review practices suggests that the collective review effort does little to improve a document's communication quality. It will likely improve the accuracy of the data and compliance with a template, and ensure the sentences are all grammatically correct. But the conveyance of messages and the logic of arguments may remain murky or even suspect. Given everything I have said up to this point, why would you expect a different outcome?

The Theory
So here is the theory: Expensive subject matter experts are reduced to copy-editing because that is what they know best (they come into the professional box with plenty of conditioning from the academy), it is familiar, it is what everybody else does, and their organization hasn't offered them a better alternative. Further, the situation won't get any better because even if (when) they find a better alternative they're too busy to change (they have their day job to do and besides they have too many documents to review to be fettered by revising ways of working), and even if they wanted to change things, the organization's leadership wouldn't buy into it.

Fortunately, much can be done to really 'move the needle on the meter' and improve individual and organizational ways of working when it comes to the task of document review. I know this to be the case from the consulting and training work we do as we've helped a number of organizations improve review practices and document quality.


Originally published on our Knowledge Management blog

06 June 2009

How Do We Get People to Apply Improved Work Practices?

I am always thinking about how we can get people to really apply improved work practices. In our work, people readily accept the ideas and principles we share regarding knowledge elicitation and document development, but then I see them routinely retreat to old ways of working. Lots can be done to help counter these tendencies.

A couple of ideas come from a great post I read this morning by Elizabeth Harrin on the blog Project Management Tips. Her post addresses the very interesting question of "how do you make lessons-learned stick?" Two points in her article stand out for me:
  1. Do not wait until the end of the project to review lessons learned.
  2. Make it difficult for people to do things the old way.
Great concepts, and unfortunately concepts I rarely see implemented.
We get involved in client projects that carry on for months or years--frankly, it is too late to wait until the project is over to collect lessons learned. I will endeavor to apply Point One in all of my project work with clients. I think we take this approach at McCulley/Cuppan on projects, but not in a consistently applied, formal way.

Point Two is a really interesting concept shared by Ms. Harrin, but one that takes steely determination and solid support from the highest levels of the organization. In my work, I find the underlying issue behind this point to be that people just hate leaving their comfort zones, even if they recognize their personal ways of working are suboptimal. Point Two is all about making a retreat to the recesses of comfort zones an uncomfortable process. I recall a number of years ago we had a client who wanted the organization to stop printing and filing paper versions of reports and memos. So they did two things: took away the personal file cabinets and limited access to departmental printers. Initially the situation was like being aboard the famous British naval vessel the Bounty, but most quickly learned to reinvent their personal ways of working; those who could not tolerate the demanded change in habits soon left the organization.

These two points connect back to another important concept I picked up from a post by Wendy Wickham on the blog In the Middle of the Curve. She suggests that business training typically is centered on developing knowledge (she refers to it as brainpower) and takes little time to actually help people understand how to manage it. This is so true. It is one thing to intellectualize ideas and concepts, but the challenge is to actually apply them. Pausing mid-stream in a project can help, and changing work environments to prevent a retreat to old habits can too (though this approach comes pre-loaded with the potential for encouraging bad morale and other social issues that can sink a project real fast).


Originally published on our Knowledge Management blog

18 May 2009

Why the Focus on Review Practices?

Recently, I was involved in an interesting project at a Top-Ten pharmaceutical company. The project entailed assessing the prevalent review practices of people working within one of the R&D groups. I examined the complete review record (from first draft to final draft) for various research reports produced within this group. The assessment involved a quantitative and a qualitative analysis of review performance. Following are some of my thoughts and observations.

As anyone who’s familiar with this blog knows, improving document review practices is of great concern to us at McCulley/Cuppan. Why? Why do we, and our clients, keep coming back to the topic of review performance? The following observations on a recent consulting project provide some insight as to why review is, or needs to be, a central focus for improving knowledge propagation and dissemination.

In this project we analyzed the effectiveness of a team in moving from conceptualizing to finalizing a document. We did this through an extensive analysis of their review commentary generated through different stages of document development. The findings were consistent with what we’ve seen over the past years of assessing review practices for other clients: some good, some bad, and some ugly.

Bottom-line--we found considerable room for improvement.

Following are some examples of what happens when resources and tools are misapplied during the review process.

Senior Management as Spell-Checker? On this project, looking at four business-critical documents, we found that throughout multiple drafts of each document (even up through the final draft) there were edits for word choice, punctuation, verb tense, and spelling made by senior management, including the group vice president. Let me repeat that--the vice president of the research group focused on making spelling edits. Why is senior management focusing on basic edits to structure? That is one expensive copy editor. Should a senior official in a group be bogged down at the line level making edits? Is that the best use of their time, talent, and insights? I think not. If that is their focus, then who is responsible for keeping the arguments presented in the documents strategic and logical? This is a common practice; perhaps there is some thought that tweaking grammar improves the rhetorical and semantic structure of a document. Rather, I think it is merely that these are easy elements to fix, versus considering how well a document fulfills the intended logic and strategy.

Simultaneous review. What happens when you send a document to multiple people at the same time with the same review instructions (which are often merely "please review")? You get massive duplication of edits--to the tune of hundreds of same or similar edits per document. Then on top of that, the authors of these four documents had to deal with a variety of syntactical or lexical edits (structure and word choice) made to the same piece of text, but with slight variations. Whose edit do you choose? A common practice we find is that the edit made by the individual with the higher pay grade tends to trump all other recommendations.

Chaos reigns supreme. When a document is reviewed by upwards of 20 people through multiple drafts (and I mean multiple--like 5-8 rounds of review!) and they receive little guidance or control over what they may and may not do in the review process, then chaos often reigns supreme. We find that work is constantly revisited, with everyone making continuous edits throughout the document--this is why we say the opportunities to revise a document are virtually limitless. A case in point: we looked at one document that moved through eight rounds of review. Yes, you read that correctly--eight rounds of review. In tracking review comments for just one section of this document (yes, the following numbers reflect review comments for only one of the sections), we found the following review performance:

  • Draft 1: 14 comments
  • Draft 2: 55 comments, 119 edits
  • Draft 3: 97 comments, 765 edits
  • Draft 4: 42 comments, 578 edits
  • Draft 5: 37 comments, 423 edits
  • Draft 6: 15 comments, 98 edits
  • Draft 7: 37 comments, 153 edits
  • Draft 8: 99 comments, 272 edits

Clearly on this project, the team had problems establishing what they wanted to accomplish within this particular report section and determining when good is good enough.
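
Tallying a review record like this is easy to automate. Below is a minimal sketch in Python of how such a tally might be computed from a review log; the record format is my own hypothetical illustration, not actual McCulley/Cuppan tooling.

```python
from collections import Counter

# Hypothetical review log: one record per remark, tagged with the draft
# round it was made on and whether it was a comment or an in-text edit.
remarks = [
    {"draft": 1, "kind": "comment"},
    {"draft": 2, "kind": "comment"},
    {"draft": 2, "kind": "edit"},
    # ... one record per remark, across all rounds of review
]

def tally(remarks):
    """Count comments and edits per draft to expose trends across rounds."""
    counts = Counter((r["draft"], r["kind"]) for r in remarks)
    for draft in sorted({r["draft"] for r in remarks}):
        print(f"Draft {draft}: {counts[(draft, 'comment')]} comments, "
              f"{counts[(draft, 'edit')]} edits")

tally(remarks)
```

Run against a full record, a tally like this makes the patterns visible at a glance: the straight-line decline in remarks per page, or rounds of review that never converge.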

I know some readers of this post may say--"oh my gosh, that kind of performance would never happen with our document reviews." Keep in mind, I mentioned at the start of this post that such outcomes are all too common.


Originally published on our Knowledge Management blog

09 May 2009

Knowledge Management and Negotiating Meaning in Technical and Scientific Reports

Came across this blog posting by Steve Barth on the topic of working definitions for the term Knowledge Management. Take a look.

Barth talks about the list of working definitions for knowledge management compiled by Ray Sims. I absolutely agree with Barth that it is much more effective to have multiple definitions for a concept than just a single idea. When Philip and I talk about knowledge management, our discussions often begin with the qualifier "it depends on..."
A quote in Barth's piece I want to share with you here:
I prefer multiple definitions because clarity and agreement cannot be assumed. Meaning must be negotiated and confirmed. Even if it's only a temporary agreement or working definition for the task at hand, that represents a position triangulated from the multiple points of view of all participants. You'll have a much better commitment to success if consensus was earned rather than enforced.
Meaning must be negotiated and confirmed. This is an important concept not just for developing a working definition for a term like knowledge management; it is also an approach critical to the conveyance of knowledge in scientific and technical reports. We see the process of negotiated meaning played out, sometimes really well and other times quite poorly, in our daily work in the pharmaceutical industry. Those who buy into the notion that technical and research reports do more than just report data will endeavor to ensure their documents convey a clear sense of meaning. They will work during drafting and review sessions to refine meaning, not just grammar and presentation.
Barth's comment that best outcomes occur through the process of consensus and not enforcement is important to keep in mind. My interest in this approach is to help ensure that objective assessment of a body of work is seen through relatively clear eyes and unfettered minds. See what Dave Snowden has to say about our inability to maintain clear, focused objectivity. In particular I see Snowden's effect called "self-confirmation" interfere with the process of constructing meaning in documents. Snowden describes self-confirmation as follows:
Self-confirmation is our ignoring any evidence that disturbs our pre-judgements or hypotheses; rationalization means that we only search for data that will support those pre-judgements.
In our McCulley/Cuppan work we have seen, far too many times, discussions on what to "say" in a document driven by personality, where the loudest or longest exhortations, or often the pay grade of the commentator, decide what message or messages will or will not appear in a document. That is certainly not a best-practice approach for effective management and representation of knowledge in business documents.


Originally published on our Knowledge Management blog

20 April 2009

Improving the Practice of Document Review

Document reviews should be used as a tool to build quality into research and technical reports. In most handbooks for professional writers, review is recommended for clear and simple reasons: it is intended to identify problems and suggest improvements that enable an organization to produce documents that accomplish its goals and meet readers’ needs. It is true that science creates devices and drugs, but it is the documents that secure product approval and registration from the FDA and other regulatory agencies.

To create high quality documents in the most efficient manner, reviews must take place at various stages of document development. No matter the stage, all reviews should be strategic—that is, they need to address the fundamental question of whether the document makes the right argument about the data described in the report. Reviewers should ask if the document stands up to challenge and fully justifies its conclusions. They should ask whether the reader is given enough context to understand the positions expressed in the document.

Review allows subject matter experts and upper management to add information that may not be available to authors. Review offers an opportunity for building consensus across functions within an organization.

Review is a process of evaluation that focuses on the functional elements of a document (what the document is supposed to ‘do’ or supposed to ‘say’). We can characterize the major purposes of review in descending order of importance as follows:
  • Attending to purpose: confirming that content matches the purpose of the document; that the logic of the arguments is complete and relevant; and that the organization of the document content will readily support what the reader wants to do with the document.
  • Attending to audience: confirming precision of the discussion (semantics); sufficient contextual information; and ease of navigation.
  • Attending to compliance: confirming accuracy and completeness of content; consistency of style; and reasonably well-structured grammar.
Successful collaborative document development and review practices always include the following attributes:
  1. Involvement of critical stakeholders early, defining their roles and responsibilities.
  2. Articulation of the targeted scope, purpose(s), and message(s) for the final document.
  3. Shared quality standards for the final document product and formally described procedural agendas for the who, what, when, and why of review.
  4. Identification and planning of the phases of review and associated priorities.

Originally published on our Knowledge Management blog

01 April 2009

How Do People At FDA Read Documents On-screen?

With the substantial move to submitting electronic documents versus paper documents to FDA, it is useful to pause and consider how somebody actually reads a large complex technical document on screen.

Research from reading theory, human factors studies, cognitive psychology, and technical communication has helped us at McCulley/Cuppan develop a set of assumptions regarding reading behaviors for online texts. We are unaware of any studies looking directly at the ways in which regulatory reviewers approach electronic texts. However, there is research examining the ways in which readers use electronic documents. From these studies and our FDA reviewer interview data we have constructed a set of assumptions about the online reading behaviors deployed by readers in regulatory agencies like FDA.

Research suggests that users do not respond passively to a system but instead have goals and expectations from which they make inferences and predictions (Marchionini). While users possess mental models, abilities, and preferences that are unique, the regulatory reader is largely very familiar with the structure of regulatory submission documents and the sub-genres that constitute a drug filing. As a result of this highly developed genre knowledge, the regulatory readers of electronic submissions are likely to share a similar schema and engage in particular tasks when reading documents. These reading tasks include constant questioning of the drug and device sponsors’ methods and results. 

The challenge for sponsors putting together electronic submissions is how best to satisfy the reader’s expectations, expectations based on the structure and organization of paper documents. Radically redesigning a submission document to take advantage of online text would most likely strip away the cues that regulatory readers rely on as they work. In addition, redesigning a submission document places sponsors at risk of being seen as “nonconforming” to agency standards for documents and data set designs.

Because users generally interpret unfamiliar concepts “in terms of existing procedures or schemata” (Hammond), the key for electronic submissions is to adhere to readers’ expectations by making clear that the necessary elements of the submission are included and structured logically. Once readers recognize and accept the structure of information in an electronic submission document, they can then take advantage of hypertext features that make review tasks easier and less time consuming than they are with paper submissions. Among these features should be an organizational structure and text format that enable readers to see clearly a hierarchy of information, find specific information quickly, and annotate and store information.

Several studies have compared reading practices of users viewing paper and electronic texts. Results from these studies (Leventhal et al. 1993, Smith and Savory 1989, Gould et al., 1987) indicate that often reading information on a screen takes more time than on paper, leading to a performance deficit of “between 20 and 30% when reading from screen” (Dillon, 1994). 

However, as studies of the SuperBook project indicate, when an electronic text is designed to anticipate users’ needs and reading strategies, time on task can actually be reduced and search accuracy improved. These studies of the SuperBook project, conceptualized by the cognitive sciences research group at Bellcore during the mid-1980s, evaluated the usability of an electronic text over a print textbook. Although the first experiment indicated that speed and accuracy were no better for the electronic SuperBook browser than printed text, later experiments that used a revised version of SuperBook indicated an advantage in both speed and search accuracy of the electronic text over the print text. In particular, the revised version reduced search response times and modified search techniques, incorporated advance organizers such as displaying the Table of Contents continuously, and revised the placement of graphics so that they did not overlay the Table of Contents window as they had previously. These revisions resulted in a 25% advantage in both reading speed and accuracy of the electronic text over the print version (Landauer et al., 1993). 

Data from Dillon (1994) indicate that readers can locate information just as quickly in electronic texts as in paper, as long as the reader is given an accurate model of the information structure and is not required to read dense and lengthy portions of text, as lengthy sections on screen can lead to speed deficits.

The impact of information structure on reading speed is investigated in Hornbaek and Frokjaer’s 2001 study. The researchers studied three different interfaces for electronic documents to determine which design facilitated the fastest reading speed. Results indicated that subjects using fisheye interfaces read documents faster than they did with linear and overview+detail interfaces. The authors recommend fisheye interfaces, which reduce navigation time by distorting the document so that the “important parts” of a document (i.e. first and last paragraphs, headings, topic sentences) are immediately visible to readers and the rest of the information in a document can be expanded and viewed with the click of the mouse. According to Hornbaek and Frokjaer, this interface encourages readers to employ “an overview oriented reading style.” 

In addition to the impacts of different interfaces for electronic documents, the spacing, size, and style of font may also affect time on task. Kruk and Muter’s 1984 study indicated ways in which the spacing of text on the screen impacts reading time: single-spaced text produced reading speeds slightly more than 10% slower than double-spaced text.
Research on font style is less conclusive. Several studies (Bernard et al. 2001; Schriver 1997; Hartley and Rooum 1983) have investigated the effects of font styles on reading efficiency and legibility and come up with inconclusive findings. However, as Williams (2000) explains, users may have problems reading serif fonts on the screen because they can appear “blocky and disproportionately large, especially when displayed in small type sizes or on low-resolution screens.” Williams also notes that due to the distance from the eye to the screen and the fact that the majority of users do not have perfect vision, font sizes should not be any smaller than 12 points.

I’ll continue this discussion in another blog and talk about the impact of electronic submissions on regulatory reader time-on-task and comprehension. 


Originally published on our Knowledge Management blog

09 March 2009

Medical and Pharmaceutical Writers Must Use Knowledge Management Tools To Create Their Documents

In a past blog post I mentioned that full-time writers in the life sciences must see themselves as much more than just writers. I argued that writers engaged in pharmaceutical, device, and clinical research must see themselves as knowledge managers, not merely the managers of data or the "shapers" of information. I have argued that writers must do more to help control the knowledge environment. In this post I suggest writers must make use of knowledge management tools to help them represent the explicit, and more importantly, the tacit knowledge of a development or research project.

It is to this point that I want to bring to your attention two posts by Arjun Thomas on the excellent blog Project Management Tips.  The first post is titled Knowledge Inventory.  In particular, I like Thomas' discussion regarding the identification of tacit knowledge within a development team or organization. I believe it is critical for professional writers to see as part of their job the need to track and then leverage tacit knowledge within the documents they are tasked to produce.

In the second post Thomas talks about different methods to maximize the capture of both tacit and explicit knowledge. In particular I want to bring your attention to the concept of the Information Repository that Thomas presents. I suggest that as writers what you really want to create is a Knowledge Repository. This is something we do with project teams in our consulting practice at McCulley/Cuppan.


The Knowledge Repository as deployed by a writer is a structured table that keeps track of all the critical messages and issues that must be addressed in their document. The table consists of columns that track the following:
  1. Messages the writer wants the reader to absorb and understand about the development work or research described within the report.
  2. The issues (questions) the reader will likely have about each message and the nature of the development work or research described in the report.
  3. The answers to each question.
  4. The underlying rationale as to why each answer is the most appropriate response.
  5. The specific data or precedents that support the line of thinking described by the message, response, and rationale.
  6. The subject expert(s) who best know the specific message and the data to be described in the report.
Constructing such a table provides writers with a map, one that frames the intellectual argument of a document and directs them (and others within the team or organization) to the knowledge sources that must be mined to ensure the document fulfills its intended strategic purpose.
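
Because the repository is just a structured table, it is easy to represent in whatever tool a team prefers. Here is a minimal sketch in Python of one way to model a repository row and run a simple gap analysis; the class and field names are illustrative assumptions on my part, not a McCulley/Cuppan tool.

```python
from dataclasses import dataclass, field

@dataclass
class RepositoryRow:
    """One row of a Knowledge Repository: a message and its support.

    Field names are illustrative; adapt them to your own repository columns.
    """
    message: str  # what the reader should absorb and understand
    issues: list[str] = field(default_factory=list)           # questions the reader will likely raise
    answers: list[str] = field(default_factory=list)          # the answer to each question
    rationale: str = ""                                       # why each answer is the most appropriate response
    supporting_data: list[str] = field(default_factory=list)  # data or precedents behind the argument
    subject_experts: list[str] = field(default_factory=list)  # who best knows the message and the data

def gap_analysis(rows: list[RepositoryRow]) -> list[RepositoryRow]:
    """Flag rows whose argument is incomplete: unanswered questions,
    missing rationale, or answers with no supporting data yet."""
    return [
        r for r in rows
        if len(r.answers) < len(r.issues)
        or not r.rationale
        or not r.supporting_data
    ]
```

A repository of several hundred such rows can then serve double duty, first for gap analysis and then as the organizing map for the sections of the document, which is how we use it with project teams.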
I am working with my colleague, Philip Bernick, on a project right now where we are guiding a client through the process of developing this type of Knowledge Repository. The repository now contains about 600 discrete rows of information. We are using the repository in a variety of ways. Principally as a tool for gap analysis and then as an organizing tool to map out the various sections of  a business-critical document that the development team must produce over the next nine months.

Highly effective writers working in the pharmaceutical and medical device industry understand the importance of using knowledge management tools. They understand that true knowledge retention is not an easy task and that it is important to capture knowledge in written form. They also understand that the method must be well-defined, easy to use, and scalable to the demands of the project, or else you will end up with a mess, not a knowledge repository.


Originally published on our Knowledge Management blog

04 March 2009

Building and Using Knowledge

Here is an interesting post by Dave Snowden at Cognitive Edge postulating his Seven Rules of Knowledge Management. In particular I am drawn to Snowden's Rule No. 7--
We always know more than we can say, and we will always say more than we can write down.
This is such a profound truism. The process of moving from mind to speech to written word involves moving through different media, and as in the physical sciences, a change in medium can result in signal decay. Unfortunately, this is a truism that need not hold, because there are writing tools designed to help capture knowledge and reduce its loss.

The other rule I find intriguing is Rule No. 4--
Everything is fragmented.
Snowden postulates that humans are wired to deal with fragmented information in an unstructured environment, and that this is why we have problems working within the confines of highly structured documents. An interesting theory. I am trying to find more research on this topic. If true, then when I see my clients struggle with their document templates or engage in reviews best described as chaos, I can just sit back, sigh, and say: "well, they cannot help it, they are just wired that way."

All kidding aside, Snowden's observations are worth keeping in mind whenever you plan a large collaborative project or map out work processes, especially documentation processes.


Originally published on our Knowledge Management blog

22 February 2009

Rethinking the Design of PowerPoint Slides: Claim-Evidence Structure

This post continues the discussion regarding the presentation of technical and research information through the medium of PowerPoint. My assertion in the last post was that the users of PowerPoint are the principal party at fault for lousy presentations and the wholesale disregard for their audience.
One of the criticisms leveled against technical PPT slides is the overuse (perhaps abuse is a better descriptor) of the topic/subtopic organization structure. This outline format will routinely place critical information in subordinated positions.

So one of the simple ways PPT presentations can be improved is to follow the BLUF principle: Bottom Line Up Front. In such an approach you design slides to follow the basic structure of an argument as defined by Stephen Toulmin. Here is a link that provides a nice overview of Toulmin's argument model.

In essence a well-crafted argument must have three components:
  1. Your Claim
  2. The Evidence (Data underlying the claim)
  3. The underlying principle that in essence answers the question: why do the data make your claim true?
I would suggest that this should be the basic organizational structure for all forms of technical/scientific communication, be it a technical research report or a technical presentation. In our McCulley/Cuppan consulting practice we are always reminding clients to design documents and presentations in this manner.

I want to bring to your attention a marvelous web page at Penn State University addressing the use of the claim-evidence-principle model of PPT slide design. The creators of this web page have done a nice job of loading example PPT slides, references to additional resource information, and a really nice bibliography on the genre of presentations. Here is the link to the website: http://writing.engr.psu.edu/slides.html. Take a couple of minutes to check it out.


Originally published on our Knowledge Management blog

31 January 2009

Just What Do We Mean by Collaborative vs Cooperative?

I spent considerable time over the past two weeks working with three different clients developing three different business platforms, with three different work cultures, in three different geographic locations. Yet they all do their development work, especially mission-critical development work and associated documentation (such as regulatory submissions), in essentially the same way. All three companies engage in cooperative work, with very little collaboration occurring at any time in the process. Generally, when collaboration does happen, it is very, very late in the process and occurs only during meetings to address review comments on documents. That is it for collaboration: a thin veneer very late in the process.

In all three instances, the people I work with believe they are operating in very collaborative work environments. They sit in dismay listening to my characterization of their work practices as described above. But after we peel the onion, they begin to appreciate the distinction between working merely at the level of cooperation and working at the level of collaboration. My starting point with clients is to provide an effective working definition of what we at McCulley/Cuppan mean by collaborative versus cooperative work practices.

As a starting point for any discussion we have to examine the fundamental difference between collaboration and cooperation. The line of demarcation is the level of formality in the relationships between departments or stakeholders in the conduct of work to support a common goal, which in the pharmaceutical industry is to bring a new drug or line extension to market. I tell my clients that collaboration involves these departments or stakeholders coming together and fundamentally changing their individual approaches to sharing resources and responsibilities as well as ways of working and information sharing. Cooperation, on the other hand, is where departments or stakeholders maintain their separate mandates and responsibilities, do most work as they see appropriate (and generally in isolation from others on the project), but may agree to do some work together or present work for review by other stakeholders or departments in order to meet a common goal.

To help drive further discussion on just what do we mean by collaborative versus cooperative, I suggest you read this David Eaves post discussing his perspective on cooperating versus collaborating.

Originally published on our Knowledge Management blog