
14 August 2012

Much can be Done to Improve Reviewing of Research Protocols


I have been away from the blog for most of the summer. It was not an intended absence; rather, a heavy load of work and personal obligations kept me from posting here.

We have been doing several projects this summer related to the planning, writing, and reviewing of clinical research protocols. In this post I want to share some thoughts about review.

We believe that protocol reviewers need a more refined mental model for reviewing drafts of their documents: concentrate on the big-picture rhetorical concerns during the early rounds of review, setting aside edits and stylistic corrections, and then attend exclusively to those elements in the final rounds.

In work with several clients, we find that reviewers confuse the roles of reviewer and editor, and they also confuse the timing of when edits should be made. By this I mean we see many reviewers operating at the sentence and word level in the initial review of the protocol concept document. The protocol concept document is where all the big-picture details of study design and conduct must be ironed out, before anyone sweats the details at the paragraph and sentence level. Based on our observations and interviews, we suggest that better guidance is needed to define the roles of reviewers and editors and the time points at which each applies.

We recommend enacting performance criteria, with subsequent measurement, to help review teams better understand and appreciate the need to attend to the intellectual attributes of a planned research trial. When I ask people in my workshops how many protocol amendments they average per study, the response is always very similar: a collective groan followed by the retort, "we average way too many." So it seems it is time to change methods. It is at this point in the conversation with clients that I invoke the working definition of insanity often attributed to Albert Einstein: to continue to deploy the same methods, but then expect different outcomes.

Our assessment of work practices at a number of pharmaceutical companies suggests that reviewer discipline is poor: reviewers fail to participate in early draft reviews at all, or fail to participate in meaningful ways, and they extend reviews of certain protocol sections without really improving communication quality. When a protocol goes through six rounds of review, something is likely not right; when the Background section is actively reviewed in all six rounds, something is surely not right.

We recommend that, late in the protocol development process, guidance on the line-level review of text be enacted to ensure precision and consistency. We find huge inconsistencies and ambiguities with three heavily used modal verbs: may, should, and must. A sentence such as "subjects should fast for 8 hours before dosing" reads to some reviewers as a firm requirement and to others as a mere recommendation. We also find that many protocols do a poor job of characterizing agency: for instance, who has responsibility for decisions or for assuring the integrity of patient data. And reviewers rarely consider the temporal frame of reference for information in a protocol. We find huge inconsistencies where, in a given paragraph, the language starts in the present tense, moves to the future tense, and then shifts back to the present.

18 April 2011

How the Most Sophisticated Documentation Groups Operate

At the apex of our version of the Documentation Capability Maturity Model are the Level 6 "Optimizing" groups. These are very sophisticated writing groups that continually look for ways to enhance work practices and processes so as to better serve their customers' needs.

At this level, the job descriptions for all the subject matter experts contain extensive descriptions of their roles and responsibilities in the development of high-quality document products. Work performance is judged not merely on how well they execute the design and conduct of studies, but also on how effective their documents are in supporting organizational strategies and economic objectives.

At this level, the writing groups rely upon carefully defined document quality standards that reach well beyond style guides and template preferences. These groups articulate detailed guidance for executing effective strategic reviews. All understand the importance of working to shared standards over individual preferences. Reviewers authenticate documents and sign off that they meet strategic intentions and communication quality standards. If problems arise downstream, the reviewers are held culpable for them.

Documentation project management inside a Level 6 writing group tracks the amount of time, along with other parameters, that individuals apply to planning, authoring, and reviewing documents. Performance is always reviewed for "lessons learned" at the end of all major writing projects.

At this level, there is a clear commitment to assessing document usability for the target audience, and even to testing document designs for certain types of documents (such as clinical study protocols) early in the document life cycle.

These writing groups take full advantage of authoring tools to ensure information is generated once and then repurposed to other documents as a drug or device asset moves forward in the development life cycle.

Lastly, these very sophisticated groups make time in their busy schedules for innovation both in terms of work practice and work tools.

I do not know of any groups in the pharma or med device industries that have all of the attributes described above. Do you?

22 December 2010

Not knowing when good is good enough in writing regulatory documentation has a huge cost


We do not talk much on this blog about the use of language or the application of terms in science writing. The principal reason is that much of what we see in regulatory submission documents is genuinely “good enough.” However, others do not necessarily see it that way. I want to share with you how discussions in review roundtables can end up focused at absurd levels of detail, driven by a misapplied sense of what establishes quality communication.

In our consulting work, we try to be disciplined during our document reviews and comment on language only when it truly obscures or alters meaning. Being grammatically perfect in regulatory submission documents is a nice notion, but in practice it consumes far too much time and organizational energy and yields little in terms of outcomes.

We share this point with people all the time, but at times the advice goes unheeded; even worse, at times people just do not know when to move on and address the really big concerns in their documents.

A case in point is a long-winded discussion I observed in a review meeting over the use of the term “very critical.” In a medical sense, “critical” describes a patient’s condition as having unstable and abnormal vital signs and other unfavorable indicators. In theory, the meaning of critical is a black-or-white proposition without gradation: something is either critical or it is not. Therefore no adverb, like “very,” should sit in front of “critical” to connote a measurable degree of criticality. In this roundtable review, the team got caught up in a 30-minute discussion that involved only two people arguing whether to use the term “very critical” or change it to “critical.”

Being pragmatic, I’d have to say: “Guys, what are you thinking? You hold a team hostage for 30 minutes to argue over grammatical accuracy? To argue over something that will not matter if and when it is read by a regulatory reviewer?” There were 10 professionals sitting in the room, and 8 did nothing for 30 minutes. The cost of salaries alone is argument enough to say, “Forget about it, let’s move on; we cannot afford to argue over such insignificant detail.” When we add in the opportunity cost (what these 10 people collectively could have been doing with their time), the argument makes itself.
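
To put rough numbers on it (the hourly rate here is my assumption, purely for illustration): at a fully loaded cost of $150 per person per hour, 10 professionals tied up for half an hour is 10 × 0.5 × $150 = $750 spent adjudicating a single adverb. Multiply that by the number of such debates in a typical review cycle and the waste stops looking insignificant.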

This episode gets played out time and time again in review sessions all over the pharma and medical device industries, and it is the reason I am steadfast in my position that the vast majority of people involved in authorship and review do not know the answer to the question “How do you know when good is good enough?” The end result is that inordinate amounts of time are applied at the wrong level of detail in reports and submission documents.

06 October 2010

The importance of understanding the reader and the need to be informed by reading theory

At the ExLPharma conference where I spoke two weeks ago on review, I talked for some time about the importance of reviewers understanding their readers and being informed by reading theory so as to become truly good reviewers. My comments were largely well received, and most of the points I raised regarding reading theory were truly novel to the group. I came away from the conference with the notion that few people actively engaged in creating business documents have invested time in thinking about what regulatory readers actually “do” with their documents and how they actually read these mission-critical documents.

That most people merely inspect a document for accuracy tells me that the prevalent view of reading is “if we get the numbers and the words right, then we are good to go, and surely everyone will understand what we mean.”

I remain perplexed as to why this view is so uniformly applied. I am even more perplexed why some people I cross paths with in client settings struggle to accept the notion that readers construct their own meaning from a text, so that "what a text means" can differ from reader to reader. Readers construct meaning based not only on the visual cues in the text (the words and the format of the page itself) but also on the knowledge they already have stored in memory. This pre-existing knowledge readers bring with them as they encounter a text is very potent, and it is not much appreciated by many I work with across the life sciences.


Originally published on our Knowledge Management blog

01 September 2010

More Regarding the Merit of Peer Review

Last week the New York Times ran a major article devoted to web alternatives to peer review. Actually, it does not describe an alternative to peer review but rather an alternative way of doing peer review via the web. The concept is not new, but with major coverage in the NYT, perhaps web-mediated review is starting to get some real traction.

Way back in 2004, Gerry McKiernan, Science and Technology Librarian at Iowa State University, posted a great paper suggesting internet-based peer review alternatives: "Peer Review in the Internet Age: Five Easy Pieces."

A last point for this post: here is an interesting editorial, "Is Peer Review Censorship?", written by Arturo Casadevall and Ferric C. Fang, which appeared in a journal of the American Society for Microbiology. As a point of emphasis, echoing comments we have made previously on this blog, I note their observation that:
The current system persists despite abundant evidence of imperfections in the peer review process. Most scientists would agree that peer review improves manuscripts and prevents some errors in publication. However, although there is widespread consensus among scientists that peer review is a good thing, there are remarkably little data that the system works as intended.
Once again: why is a community so driven by evidence so willing to turn a blind eye to the paucity of support for the practice of peer review?


Originally published on our Knowledge Management blog

16 July 2009

How Do You Measure Communication Quality?

One of the truisms we see in our McCulley/Cuppan consulting work is that rounds of document review tend to go on until the document must be sent somewhere. That is why we say that in the pharmaceutical industry, the opportunities for making changes to a document are virtually limitless. The problem driving this situation is that most people involved with the authoring and review process do not have good markers to inform them of the overall communication quality of a document. Without good markers, they are left to measure document quality with really poor ones: grammatical soundness, how many people have reviewed the document, how many rounds of review it has been through, and how many comments have been leveled on its text and data. Unfortunately, grammatical soundness correlates only weakly with the communication quality of a document, and the other three markers do not correlate with it at all.

To paraphrase Steve Jong in his paper "You Get What You Measure—So Measure Quality" (you can read it here): "if you don't measure it, you'll never get it." This is so true of document communication quality. In order to measure communication quality you have to employ meaningful markers. We find our clients typically employ only two markers that are useful: accuracy and compliance. Unfortunately, neither of these does much to measure the quality of argument, the soundness of logic, or the overall usability of a document for the end user. There are some useful markers to consider for measuring these document attributes. More on these markers in my next post.


Originally published on our Knowledge Management blog

11 June 2009

Unproductive Review Practices: Why They're Still Around Even Though People Know Better

In "Why the Focus on Review Practices?," my colleague Jessica Mahajan highlights the observation made in our McCulley/Cuppan consulting that reviewers, who are expected to work toward enhancing document quality to improve its effectiveness, tend to direct much of their review effort to mechanical and stylistic document elements (syntax, word choice, punctuation, grammar) at the expense of the intellectual work the document is supposed to do. One of my previous posts "How Do We Get People to Apply Improved Work Practices?" explores ways to motivate change when change would provide significant benefits to both individual and organization. In turn, I have a theory about why we continually see subject matter expertise for review applied to the task of copy-editing, and why that practice is so hard to change. The theory is built around how we:
  • Learn to write.
  • Learn to review.
  • Ask for review.

How We Learn to Write
Think about how you learned to write. If your experience was like that of the kids I visit in middle and high schools, then your teachers tried to encourage you to write in a context and with a purpose. Unfortunately, they likely ended up using rubrics that are all about structure, word usage, and typography: rubrics with little regard for how well the writing fulfilled its purpose and satisfied readers' needs. A rubric I saw recently (I collect these things, and this one was typical) graded students on everything but content; as long as the writing followed the specified form, it got top marks (an A in this instance). A really interesting paper on black holes and the physics behind them (which may have been beyond some readers, but which worked really hard to make the ideas accessible to a varied audience) got a B because of errors in typography. More popular are the five or six equally weighted measures called writing traits. Students are given points for:
  1. Ideas and Content
  2. Organization
  3. Voice
  4. Word Choice
  5. Sentence Fluency
  6. Conventions
Just look at this: how can the assemblage of ideas and content be worth no more than word choice or sentence fluency? When our ideas are given so little weight (~17% here), is it any wonder that people attend to form over function?

This is how we learn to write--the texts we create are based on finished models that rarely tell us what makes them good models. Further, we are never given insight into the process of crafting and iteratively refining a text, that is, a model for what should be in place in a first-draft document versus a second draft. In most learning environments, documents are judged on how well they adhere to rules constructed for talking about how language should work.

Unfortunately, this approach does not change when we get into higher education. Some courses in technical degree programs have a writing component. But if you test out of freshman composition (where the previous description is still pretty accurate) then at best you may get one required course in technical communication. This might be taught by a creative writing student who is mostly interested in finishing their MFA and thinks that learning to "write" a well-organized memo (form) should be one of the four major projects students will prepare for the course. Because creative writing and technical writing don't have much to offer each other, right?

There are exceptions to this scenario, but unfortunately the above is probably a pretty good description of the rule. More to the point, grading is hard (especially when there are no good rules for anything other than grammar and punctuation), and most students are primarily interested in receiving top marks. So students simply want to know what they have to do to get the top mark. The model is a finished document that is good enough--in terms of content, organization, voice, word choice, sentence fluency, and conventions--to get a top mark. Yet the target given to students likely addresses only five of those six attributes; the one left out is content. So student focus and energy go into fulfilling the other five.
As an informative aside: when departments ask for help in training their people (students or employees), the most frequent initial request is to "just help them with their grammar." This despite the fact that we know that when we focus on grammar, the quality of writing, measured by what the writing does, goes down.
Learning to write in the workplace is slightly different. Here we're given a finished document and told to 'make it look like this'. The document is complete, but it bears no annotation or associated guidance to suggest what attributes make it superior and worthy of the status of a 'model'. In a worst-case scenario, someone's internal dialog might go something like this: "I'll look pretty stupid if I ask my boss what it is about this document that makes it a good model--so I won't. I mean, it is obvious, right? And besides, I (choose all that apply: a. got A's in English, b. have a PhD, c. have written technical reports before, d. have all the data in the report) so... I must be okay, right?"

In our McCulley/Cuppan consulting, we constantly see a model for constructing documents in which new documents are based on old ones. Authors endeavor to make new reports as complete as possible before asking anyone to give them a look. I can recall several instances where authors were told to write until they had nothing else to say, and only then would their supervisor be ready to look at the report. This approach to writing--modeling a preexisting document and making it as complete as possible before bringing extra eyes in to help--sets up a workplace dynamic that sabotages the potential for productive change.

How We Learn to Review
How we learn to review follows the model of how we learn to write. In school, students construct papers that respond to prompts and are graded. We spend our time learning how to construct sentences that are grammatically correct, forgetting that people can get past a misspelled word or two, or a tense problem, if we have something to say. Often the only thing teachers can use to distinguish one useful response to the prompt from another is the mechanical elements of a sentence. And they can't give everyone an A. That would be grade inflation, or worse!

Papers are returned to students with lots of blood (well, red ink, since lots of teachers still like those pens) identifying misspelled words, grammar errors, and organization problems. In other words, the student's work is 'assessed', but not reviewed. And I can't blame the teachers--this is what they were trained to do, and helping students learn to communicate well via the written word involves a lot of reading (not fun or easy, I promise). Identifying the mechanical problems in a document is easiest and fastest, which is a consideration when you have thirty or more papers to read in an evening. I have a colleague teaching so many sections that she's got 115 papers to read at one sitting!
The problem is compounded by the fact that in competitive societies we're taught not to collaborate. Rare is the teacher who has students collaborating on projects or written work, though thankfully this is changing. We learn not to share our answers with others ('cause that's cheating). What we practice in school is what we bring to the workplace, supplemented with observations and suggestions from the people who review with us, which help us construct new models. In terms of document review, we start with the models we got from our teachers: fix the typos, suggest alternative wording, and massage the format.

Since our colleagues use this model too, we stick with it. In other words, we do what is familiar. We also have to do something during the review. In the absence of more specific instructions, we have to let people know we put reasonable effort into the review exercise. After all, review is an activity, and one way to measure the extent of our activity is to count up the total number of review remarks we leave on a document. The more remarks, the better job we did as reviewers. So we turn our attention to verifying that the numbers in the report are accurate and make sure to 'dot the i, cross the t, and don't forget that comma'.

We are conditioned to be reactive reviewers--we respond to what is present in a document, not what is missing. We are conditioned to operate on the concept of a finished document, no matter where the document sits in the drafting process. Even with a first draft we start at page one and work straight through until we're finished--that is, finished with the document, out of time, or out of energy. We see this all the time in our assessments of review practice: there is a straight-line decline in the frequency of review remarks per page as you move through a document. We see Draft 1 report synopses and summaries overloaded with review remarks even though, in the body of the report, the Discussion section is only 30% complete and there is no Conclusion section yet.

Through conditioning in the workplace, we have no sense of strategic review. The prevalent strategy is simply to get the review done so we can 'get back to our day job'. We often hold a reverential belief that all it takes to succeed with a scientific or technical report is to get the data right--that is, make sure the data are accurate. We are also conditioned to think that all you have to do is get the study design right and the rest does not matter much--that the report is merely a repository for data. So we are conditioned to discount the value of scientific reports, because constructing well-written, clear, concise, and informative documents takes time away from our 'day job' of conducting science.

In other posts we've talked about the importance of review and the huge commitment organizations have made to review. You would think that, since it is so important, more time would be spent on training people to become more effective reviewers--particularly during their professional training. Yet we don't see this. We've not found a single academic program offering credentials in technical communication or medical writing that offers a course in review (as opposed to editing)--yet the complexity and difficulty of review would certainly warrant one.

Most reviewers learn to review on the job. How do we know? We've asked thousands of people working in organizations covering a broad spectrum of disciplines, and we've read the findings of others who've asked. Further, a quick survey of the most popular books in technical and professional communication and medical writing shows they devote little real estate to the topic of review: in a three-hundred-page text, we find less than 5% devoted to review. Yet review is certainly more than 5% of the process.

How We Ask for Review
When we analyze review practices and products for our clients, we look at more than just the documents under review. We also assess how people communicate about review and the tools they use to facilitate it. Typically, communication regarding the task of review is a simple request: "Please review this by such-and-such (a date)." We rarely find instructions from the author to help orient reviewers: "Please have a look at section x because I'm really having trouble explaining y." We'll post a longer discussion of this topic, but the point to be made here is that authors rarely help their reviewers with instructions or review requests that focus reviewers on what would help the document, and the authors, advance their work.

Our assessment of review practices suggests that the collective review effort does little to improve a document's communication quality. It will likely improve the accuracy of the data and compliance with a template, and it will produce sentences that are all grammatically correct. But the conveyance of messages and the logic of arguments may remain murky or even suspect. Given everything I have said up to this point, why would you expect a different outcome?

The Theory
So here is the theory: expensive subject matter experts are reduced to copy-editing because that is what they know best (they come into the professional box with plenty of conditioning from the academy), it is familiar, it is what everybody else does, and their organization hasn't offered them a better alternative. Further, the situation won't get any better, because even if (when) they find a better alternative they're too busy to change (they have their day job to do, and besides, they have too many documents to review to be fettered by revising ways of working), and even if they wanted to change things, the organization's leadership wouldn't buy into it.

Fortunately, much can be done to really 'move the needle on the meter' and improve individual and organizational ways of working when it comes to the task of document review. I know this to be the case from the consulting and training work we do as we've helped a number of organizations improve review practices and document quality.


Originally published on our Knowledge Management blog

20 April 2009

Improving the Practice of Document Review

Document reviews should be used as a tool to build quality into research and technical reports. In most handbooks for professional writers, review is recommended for clear and simple reasons: it is intended to identify problems and suggest improvements that enable an organization to produce documents that accomplish its goals and meet readers’ needs. It is true that science creates devices and drugs, but it is the documents that secure product approval and registration from the FDA and other regulatory agencies.

To create high-quality documents in the most efficient manner, reviews must take place at various stages of document development. No matter the stage, all reviews should be strategic—that is, they need to address the fundamental question of whether the document makes the right argument about the data described in the report. Reviewers should ask whether the document stands up to challenge and fully justifies its conclusions. They should ask whether the reader is given enough context to understand the positions expressed in the document.

Review allows subject matter experts and upper management to add information that may not be available to authors. Review offers an opportunity for building consensus across functions within an organization.

Review is a process of evaluation that focuses on the functional elements of a document (what the document is supposed to ‘do’ or supposed to ‘say’). We can characterize the major purposes of review in descending order of importance as follows:
  • Attending to purpose: confirming that the content matches the purpose of the document, that the logic of the arguments is complete and relevant, and that the organization of the content will readily support what the reader wants to do with the document.
  • Attending to audience: confirming the precision of the discussion (semantics), sufficient contextual information, and ease of navigation.
  • Attending to compliance: confirming the accuracy and completeness of content, consistency of style, and reasonably well-structured grammar.
Successful collaborative document development and review practices always include the following attributes:
  1. Early involvement of critical stakeholders, with their roles and responsibilities defined.
  2. Articulation of the targeted scope, purpose(s), and message(s) for the final document.
  3. Shared quality standards for the final document product and formally described procedural agendas for the who, what, when, and why of review.
  4. Identification and planning of the phases of review and their associated priorities.

Originally published on our Knowledge Management blog