Recently, I was involved in an interesting project at a top-ten pharmaceutical company. The project entailed assessing the prevalent review practices of people working within one of the R&D groups. I examined the complete review record (from first draft to final draft) for several research reports produced within this group. The assessment involved both quantitative and qualitative analysis of review performance. Following are some of my thoughts and observations.
As anyone familiar with this blog knows, improving document review practices is of great concern to us at McCulley/Cuppan. Why do we, and our clients, keep coming back to the topic of review performance? The following observations from a recent consulting project provide some insight into why review is, or needs to be, a central focus for improving knowledge propagation and dissemination.
In this project we analyzed the effectiveness of a team in moving a document from conceptualization to final form. We did this through an extensive analysis of the review commentary generated at different stages of document development. The findings were consistent with what we've seen over years of assessing review practices for other clients: some good, some bad, and some ugly.
Bottom line--we found considerable room for improvement.
Following are some examples of what happens when resources and tools are misapplied during the review process.
Senior Management as Spell-Checker? On this project, which examined four business-critical documents, we found that throughout multiple drafts of each document (even up through the final draft) there were edits for word choice, punctuation, verb tense, and spelling made by senior management, including the group vice president. Let me repeat that--the vice president of the research group focused on making spelling edits. Why is senior management focusing on basic surface edits? That is one expensive copy editor. Should a senior official in a group be bogged down making line-level edits? Is that the best use of their time, talent, and insight? I think not. If that is their focus, then who is responsible for keeping the arguments presented in the documents strategic and logical? This is a common practice; perhaps there is some belief that tweaking grammar improves the rhetorical and semantic structure of a document. I think it is rather that these are easy elements to fix compared with assessing how well a document fulfills its intended logic and strategy.
Simultaneous review. What happens when you send a document to multiple people at the same time with the same review instructions (which are often merely "please review")? You get massive duplication of edits--to the tune of hundreds of same or similar edits per document. On top of that, the authors of these four documents had to deal with a variety of syntactic and lexical edits (structure and word choice) made to the same piece of text, but with slight variations. Whose edit do you choose? A common practice we find is that the edit made by the individual with the higher pay grade tends to trump all other recommendations.
Chaos reigns supreme. When a document is reviewed by upwards of 20 people through multiple drafts (and I mean multiple--like 5 to 8 rounds of review!), and the reviewers receive little guidance or control over what they may and may not do in the review process, chaos often reigns supreme. We find that work is constantly revisited, with everyone making continuous edits throughout the document--this is why we say the opportunities to revise a document are virtually limitless. A case in point: we looked at one document that moved through eight rounds of review. Yes, you read that correctly--eight rounds of review. Tracking review comments for just one section of this document (the numbers below reflect comments for only that one section), we found the following review performance:
Draft 1: 14 comments
Draft 2: 55 comments / 119 edits
Draft 3: 97 comments / 765 edits
Draft 4: 42 comments / 578 edits
Draft 5: 37 comments / 423 edits
Draft 6: 15 comments / 98 edits
Draft 7: 37 comments / 153 edits
Draft 8: 99 comments / 272 edits
Clearly, on this project the team had trouble establishing what they wanted to accomplish within this particular report section--and deciding when good was good enough.
I know some readers of this post may say, "Oh my gosh, that kind of performance would never happen with our document reviews." Keep in mind what I noted at the start of this post: such outcomes are all too common.
Originally published on our Knowledge Management blog