10 September 2011

Why Do Life Science Research Groups Fail to Improve Review Practices?

I've been away from posting entries here for much of the summer, as I had quite a bit going on with both McCulley/Cuppan projects and some personal writing projects. One of those writing projects is a collaboration with my long-standing colleague at the University of Delaware, Stephen Bernhardt. We worked together on a paper I submitted to the Journal of Business and Technical Communication (JBTC) summarizing our examination, at two large pharmaceutical companies, of the review practices applied to clinical research reports written to support new drug registration submissions.

The paper is titled Missed Opportunities in the Review and Revision of Clinical Study Reports. It is scheduled to be published next April in JBTC.

The paper summarizes findings derived from formal interviews, examinations of review efforts on succeeding drafts of various clinical study reports, observations of roundtable discussions of draft reports, and our formal assessments of document quality between early draft and final report versions.

The key finding is that document review was too focused on low-level edits rather than on global revisions that would improve the arguments and address audience concerns. We also found that time-consuming document reviews did not lead to demonstrable improvement in report quality, with evident and important problems left unattended from draft to draft. In interviews, most reviewers reported that over 80% of their review effort attended to intellectual rather than structural aspects of the document, whereas our assessment of those review efforts showed that nearly 75% of the effort in fact attended to numerical and grammatical accuracy and other structural aspects of the documents. Further, the interviews showed that very few individuals had any meaningful insight into what other reviewers were attempting to accomplish during the course of their reviews.

Reviews generated voluminous remarks and edits, but the vast majority of both in-text edits and marginal comments addressed data integrity, simple reordering of information, and low-level features of style and language. Few remarks in any round of review addressed construction, completeness, or representation of arguments, logic trails linking purpose and objectives to discussions and conclusions, resolution of difficult issues, or study design rationales that would satisfy a skeptical regulatory reader.

The volume and type of review remarks were relatively similar between early drafts and late drafts. Reviewers spread similar remarks in similar proportions throughout the documents, whether the review was of an early or late draft.

Reviews did not significantly improve communication quality, as measured by our assessment of the initial draft against the final report version; the review effort apparently had little impact on the communication quality of the final versions of the reports we examined.

Troublesome issues regarding soundness or completeness of evidence, irregularities in study conduct, and interpretation of data frequently went unaddressed in final reports.

A theme throughout the literature is that document review frequently brings into play conflicting or competing purposes. We clearly saw this too in our examinations.

We have now done extensive assessment of review performance in eight pharmaceutical companies. The problems mentioned above are clearly widespread.

Clearly there is a need for companies to become more evaluative and methodical regarding their own work practices, in part because the costs associated with planning, authoring, and reviewing individual research reports are substantial. If we consider the time and costs of authoring, review, and publication preparation, we estimate that a final clinical research report might range in cost from $50,000 to well over $200,000. If we then add in the opportunity costs of time spent on inefficient review efforts (most companies stipulate in their SOPs that reviews are to be completed in two rounds, yet few teams ever attain that standard) and of having to respond to regulatory agency inquiries prompted by a report's poor communication quality, the costs for a final-version clinical research report quickly swell, and the cost per page approaches $2,500: numbers I find incredible.

Even more astounding is the continued tolerance for such wholesale inefficiencies within these companies. The big question is why. Why do these life science organizations tolerate such costs? I recognize that changing non-productive, conditioned, inefficient practices is not an easy matter and that a company must do a lot of work to counter poor reviewer discipline and the ingrained tendencies of review teams to focus on low-level stylistic edits as opposed to high-level rhetorical concerns. However, other industries have placed a premium on good review performance and invested the effort to change practice and behavior to achieve the desired outcomes. 
