
07 February 2012

More on What is a Document?

So what is a document?

In response to my last blog post, I have been asked by several individuals—"so then what is a document?"

My short answer—"I do not know for sure."

Now for the long answer.

The widely accepted definition of a document is a textual record. This definition served us well in the past. But now, with digital records, semiotics, and information retrieval tools, I am not sure the definition meets the needs of how we communicate in 2012.

As early as the 1930s, Paul Otlet, an information scientist of considerable renown, suggested that the definition of documents also include images and even three-dimensional objects. I am not prepared to toss all the elements Otlet describes into the mix. But I am prepared to suggest that documents are organized physical evidence, and as such the organization transcends the classic definition of a document, a vehicle that is a less relevant communication medium in 2012 than it was in 1982. I do not have a preferred term, though I wish I did, but I do suggest we attempt to move away from the term document, as it suggests a domain for organized physical evidence that does not match the reality of the digital age.

Suzanne Briet suggested a definition some time ago: a document is evidence in support of a fact. I rather like this notion. She makes the point that documents should not be viewed as being concerned with texts, but with access to evidence. I suggest this is the essence of all the regulatory writing I talk about often in this blog. If one considers the models in place for electronic drug submissions, thinking in the classic terms of 8.5 x 11 and A4 pages is really not very useful.

Rather, it is better to be thinking in terms of taxonomies of information or perhaps even semiotics. Semiotics is the study of signs, indication, designation, signification, and communication, and it is closely related to the field of linguistics. I look at semiotics as a valid attribute for this discussion because the life sciences are driven by numbers, and what are numbers but signs and the significance of those signs.

Then there is Michael Buckland, who talks about how a key characteristic of “information-as-knowledge” is that it is intangible: one cannot touch it or measure it in any direct way. Knowledge, belief, and opinion are personal, subjective, and conceptual. Therefore, to communicate them, they have to be expressed, described, or represented in some physical way, as a signal or communication.

What we are really talking about in regulatory submission packages is the conveyance of knowledge. This conveyance often transcends the boundaries of a traditional text, that is, a document as it is generally defined. The Briet notion of "evidence in support of a fact" works well as a definition of a document, especially if we change the quote to read "evidence in support of a claim."


22 December 2010

Not knowing when good is good enough in writing regulatory documentation has a huge cost


We do not talk much on this blog about the use of language or the application of terms in science writing. The principal reason is that much of what we see in regulatory submission documents is genuinely “good enough.” However, others do not necessarily see it that way. I want to share with you how discussions in review roundtables can end up focused on absurd levels of detail under a misapplied sense of establishing quality communication.

In our consulting work, we try to be disciplined during our document reviews and comment on language only when it truly obscures or alters meaning. Being grammatically perfect in regulatory submission documents is a nice notion, but in practice it consumes far too much time and organizational energy and yields little in terms of outcomes.

We share this point with people all the time, but at times the advice goes unheeded and, even worse, at times people just do not know when to move on and address the really big concerns in their documents.

A case in point is a long-winded discussion I observed in a review meeting over the use of the term “very critical”. The term “critical” in a medical sense means: of a patient's condition, having unstable and abnormal vital signs and other unfavorable indicators. In theory, the meaning of critical is a black-or-white proposition without gradation. Something is either critical or it is not. Therefore there should be no adverbs like “very” in front of the term “critical” to connote a measurable degree of criticality. In this roundtable review, the team got caught up in a 30-minute discussion that involved only two people arguing whether to use the term “very critical” or change it to “critical.”

Being pragmatic, I’d have to say: “Guys, what are you thinking? You hold a team hostage for 30 minutes to argue over grammatical accuracy? To argue over something that will not matter when and if it is read by a regulatory reviewer?” There were 10 professionals sitting in the room, and 8 did nothing for 30 minutes. The cost of salaries alone is argument enough to say “Forget about it, let’s move on; we cannot afford to argue over such insignificant detail.” When we add in the opportunity cost (what these 10 people collectively could have been doing with their time), the argument makes itself.
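
To put rough numbers on the salary argument, here is a minimal sketch; the fully loaded hourly rate and the annual number of review sessions are assumptions chosen for illustration, not figures from the actual meeting.

```python
# Back-of-the-envelope cost of a 30-minute argument held in front of a
# full review team. The hourly rate and meeting count are assumptions.
ATTENDEES = 10
MEETING_HOURS = 0.5
LOADED_HOURLY_RATE_USD = 150   # assumed fully loaded cost per professional
REVIEWS_PER_YEAR = 100         # assumed number of such review sessions

direct_cost = ATTENDEES * MEETING_HOURS * LOADED_HOURLY_RATE_USD
print(f"Salary cost of one such argument: ${direct_cost:,.0f}")
print(f"Annualized across {REVIEWS_PER_YEAR} reviews: "
      f"${direct_cost * REVIEWS_PER_YEAR:,.0f}")
```

Even before counting the opportunity cost, the salary figure alone makes the point.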

This episode gets played out time and time again in review sessions all over the pharma and medical device industries, and it is the reason why I am steadfast in my position that the vast majority of people involved in authorship and review do not know the answer to the question “How do you know when good is good enough?” The end result is that inordinate amounts of time are applied at the wrong level of detail in reports and submission documents.

16 September 2010

Proving Document Quality

You say you produce high-quality document products? Great, now prove it! Although many can recognize quality when they read it, the prospect of measuring documentation quality seems dubious to many.

I work with a great many clients whose authors say they generate high-quality documents, but they can only show that their documents are accurate. That is, accurate in terms of template, source data, and grammar conformance. Beyond these three inspection parameters, many regard documentation quality as inherently immeasurable. I argue that these three parameters by themselves are wholly insufficient measures of document quality and that important aspects of how documents convey meaning can be measured.

Surely they must see other metrics of value, especially when considering factors like the number of regulatory questions elicited by errors of omission and commission in their very accurate submission documents or the number of protocol amendments generated over the life of their very accurate research protocol.

Some see additional metrics as meaningful, but not for their type of documentation. In the realm of clinical development documentation, many consider their output unique and beyond measurement. I hear more than a few medical writers say, “We know our own field intimately, and everywhere we look we see shades of gray and unique situations, so measuring is not useful,” and others in clinical development say, “We are unique, and your notions of quality do not apply to us.”

Still others find the prospect of measuring output impractical or suggest that even if they had data, “Against what standards could we make comparisons?”

Even easily measured attributes like accuracy are not effectively tracked. Documents are routinely verified/corrected by quality control operators, yet the output is rarely tabulated and statistically analyzed to provide a measure of writer performance. The intention is merely to ensure the document is archived as accurate in terms of template, source data, and grammar conformance.

I do not know of any pharmaceutical or medical device company that makes use of statistical quality control techniques to track its document quality. It would be easy to apply principles similar to those used in the auto industry or in pharmaceutical manufacturing. The auto manufacturers turn out millions of units every year, and each company tracks manufacturing defects, but not by examining every car. Instead, they pull a few cars off the assembly line and tear them apart. Pharmaceutical manufacturers test an appropriate sample from each manufacturing lot. In both cases the products are measured against a set of well-defined acceptance criteria.

Acceptance criteria of this kind exist for technical and scientific documents (in our work at McCulley/Cuppan we have developed a set of criteria that we use during our training and consulting work for clients), so these quality control methods could easily be deployed.
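
As a thought experiment, here is a minimal sketch of what lot-style acceptance sampling could look like when applied to documents. The criteria names, sample size, defect threshold, and simulated failure rate are all assumptions for illustration; they are not the McCulley/Cuppan criteria.

```python
import random

# Hypothetical acceptance criteria; a real program would use an
# organization's own document quality standards.
CRITERIA = [
    "template conformance",
    "source-data accuracy",
    "claims supported by evidence",
    "section-level coherence",
]

def inspect(document):
    """Return the criteria a document fails (stubbed for illustration)."""
    # In practice a trained QC reviewer scores the document against each
    # criterion; this stub simply simulates a 10% failure rate per criterion.
    return [c for c in CRITERIA if random.random() < 0.10]

def accept_lot(documents, sample_size=5, max_defective=1):
    """Sample a few documents from the 'lot' and accept or reject it,
    mirroring how auto and pharmaceutical manufacturers sample production."""
    sample = random.sample(documents, min(sample_size, len(documents)))
    defective = sum(1 for doc in sample if inspect(doc))
    return defective <= max_defective, defective

lot = [f"study-report-{i:02d}" for i in range(40)]
accepted, defective = accept_lot(lot)
print(f"Lot {'accepted' if accepted else 'rejected'}: "
      f"{defective} defective document(s) in the sample")
```

The simulation itself is beside the point; what matters is the shape of the method: define the criteria up front, inspect a sample rather than every document, and make the accept-or-reject decision explicit and trackable over time.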


Originally published on our Knowledge Management blog

16 February 2010

Improved Document Quality Does Pay

Quality control and improvement in document content and design can provide a significant financial benefit to pharmaceutical and medical device organizations. It is our observation that companies must learn that writing is a process open to continuous improvement.

However, I find in my consulting work that many people working in the life sciences question the merits of attempts to improve the communication quality of their technical and scientific documents. Quite a few of these people subscribe to the notion that as long as they get the study design right or have the correct data, that is good enough. This may be the case for a small subset of documents, but the assumption clearly does not apply to the vast majority of documents produced in the life sciences. It is our observation that an effort to improve document quality generally does pay off for the individual and the organization, though these pay-offs may not be easily seen unless there is an attempt to capture data regarding the impact of better document quality.

Here are a few examples where organizations instituted quality control programs and measured change in selected outcomes in order to answer questions about their document products and processes. These organizations found that document quality by design does pay.

The Motorola Corporation substantially improved its operation after instituting a document quality program. The company made changes in its R&D and Finance operation guides, focused on providing clearer directions and rationales and an easy-to-use format. These changes helped streamline processes and reduce delays from review and rework. The company calculated savings at US$ 20,000,000 per year. Source – Business Week, Oct. 25, 1992

The United Kingdom Department of Health and Social Security spent the equivalent of US$ 50,000 to develop and test a series of new forms. It has reported annual savings of over US$ 2.9 million in reduced staff time from the forms being properly completed the first time. Source – Plain Language Principles and Practice by Erwin R. Steinberg

The United Kingdom Department of Customs and Excise cut a 55% error rate to 3% by revising lost-baggage claim forms used by airline passengers. Source – American Institutes for Research

An aerospace contractor lost 15 consecutive contract proposals over a three-year period before a proposal quality group was called in to review the quality of the proposals and to help the proposal development teams institute new work practices and improve writing skills for developing large proposals. After instituting changes in documentation work practices and processes, the contractor “won” 11 of 12 proposals. Source – Shipley Associates, private communication


Originally published on our Knowledge Management blog

24 August 2009

How Do You Know When Good is Good Enough?

How do you know when good is good enough for any document you may produce?

I ask this question in every workshop I facilitate. Generally the response is a head nod followed by the comment "Yes, that is the question...I wish I had the answer." There is the rub. We rarely sit back and consider what attributes we need in place to have a high-quality communication product.

Often we will work on a document until we run out of time (I am convinced that time is the only marker used by the majority of people authoring documents in the pharmaceutical or medical device world). We do this because we have not defined what document quality "is."

Defining document communication quality means developing expectations or standards of quality. Standards can be applied at the level of an individual, team, or an organization. Defined standards or definitions of quality are prerequisites for measuring quality. If standards don’t exist, they must be designed.

Standards are explicit statements of expected quality. They may apply to writing and reviewing practice as well as to the document product. In terms of writing, document standards communicate expectations for how a particular document will communicate to the user what is to be known or to be done. In essence the standards establish the parameters that ensure a document achieves the desired results.

Originally published on our Knowledge Management blog

02 August 2009

The McCulley/Cuppan Standards Development Process We Use with Our Clients

As I mentioned in a previous post, in our McCulley/Cuppan consulting work we find the prevalent standards used to determine the “success” of a document are largely driven by simple measures of accuracy, plus a passel of “home-brewed” concepts for document characteristics that amount to idiosyncratic ideas about what matters to the reader.

When you have 10 people reviewing a document, you will end up with at least 12 opinions about its quality (the incongruous number is intentional; sometimes a reviewer offers more than one opinion, and those opinions often conflict) and ways of describing quality that are all over the map. People use different terms to describe quality, and even when they use the same term, it is highly unlikely that they use the same definition for it. So the first problem faced in the review process is the vocabulary used to describe quality attributes in a document.

When writers and reviewers compose or edit text, they continually make decisions that concern semantics (the meaning their words convey) and syntax (the way the words are arranged and other structural elements of the document). However, writers and reviewers often base these decisions on assumptions that have not been tested with technically oriented adult readers or with complex, data-rich technical documents. Worse yet, many assumptions have never been tested at all to determine their validity. Thus, writers and reviewers order actions that may not in fact have the expected effects on a reader's performance (we know this is certainly true with one very important reading audience for pharmaceutical and medical device companies, the regulatory agency, such as FDA). So the second problem faced in the review process is to understand which document elements have a meaningful impact on semantics and where to focus time and attention on the syntactical elements of a document.

The first thing we do with a client is examine the terms used formally (such as in guidance documents) and informally (such as in review comments on documents) to describe quality. This gives us a sense of how the organization views quality and how sophisticated it may be in trying to create a common platform that describes document quality for the organization.

The second thing we do is to provide clients with the terms McCulley/Cuppan uses to describe document quality and why the concepts underlying these terms are extremely important to help determine document communication quality. A very important consideration is that the terms should speak the user’s language, with words, phrases and concepts familiar to the user, rather than system-oriented terms.

We spend a huge amount of time talking about semantics. The term "semantics" refers to the study of how language conveys meaning. Using a broad definition of semantics, we help our clients learn how to focus on different features of a document: word choice, the position of information within sections, paragraphs, and the document as a whole, the relative importance of ideas, and the visual representation of data.

Then we work with a client team to create and vet working definitions for the various quality standards.

We then roll out the standards in a workshop setting and show people how the standards are applied to the types of documents they produce now and will produce in the future.


Originally published on our Knowledge Management blog

16 July 2009

How Do You Measure Communication Quality?

One of the truisms we see in our McCulley/Cuppan consulting work is that rounds of document review tend to go on until the point when the document must be sent somewhere. That's why we say that in the pharmaceutical industry, the opportunities for making changes to a document are virtually limitless. The problem driving this situation is that most people involved with the authoring and reviewing process do not have good markers to inform them of the overall communication quality of a document. Without good markers they are left to use really poor ones: grammatical soundness, how many people have reviewed the document, how many rounds of review it has been through, and how many comments were leveled at the text and data in the document. Unfortunately, grammatical soundness has only a weak correlation with the communication quality of a document, and the other three markers have no correlation whatsoever.

To paraphrase Steve Jong in his paper You Get What You Measure—So Measure Quality: "if you don't measure it, you'll never get it." This is so true of document communication quality. In order to measure communication quality you have to employ meaningful markers. We find our clients typically employ only two markers that are useful: accuracy and compliance. Unfortunately, neither of these does much to measure the quality of argument, soundness of logic, or overall usability of a document for the end user. There are some useful markers to consider for measuring these document attributes. More on these markers in my next post.


Originally published on our Knowledge Management blog