20 April 2009

Improving the Practice of Document Review

Document reviews should be used as a tool to build quality into research and technical reports. In most handbooks for professional writers, review is recommended for clear and simple reasons: it is intended to identify problems and suggest improvements that enable an organization to produce documents that accomplish its goals and meet readers’ needs. It is true that science creates devices and drugs, but it is the documents that secure product approval and registration from the FDA and other regulatory agencies.

To create high-quality documents in the most efficient manner, reviews must take place at various stages of document development. No matter the stage, all reviews should be strategic—that is, they need to address the fundamental question of whether the document makes the right argument about the data described in the report. Reviewers should ask if the document stands up to challenge and fully justifies its conclusions. They should ask whether the reader is given enough context to understand the positions expressed in the document.

Review allows subject matter experts and upper management to add information that may not be available to authors. Review offers an opportunity for building consensus across functions within an organization.

Review is a process of evaluation that focuses on the functional elements of a document (what the document is supposed to ‘do’ or supposed to ‘say’). We can characterize the major purposes of review in descending order of importance as follows:
  • Attending to purpose: confirming that the content matches the purpose of the document, that the logic of the arguments is complete and relevant, and that the organization of the content will readily support what the reader wants to do with the document.
  • Attending to audience: confirming precision of the discussion (semantics), sufficient contextual information, and ease of navigation.
  • Attending to compliance: confirming accuracy and completeness of content, consistency of style, and well-structured grammar.
Successful collaborative document development and review practices always include the following attributes:
  1. Involvement of critical stakeholders early, defining their roles and responsibilities.
  2. Articulation of the targeted scope, purpose(s), and message(s) for the final document.
  3. Shared quality standards for the final document product and formally described procedural agendas for the who, what, when, and why of review.
  4. Identification and planning of the phases of review and their associated priorities.

Originally published on our Knowledge Management blog

01 April 2009

How Do People At FDA Read Documents On-screen?

With the substantial shift from paper to electronic submissions to FDA, it is useful to pause and consider how someone actually reads a large, complex technical document on screen.

Research from reading theory, human factors studies, cognitive psychology, and technical communication has helped us at McCulley/Cuppan develop a set of assumptions regarding reading behaviors for online texts. We are unaware of any studies looking directly at the ways in which regulatory reviewers approach electronic texts. However, there is research that has examined the ways in which readers use electronic documents. From these studies and our FDA reviewer interview data, we have constructed a set of assumptions about the online reading behaviors of readers in regulatory agencies like FDA.

Research suggests that users do not respond passively to a system but instead have goals and expectations from which they make inferences and predictions (Marchionini). While users possess mental models, abilities, and preferences that are unique, the regulatory reader is typically very familiar with the structure of regulatory submission documents and the sub-genres that constitute a drug filing. As a result of this highly developed genre knowledge, the regulatory readers of electronic submissions are likely to share a similar schema and engage in particular tasks when reading documents. These reading tasks include constant questioning of the drug and device sponsors' methods and results.

The challenge for sponsors putting together electronic submissions is how best to satisfy the reader's expectations, expectations which are based on the structure and organization of paper documents. Radically redesigning a submission document to take advantage of online text would most likely undermine many of the expectations that regulatory readers rely on as they work. In addition, redesigning a submission document places sponsors at risk of being seen as "nonconforming" to agency standards for documents and data set designs.

Because users generally interpret unfamiliar concepts "in terms of existing procedures or schemata" (Hammond), the key for electronic submissions is to adhere to readers' expectations by making clear that the necessary elements of the submission are included and structured logically. Once readers recognize and accept the structure of information in an electronic submission document, they can then take advantage of hypertext features that make review tasks easier and less time consuming than they are with paper submissions. Among these features should be an organizational structure and text format that enable readers to see clearly a hierarchy of information, find specific information quickly, and annotate and store information.

Several studies have compared the reading practices of users viewing paper and electronic texts. Results from these studies (Leventhal et al., 1993; Smith and Savory, 1989; Gould et al., 1987) indicate that reading information on a screen often takes more time than reading on paper, leading to a performance deficit of "between 20 and 30% when reading from screen" (Dillon, 1994).

However, as studies of the SuperBook project indicate, when an electronic text is designed to anticipate users' needs and reading strategies, time on task can actually be reduced and search accuracy improved. These studies of the SuperBook project, conceptualized by the cognitive sciences research group at Bellcore during the mid-1980s, compared the usability of an electronic text with that of a print textbook. Although the first experiment indicated that speed and accuracy were no better for the electronic SuperBook browser than for printed text, later experiments that used a revised version of SuperBook indicated an advantage in both speed and search accuracy for the electronic text over the print text. In particular, the revised version reduced search response times and modified search techniques, incorporated advance organizers such as displaying the Table of Contents continuously, and revised the placement of graphics so that they did not overlay the Table of Contents window as they had previously. These revisions resulted in a 25% advantage in both reading speed and accuracy for the electronic text over the print version (Landauer et al., 1993).

Data from Dillon (1994) indicate that readers can locate information just as quickly in electronic texts as they can in paper, as long as the reader is given an accurate model of the information structure and is not required to read dense and lengthy portions of text, since lengthy sections on screen can lead to speed deficits.

The impact of information structure on reading speed is investigated in Hornbaek and Frokjaer’s 2001 study. The researchers studied three different interfaces for electronic documents to determine which design facilitated the fastest reading speed. Results indicated that subjects using fisheye interfaces read documents faster than they did with linear and overview+detail interfaces. The authors recommend fisheye interfaces, which reduce navigation time by distorting the document so that the “important parts” of a document (i.e. first and last paragraphs, headings, topic sentences) are immediately visible to readers and the rest of the information in a document can be expanded and viewed with the click of the mouse. According to Hornbaek and Frokjaer, this interface encourages readers to employ “an overview oriented reading style.” 

In addition to the impact of different interfaces for electronic documents, the spacing, size, and style of fonts may also affect time on task. Kruk and Muter's 1984 study demonstrated how the spacing of text on the screen affects reading time: single-spaced text was read slightly more than 10% more slowly than double-spaced text.

Research also shows that users may have problems reading serif fonts. Several studies (Bernard et al., 2001; Schriver, 1997; Hartley and Rooum, 1983) have investigated the effects of font style on reading efficiency and legibility and produced inconclusive findings. However, as Williams (2000) explains, users may have trouble reading serif fonts on screen because they can appear "blocky and disproportionately large, especially when displayed in small type sizes or on low-resolution screens." Williams also notes that because of the distance from the eye to the screen and the fact that most users do not have perfect vision, font sizes should be no smaller than 12 points.

I’ll continue this discussion in another blog and talk about the impact of electronic submissions on regulatory reader time-on-task and comprehension. 


Originally published on our Knowledge Management blog