01 April 2009

How Do People At FDA Read Documents On-screen?

With the substantial shift from submitting paper documents to submitting electronic documents to FDA, it is worth pausing to consider how a reviewer actually reads a large, complex technical document on screen.

Research from reading theory, human factors studies, cognitive psychology, and technical communication has helped us at McCulley/Cuppan develop a set of assumptions about reading behaviors for online texts. We are unaware of any studies that look directly at how regulatory reviewers approach electronic texts. However, research has examined how readers use electronic documents in general. From these studies and our FDA reviewer interview data, we have constructed a set of assumptions about the online reading behaviors of readers in regulatory agencies like FDA. 

Research suggests that users do not respond passively to a system but instead have goals and expectations from which they make inferences and predictions (Marchionini). While users possess mental models, abilities, and preferences that are unique, the regulatory reader is largely very familiar with the structure of regulatory submission documents and the sub-genres that constitute a drug filing. As a result of this highly developed genre knowledge, the regulatory readers of electronic submissions are likely to share a similar schema and engage in particular tasks when reading documents. These reading tasks include constant questioning of the drug and device sponsors’ methods and results. 

The challenge for sponsors assembling electronic submissions is how best to satisfy readers’ expectations, which are based on the structure and organization of paper documents. Radically redesigning a submission document to take advantage of online text would most likely undermine many of the expectations regulatory readers rely on as they work. In addition, redesigning a submission document places sponsors at risk of being seen as “nonconforming” to agency standards for document and data set design.

Because users generally interpret unfamiliar concepts “in terms of existing procedures or schemata” (Hammond), the key for electronic submissions is to meet readers’ expectations by making clear that the necessary elements of the submission are included and structured logically. Once readers recognize and accept the structure of information in an electronic submission, they can take advantage of hypertext features that make review tasks easier and less time consuming than they are with paper submissions. These features should include an organizational structure and text format that enable readers to see a clear hierarchy of information, find specific information quickly, and annotate and store information. 

Several studies have compared reading practices of users viewing paper and electronic texts. Results from these studies (Leventhal et al. 1993, Smith and Savory 1989, Gould et al., 1987) indicate that often reading information on a screen takes more time than on paper, leading to a performance deficit of “between 20 and 30% when reading from screen” (Dillon, 1994). 

However, as studies of the SuperBook project indicate, when an electronic text is designed to anticipate users’ needs and reading strategies, time on task can actually be reduced and search accuracy improved. These studies of the SuperBook project, conceptualized by the cognitive sciences research group at Bellcore during the mid-1980s, evaluated the usability of an electronic text over a print textbook. Although the first experiment indicated that speed and accuracy were no better for the electronic SuperBook browser than printed text, later experiments that used a revised version of SuperBook indicated an advantage in both speed and search accuracy of the electronic text over the print text. In particular, the revised version reduced search response times and modified search techniques, incorporated advance organizers such as displaying the Table of Contents continuously, and revised the placement of graphics so that they did not overlay the Table of Contents window as they had previously. These revisions resulted in a 25% advantage in both reading speed and accuracy of the electronic text over the print version (Landauer et al., 1993). 

Data from Dillon (1994) indicate that readers can locate information just as quickly in electronic texts as in paper, provided the reader is given an accurate model of the information structure and is not required to read dense, lengthy portions of text, since long sections on screen can lead to speed deficits. 

The impact of information structure on reading speed was investigated in Hornbaek and Frokjaer’s 2001 study. The researchers compared three different interfaces for electronic documents to determine which design supported the fastest reading. Results indicated that subjects using fisheye interfaces read documents faster than they did with linear and overview+detail interfaces. The authors recommend fisheye interfaces, which reduce navigation time by distorting the document so that the “important parts” (e.g., first and last paragraphs, headings, topic sentences) are immediately visible, while the rest of the document can be expanded and viewed with a mouse click. According to Hornbaek and Frokjaer, this interface encourages readers to employ “an overview oriented reading style.” 

In addition to interface design, the spacing, size, and style of type may also affect time on task. Kruk and Muter’s 1984 study showed how the spacing of text on screen affects reading time: single-spaced text was read slightly more than 10% more slowly than double-spaced text.
Research also suggests that users may have problems reading serif fonts on screen. Several studies (Bernard et al. 2001; Schriver 1997; Hartley and Rooum 1983) have investigated the effects of font style on reading efficiency and legibility, with inconclusive findings. However, as Williams (2000) explains, serif fonts can be hard to read on screen because they may appear “blocky and disproportionately large, especially when displayed in small type sizes or on low-resolution screens.” Williams also notes that, given the distance from eye to screen and the fact that most users do not have perfect vision, font sizes should be no smaller than 12 points.

I’ll continue this discussion in another blog and talk about the impact of electronic submissions on regulatory reader time-on-task and comprehension. 


Originally published on our Knowledge Management blog
