This is another post from our archives, but it is pertinent to the document assessment work we've been doing this summer:
In "
Why the Focus on Review Practices?,"
my colleague Jessica Mahajan highlights the observation made in our
McCulley/Cuppan consulting that reviewers, who are expected to work
toward enhancing document quality to improve its effectiveness, tend to
direct much of their review effort to mechanical and stylistic
document elements (syntax, word choice, punctuation, grammar) at the
expense of the intellectual work the document is supposed to do. One of
my previous posts "
How Do We Get People to Apply Improved Work Practices?"
explores ways to motivate change when change would provide significant
benefits to both the individual and the organization. Building on that, I have a
theory about why we continually see subject matter expertise for review
applied to the task of copy-editing, and why that practice is so hard
to change. The theory is built around how we:
- Learn to write.
- Learn to review.
- Ask for review.
How We Learn to Write
Think
about how you learned to write. If your experience was like that of
kids I visit in middle and high schools, then your teachers tried to
encourage you to write in a context and with a purpose. Unfortunately,
they likely ended up using rubrics that are all about structure, word
usage, and typography: rubrics with little regard for how well the writing fulfilled its purpose or satisfied readers' needs. A rubric I saw
recently (I collect these things, and this one was typical) graded
students on everything but content, and as long as the writing followed
the specified form it got top marks (an A in this instance). A really
interesting paper on black holes and the physics behind them (physics that may have been beyond some readers, though the writer worked hard to make the ideas accessible to a varied audience) got a B because of errors in typography. More popular are the five or six equally weighted measures
called writing traits. Students are given points for:
- Ideas and Content
- Organization
- Voice
- Word Choice
- Sentence Fluency
- Conventions
Just
look at this: how can ideas and content carry no more weight than word choice and sentence fluency? When our ideas are
given so little weight (~17% here), is it any wonder that people
attend to form over function?
This is how we learn to
write--texts we create are based on finished models that rarely tell us
what makes them good models. Further, we are never given insight into the process of crafting and iteratively refining a text, that is, a model for what should be in place in a first draft versus a
second draft. In most learning environments, documents are judged based
on how well they adhere to rules constructed for talking about how
language should work.
Unfortunately, this approach does
not change when we get into higher education. Some courses in
technical degree programs have a writing component. But if you test out
of freshman composition (where the previous description is still
pretty accurate) then at best you may get one required course in
technical communication. This might be taught by a creative writing
student who is mostly interested in finishing their MFA and thinks that
learning to "write" a well-organized memo (form) should be one of the
four major projects students will prepare for the course. Because
creative writing and technical writing don't have much to offer each
other, right?
There are exceptions to this scenario,
but unfortunately the above is probably a pretty good description of
the rule. More to the point, grading is hard (especially when there are
not good rules for anything other than grammar and punctuation), and
most students are primarily interested in receiving top marks. So they simply want to know what they have to do to earn one.
The model is a finished document that is good enough--in terms of
content, organization, voice, word choice, sentence fluency, and
conventions--to get a top mark. Likely the target given to the students
addresses only five of the aforementioned six attributes. The one left
out is content. So students' focus and energy go into fulfilling those five attributes.
As an informative aside,
when departments ask for help in training their people (students or
employees), the most frequent initial request is to "just help them with their grammar." This despite the fact that we know that when we focus on grammar, the quality of the writing, measured by what the writing does, goes down.
Learning to write in the
workplace is slightly different. Here we're given a finished document,
and told to 'make it look like this'. The document is complete, but
bears no annotation or associated guidance to suggest which attributes make it superior and worthy of the status of a 'model'.
In a worst case scenario, someone's internal dialog might go something
like this, "I'll look pretty stupid if I ask my boss what it is about
this document that makes it a good model--so I won't. I mean, it is
obvious, right? And besides, I (choose all that apply: a. got A's in
English, b. have a PhD, c. have written technical reports before, d. have all the data in the report) so... I must be okay, right?"
In
our McCulley/Cuppan consulting, we constantly see the same model for constructing documents: new documents are based on old ones.
Authors endeavor to make new reports as complete as possible before
asking anyone to give them a look. I can recall several instances where
authors were told to write until they had nothing else to say and then
their supervisor would be ready to look at the report. This approach to
writing--to model a preexisting document and to make it as complete as
possible before bringing extra eyes in to help--sets up a workplace
dynamic that sabotages the potential for productive change.
How We Learn to Review
How
we learn to review follows the model of how we learn to write. In
school, students construct papers that respond to prompts and are
graded. We spend our time learning how to construct sentences that are
grammatically correct, forgetting that people can get over a misspelled
word or two or a tense problem if we have something to say. Often the
only things teachers can use to distinguish one useful response to the
prompt from another are the mechanical elements of a sentence. And they
can't give everyone an A. That would be grade inflation, or worse!
Papers
are returned to students with lots of blood (well, red ink since lots
of teachers still like those pens) identifying misspelled words, grammar errors, and organization problems. In other words, the students' work is 'assessed', but not reviewed. And I can't blame the
teachers--this is what they were trained to do, and helping students learn to communicate effectively via the written word involves a
lot of reading (not fun or easy, I promise). Identifying the mechanical
problems in a document is easiest and fastest, which is a
consideration when you have thirty or more papers to read in an
evening. I have a colleague teaching so many sections that she's got
115 papers to read at one sitting!
The problem is compounded by
the fact that in competitive societies we're taught not to collaborate.
Rare is the teacher who has students collaborating on projects or
written work, though thankfully, this is changing. We learn not to
share our answers with others ('cause that's cheating). What we
practice in school is what we bring to the workplace, supplemented with
observations and suggestions from the people who review with us, which help us construct new models. In terms of document review, we start with
the models we got from our teachers: fix the typos, suggest alternative
wording, and massage the format.
Since our colleagues
use this model too, we stick with it. In other words, we do what is
familiar. We also have to do something during the review. In the
absence of more specific instructions we have to let people know we put
reasonable effort into the review exercise. After all, review is an
activity. One of the ways to measure the extent of our activity is to
count up the total number of review remarks we have left on a document.
The more remarks, the better job we did as a reviewer. So we turn our
attention to verifying that the numbers in the report are accurate and making sure to 'dot the i, cross the t, and don't forget that comma'.
We
are conditioned to be reactive reviewers--we respond to what is
present in a document, not what is missing. We are conditioned to
operate on the concept of a finished document, no matter where the
document sits in the drafting process. Even with a first draft we start at
page one and we work straight through the document until we're
finished--that is, finished with the document, out of time, or out of
energy. We see this all the time in our assessment of review practice.
There is a straight-line decline in the frequency of review remarks per
page as you move through the document. We see Draft 1 report synopses
and summaries overloaded with review remarks even though in the body of
the report the Discussion Section is only 30% completed and there is no
Conclusion Section yet.
Through conditioning in the
workplace, we have no sense of strategic review. The prevalent strategy
is to simply get the review done so we can 'get back to our day job'.
We often have a reverential belief that all it takes to succeed with a
scientific or technical report is to just get the data right. That is,
make sure the data are accurate. We are also conditioned to think that
all you have to do is get the study design right--the rest does not
matter much. That is, the report is merely a repository for data. So
we are conditioned to discount the value of scientific reports because
constructing well-written, clear, concise, and informative documents
takes time away from our 'day job' of conducting science.
In
other posts we've talked about the importance of review and the huge
commitment organizations have made to review. You would think that,
since it is so important, more time would be spent on training people
to become more effective reviewers--particularly during their
professional training. Yet we don't see this. We've not found a single
academic program offering credentials in technical communication or
medical writing that offers a course in review (as opposed to
editing)--yet the complexity and difficulty of review would certainly
warrant one.
Most reviewers learn to review on the job.
How do we know? We've asked thousands of people working in various organizations
covering a broad spectrum of disciplines, and we've read others who've
asked. Further, a quick survey of the most popular books in Technical and Professional Communication and Medical Writing shows that they devote little real estate to the topic of review. In a three-hundred-page text, we find less than 5% devoted to review. Yet review is certainly more than 5% of
the process.
How We Ask for Review
When
we analyze review practices and products for our clients we look at
more than just the documents under review. We also assess how people
communicate about review and the tools they use to facilitate review.
Typically, communication regarding the task of review is a simple
request: "Please review this by such-and-such (a date)." We rarely
find instructions from the author to help inform reviewers: "Please
have a look at section x because I'm really having trouble explaining
y." We'll post a longer description of this topic, but the point to be
made here is that authors rarely help their reviewers with
instructions/review requests that focus reviewers on what would help
the document and authors advance their work.
Our
assessment of review practices suggests that the collective review
effort does little to improve the document's communication quality. It
will likely improve the accuracy of the data, compliance with a template, and the grammatical correctness of the sentences. But the
conveyance of messages and the logic of arguments may remain murky or
even suspect. Given everything I have said up to this point, why would
you expect a different outcome than this?
The Theory
So
here is the theory: Expensive subject matter experts are reduced to
copy-editing because that is what they know best (they come into the
professional box with plenty of conditioning from the academy), it is
familiar, it is what everybody else does, and their organization hasn't
offered them a better alternative. Further, the situation won't get
any better because even if (when) they find a better alternative, they're too busy to change (they have their day job to do, and besides, they have too many documents to review to be fettered by revising their ways of working), and even if they wanted to change things, the
organization's leadership wouldn't buy into it.
Fortunately,
much can be done to really 'move the needle on the meter' and improve
individual and organizational ways of working when it comes to the task
of document review. I know this to be the case from the consulting and
training work we do, as we've helped a number of organizations improve their review practices and document quality.