How important is language and writing style in a clinical study report? I was recently asked this question by a medical writer working for one of my McCulley/Cuppan clients. The writer is dealing with a team that seems to obsess over every word in every draft and the writer is looking for some help in how to address the situation.
Here is my response to the question:
You are asking about the lexical and syntactical elements of writing (the third element of writing is grammatical).
Lexical pertains to the words (vocabulary) of a language. In the context of clinical research, we need to talk about several applied lexicons of scientific phraseology that apply broadly to science and then narrowly to a specific therapeutic area. The most distinctive feature of any clinical study report is its specific scientific and technical prose. So language is very important in a CSR: precise wording avoids lexical ambiguity (which is why I so love statisticians and their demands for careful use of language when describing statistical observations) and allows the reader to derive the intended meaning.
My experience suggests that many people in pharma believe that by attending to syntactical elements (style) they are eliminating ambiguity or improving clarity of message. Rarely is this the case.
You have heard me say before that style does not matter in the type of writing represented in clinical study reports submitted to regulatory authorities in the US and elsewhere.
My position is supported by current discourse theory. Discourse theory states that, as a rule in scientific writing, meaning is largely derived from the precise use of key scientific words, not how these words are strung together. It is the key words that create the meta-level knowledge of the report. Varying style does little to aid or impede comprehension.
What happens is that people chase and play around with the style of a document. Largely they are manipulating an advanced set of discourse markers specific to clinical science writing, or some subset specific to a therapeutic discipline. Discourse markers are the word elements that string together the key scientific words and help signal transitions within and across sentences; they are the elements that provide style. There are macro markers (those indicating overall organization) and micro markers (those functioning as fillers, indicating links between sentences, and so on). Comprehension studies show that manipulating discourse markers--that is, messing with style--in most instances does not influence reader comprehension. It is worth noting that manipulating macro markers does appear to have some impact on comprehension for non-native speakers of English (which is why it is worth using advance organizers to help with document readability).
So the net-net is: there is little fruit to be picked from messing with style in a clinical study report. Put review focus on the use and placement of key terms.
This is a bit of a non sequitur to the question, but a concept I'd like to share. To derive meaning from scientific text, readers rely on their prior knowledge and on cues provided by the key terms and data they encounter--or fail to find--in a sentence, paragraph, table, or section of a clinical study report. So what I'd really prefer to get people thinking about is the semantic elements of their documents. Semantics is fundamentally about encoding knowledge: how you as an author enable the reader to process your representation of knowledge in a meaningful way. Semantics is about how much interpretive space you give the reader by what you say and, equally important, by what you do not say. Of course, you cannot get to the point of thinking about semantics unless you see clinical study reports as something more than a warehouse for data.
The McCulley/Cuppan Blog on Tools and Strategies for Improving Quality of Knowledge Management and Communication in the Life Sciences.
11 June 2009
Unproductive Review Practices: Why They're Still Around Even Though People Know Better
In "Why the Focus on Review Practices?," my colleague Jessica Mahajan highlights an observation from our McCulley/Cuppan consulting: reviewers, who are expected to enhance a document's quality and effectiveness, tend to direct much of their review effort to mechanical and stylistic elements (syntax, word choice, punctuation, grammar) at the expense of the intellectual work the document is supposed to do. A previous post of mine, "How Do We Get People to Apply Improved Work Practices?," explores ways to motivate change when change would significantly benefit both individual and organization. I have a theory about why we continually see subject matter expertise applied to the task of copy-editing during review, and why that practice is so hard to change. The theory is built around how we:
- Learn to write.
- Learn to review.
- Ask for review.
How We Learn to Write
Think about how you learned to write. If your experience was like that of the kids I visit in middle and high schools, your teachers tried to encourage you to write in a context and with a purpose. Unfortunately, they likely ended up using rubrics that are all about structure, word usage, and typography--rubrics with little regard for how well the writing fulfilled its purpose and satisfied readers' needs. One rubric I saw recently (I collect these things, and this one was typical) graded students on everything but content: as long as the writing followed the specified form, it got top marks (an A in this instance). A really interesting paper on black holes and the physics behind them (which may have been beyond some readers, but which worked hard to make the ideas accessible to a varied audience) got a B because of errors in typography. More popular are the five or six equally weighted measures called writing traits. Students are given points for:
- Ideas and Content
- Organization
- Voice
- Word Choice
- Sentence Fluency
- Conventions
This is how we learn to write--the texts we create are based on finished models that rarely tell us what makes them good models. Further, we are never given insight into the process of crafting and iteratively refining a text, that is, a model for what should be in place in a first draft versus a second draft. In most learning environments, documents are judged on how well they adhere to rules constructed for talking about how language should work.
Unfortunately, this approach does not change when we get to higher education. Some courses in technical degree programs have a writing component, but if you test out of freshman composition (where the previous description is still pretty accurate), then at best you may get one required course in technical communication. That course might be taught by a creative writing student who is mostly interested in finishing an MFA and thinks that learning to "write" a well-organized memo (form) should be one of the four major projects for the course. Because creative writing and technical writing don't have much to offer each other, right?
There are exceptions to this scenario, but unfortunately the above is probably a pretty good description of the rule. More to the point, grading is hard (especially when there are no good rules for anything other than grammar and punctuation), and most students are primarily interested in receiving top marks. So students simply want to know what they have to do to get the top mark. The model is a finished document that is good enough--in terms of content, organization, voice, word choice, sentence fluency, and conventions--to get a top mark. Likely the target given to students addresses only five of the six attributes; the one left out is content. So student focus and energy go into fulfilling those five attributes.
As an informative aside, when departments ask for help in training their people (students or employees), the most frequent initial request is to "just help them with their grammar"--this despite the fact that we know that when we focus on grammar, the quality of writing, measured by what the writing does, goes down.

Learning to write in the workplace is slightly different. Here we're given a finished document and told to "make it look like this". The document is complete, but it bears no annotation or associated guidance to suggest what attributes make it superior and worthy of the status of a "model". In a worst-case scenario, someone's internal dialog might go something like this: "I'll look pretty stupid if I ask my boss what it is about this document that makes it a good model--so I won't. I mean, it is obvious, right? And besides, I (choose all that apply: a. got A's in English, b. have a PhD, c. have written technical reports before, d. have all the data in the report), so... I must be okay, right?"
In our McCulley/Cuppan consulting, we constantly see a model for constructing documents in which new documents are based on old ones. Authors endeavor to make new reports as complete as possible before asking anyone to give them a look. I can recall several instances where authors were told to write until they had nothing else to say, and only then would their supervisor be ready to look at the report. This approach to writing--modeling a preexisting document and making it as complete as possible before bringing in extra eyes to help--sets up a workplace dynamic that sabotages the potential for productive change.
How We Learn to Review
How we learn to review follows the model of how we learn to write. In school, students construct papers that respond to prompts and are graded. We spend our time learning to construct sentences that are grammatically correct, forgetting that people can get past a misspelled word or two, or a tense problem, if we have something to say. Often the only things teachers can use to distinguish one useful response to the prompt from another are the mechanical elements of a sentence. And they can't give everyone an A. That would be grade inflation, or worse!
Papers are returned to students with lots of blood (well, red ink, since lots of teachers still like those pens) identifying misspelled words, grammar errors, and organization problems. In other words, the student's work is "assessed", but not reviewed. And I can't blame the teachers--this is what they were trained to do, and helping students learn to communicate well via the written word involves a lot of reading (not fun or easy, I promise). Identifying the mechanical problems in a document is the easiest and fastest approach, which is a consideration when you have thirty or more papers to read in an evening. I have a colleague teaching so many sections that she's got 115 papers to read at one sitting!
The problem is compounded by the fact that in competitive societies we're taught not to collaborate. Rare is the teacher who has students collaborating on projects or written work, though thankfully this is changing. We learn not to share our answers with others ('cause that's cheating). What we practice in school is what we bring to the workplace, supplemented with observations and suggestions from the people who review with us, which help us construct new models. In terms of document review, we start with the models we got from our teachers: fix the typos, suggest alternative wording, and massage the format.
Since our colleagues use this model too, we stick with it. In other words, we do what is familiar. We also have to do something during the review; in the absence of more specific instructions, we have to let people know we put reasonable effort into the exercise. After all, review is an activity, and one way to measure the extent of our activity is to count up the total number of review remarks we have left on a document. The more remarks, the better job we did as a reviewer. So we turn our attention to verifying that the numbers in the report are accurate, and we make sure to "dot the i, cross the t, and don't forget that comma".
We are conditioned to be reactive reviewers--we respond to what is present in a document, not to what is missing. We are conditioned to operate on the concept of a finished document, no matter where the document sits in the drafting process. Even with a first draft, we start at page one and work straight through until we're finished--that is, finished with the document, out of time, or out of energy. We see this all the time in our assessments of review practice: there is a straight-line decline in the frequency of review remarks per page as you move through a document. We see Draft 1 report synopses and summaries overloaded with review remarks even though, in the body of the report, the Discussion section is only 30% complete and there is no Conclusion section yet.
Through conditioning in the workplace, we develop no sense of strategic review. The prevalent strategy is simply to get the review done so we can "get back to our day job". We often hold a reverential belief that all it takes to succeed with a scientific or technical report is to get the data right--that is, make sure the data are accurate. We are also conditioned to think that all you have to do is get the study design right; the rest does not matter much. That is, the report is merely a repository for data. So we are conditioned to discount the value of scientific reports, because constructing well-written, clear, concise, and informative documents takes time away from our "day job" of conducting science.
In other posts we've talked about the importance of review and the huge commitment organizations have made to it. You would think that, since review is so important, more time would be spent training people to become effective reviewers--particularly during their professional training. Yet we don't see this. We've not found a single academic program granting credentials in technical communication or medical writing that offers a course in review (as opposed to editing)--yet the complexity and difficulty of review would certainly warrant one.
Most reviewers learn to review on the job. How do we know? We've asked thousands of people working in organizations covering a broad spectrum of disciplines, and we've read others who've asked. Further, a quick survey shows that the most popular books in Technical and Professional Communication and Medical Writing devote little real estate to the topic of review. In a three-hundred-page text, we find less than 5% devoted to review. Yet review is certainly more than 5% of the process.
How We Ask for Review
When we analyze review practices and products for our clients we look at more than just the documents under review. We also assess how people communicate about review and the tools they use to facilitate review. Typically communication regarding the task of review is a simple request: "Please review this by such-and-such (a date)." We rarely find instructions from the author to help inform reviewers: "Please have a look at section x because I'm really having trouble explaining y." We'll post a longer description of this topic, but the point to be made here is that authors rarely help their reviewers with instructions/review requests that focus reviewers on what would help the document and authors advance their work.
Our assessment of review practices suggests that the collective review effort does little to improve a document's communication quality. It will likely improve the accuracy of the data and compliance with a template, and ensure that every sentence is grammatically correct. But the conveyance of messages and the logic of arguments may remain murky or even suspect. Given everything I have said up to this point, why would you expect a different outcome?
The Theory
So here is the theory: expensive subject matter experts are reduced to copy-editing because that is what they know best (they come into the professional world with plenty of conditioning from the academy), it is familiar, it is what everybody else does, and their organization hasn't offered them a better alternative. Further, the situation won't get any better, because even if (when) they find a better alternative, they're too busy to change (they have their day job to do, and besides, they have too many documents to review to be fettered by revising their ways of working), and even if they wanted to change things, the organization's leadership wouldn't buy in.
Fortunately, much can be done to really 'move the needle on the meter' and improve individual and organizational ways of working when it comes to the task of document review. I know this to be the case from the consulting and training work we do as we've helped a number of organizations improve review practices and document quality.
Originally published on our Knowledge Management blog
06 June 2009
How Do We Get People to Apply Improved Work Practices?
I am always thinking about how we can get people to really apply improved work practices. In our work situation, people readily accept the ideas and principles we share regarding ways of working on knowledge elicitation and document development, but then I see people routinely retreat to old ways of working. Lots can be done to help counter these tendencies.
A couple ideas come from a great post I read this morning by Elizabeth Harrin on the blog Project Management Tips. Her post addresses the very interesting question of "how do you make lessons-learned stick?" Two points in her article stand out for me:
- Do not wait until the end of the project to review lessons learned.
- Make it difficult for people to do things the old way.
We get involved in client projects that carry on for months or years--frankly, it is far too late to wait until the project is over to collect lessons learned. I will endeavor to apply Point One in all of my project work with clients. I think we take this approach at McCulley/Cuppan on projects, but not in a consistently applied, formal way.
Point Two is a really interesting concept shared by Ms. Harrin, but one that takes steely determination and solid support from the highest levels of the organization. In my work, I find the underlying issue behind this point to be that people just hate leaving their comfort zones, even when they recognize their personal ways of working are suboptimal. Point Two is all about making a retreat to the recesses of those comfort zones uncomfortable. I recall a client a number of years ago who wanted the organization to stop printing and filing paper versions of reports and memos. So they did two things: took away the personal file cabinets and limited access to departmental printers. Initially the situation was like being aboard the famous British naval vessel HMS Bounty, but everyone quickly learned to reinvent their personal ways of working; those who could not tolerate the demanded change in habits soon left the organization.
These two points connect back to another important concept I picked up from a post by Wendy Wickham on the blog In the Middle of the Curve. She suggests that business training typically is centered on developing knowledge (she refers to it as brainpower) and takes little time to actually help people understand how to manage it. This is so true. It is one thing to intellectualize ideas and concepts; the challenge is to actually apply them. Pausing mid-stream in a project can help, and changing work environments to prevent a retreat to old habits can too (though this approach comes pre-loaded with the potential for bad morale and other social issues that can sink a project very fast).
Originally published on our Knowledge Management blog
09 May 2009
Knowledge Management and Negotiating Meaning in Technical and Scientific Reports
Came across this blog posting by Steve Barth on the topic of working definitions for the term Knowledge Management. Take a look.
Barth talks about the list of working definitions for knowledge management compiled by Ray Sims. I absolutely agree with Barth that it is much more effective to have multiple definitions for a concept than just a single idea. When Philip and I talk about knowledge management, our discussions often begin with the qualifier "it depends on..."
A quote in Barth's piece I want to share with you here:
I prefer multiple definitions because clarity and agreement cannot be assumed. Meaning must be negotiated and confirmed. Even if it's only a temporary agreement or working definition for the task at hand, that represents a position triangulated from the multiple points of view of all participants. You'll have a much better commitment to success if consensus was earned rather than enforced.
Meaning must be negotiated and confirmed. This is an important concept not just for developing a working definition for a term like knowledge management; it is also an approach critical to the conveyance of knowledge in scientific and technical reports. We see the process of negotiated meaning played out, sometimes really well and other times quite poorly, in our daily work in the pharmaceutical industry. Those who buy into the notion that technical and research reports do more than just report data will endeavor to ensure their documents convey a clear sense of meaning. They will work during drafting and review sessions to refine meaning and not just refine grammar and presentation.
Barth's comment that the best outcomes occur through the process of consensus and not enforcement is important to keep in mind. My interest in this approach is to help ensure that objective assessment of a body of work is seen through relatively clear eyes and unfettered minds. See what Dave Snowden has to say about our inability to maintain clear, focused objectivity. In particular, I see Snowden's "self-confirmation" effect interfere with the process of constructing meaning in documents. Snowden describes self-confirmation as follows:
Self-confirmation is our ignoring any evidence that disturbs our pre-judgements or hypotheses, and rationalization means that we only search for data that will support those pre-judgements.
In our McCulley/Cuppan work we have seen far too many times discussions on what to "say" in a document driven by personality, where the loudest or longest exhortations, or often the pay grade of the commentator, decide what message or messages will or will not appear in a document. Certainly not a best-practice approach for effective management and representation of knowledge in business documents.
Originally published on our Knowledge Management blog
31 January 2009
Just What Do We Mean by Collaborative vs Cooperative?
I spent considerable time the past two weeks working with three different clients developing three different business platforms with three different work cultures in three different geographic locations. Yet they all do their development work, especially mission-critical development work and associated documentation (such as regulatory submissions), in essentially the same way. All three companies engage in cooperative work with very little collaboration occurring at any time in the process. Generally, when collaboration does happen, it is very late in the process and occurs only during meetings to address review comments on documents. That is it for collaboration: a thin veneer very late in the process.
In all three instances, the people I work with believe they are indeed operating in very collaborative work environments. They sit in dismay listening to my characterization of their work practices as described above. But after we peel the onion, they begin to appreciate the distinction between working merely at the level of cooperation and working at the level of collaboration. My starting point with clients is to provide an effective working definition of what we at McCulley/Cuppan mean by collaborative versus cooperative work practices.
As a starting point for any discussion, we have to examine the fundamental difference between collaboration and cooperation. The line of demarcation is the level of formality in the relationships between departments or stakeholders in the conduct of work to support a common goal, which in the pharmaceutical industry is to bring a new drug or line extension to market. I tell my clients that collaboration involves these departments or stakeholders coming together and fundamentally changing their individual approaches to sharing resources and responsibilities as well as their ways of working and information sharing. Cooperation, on the other hand, is where departments or stakeholders maintain their separate mandates and responsibilities, do most work as they see appropriate (and generally in isolation from others on the project), but may agree to do some work together or present work for review by other stakeholders or departments in order to meet a common goal.
To help drive further discussion on just what do we mean by collaborative versus cooperative, I suggest you read this David Eaves post discussing his perspective on cooperating versus collaborating.
Originally published on our Knowledge Management blog