30 March 2008

Bad Advice on Peer Review of Grant Proposals

The blog Entertaining Research posted some comments on the question I posed regarding formal training for peer review. One of the pieces cited was Alan Jay Smith's "The Task of the Referee." I think this paper is a good reference point for discussing peer review of journal article submissions. However, it is not a good reference on the task of assessing grant proposals. I find Smith encouraging some of the very elements that contribute to the poor review performance and variable outcomes described in two studies.

The first is by Nancy Mayo et al., presented at the 2005 JAMA Conference on Peer Review as "Peering at Peer Review: Harnessing the Collective Wisdom to Arrive at Funding Decisions About Grant Applications" and published in the Journal of Clinical Epidemiology as "Peering at peer review revealed high degree of chance associated with funding of grant applications." Mayo and her co-authors conclude that there is a lack of concordance among reviewers on the relative merits of individual research grants, which means funding outcomes risk depending on who is assigned as a reviewer rather than on the merits of the project.

The second is Cole, Cole, and Simon, "Chance and consensus in peer review," Science, 1981, Vol. 214, Issue 4523, pp. 881-886. The paper covers an experiment in which 150 proposals submitted to the National Science Foundation were evaluated independently by a new set of reviewers. The results showed little correlation between the sets of reviewers and suggest that getting a research grant depends to a significant extent on chance. The implication is that the method of grant proposal assessment is highly idiosyncratic.

In his paper, Smith suggests that subjective assessment of grant proposals is acceptable: judge the merit of future work based on an investigator's prior output, even if the proposal is sloppy or lacks detail, and judge the merit of a newcomer's proposed work based on where he or she was educated. Here is a quote from his paper:
A major difference between a research proposal and a paper is that a proposal is speculative, so you must evaluate what is likely to result. Therefore, when you evaluate a proposal by a well-known investigator, a substantial fraction of that evaluation should depend on the investigator's reputation. People with a consistent history of good research will probably do good work, no matter how sloppy or brief their proposal. People with a consistent history of low-quality research will probably continue in the same manner, no matter how exciting the proposal, how voluminous their research, or how hot the topic. However, you must also consider the possibility that a well-regarded researcher may propose poor research or that a researcher noted for poor-quality work has decided to do better work. It is important that you do not discriminate against newcomers who have no reputation, either good or bad. In this case, you must rely much more heavily on the text of the proposal and such information as the investigator's PhD institution and dissertation, academic record, host institution, and comments by his or her advisor or others.
What is needed is a well-characterized set of standards, with an associated set of criteria, for judging the merits of research proposals. All assessors would then base their decisions within the framework of those standards and make their arguments regarding merit based on the criteria.
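
To make this concrete, here is a minimal sketch in Python of what such a framework could look like. It is purely illustrative: the criterion names, the 1-5 scale, and the validation rules are my own assumptions, not any agency's actual rubric. What it encodes is the point above: an assessment is accepted only if it addresses every criterion in the shared framework and supports every score with an argument.

    from dataclasses import dataclass

    # The shared framework: every assessor scores the same criteria.
    # Criterion names are invented for demonstration.
    CRITERIA = {
        "significance": "Importance of the problem to the field",
        "approach": "Soundness and feasibility of the proposed methods",
        "innovation": "Novelty relative to the current state of the art",
    }

    @dataclass
    class CriterionScore:
        score: int            # 1 (poor) through 5 (excellent)
        justification: str    # the assessor's argument for the score

    def assess(proposal_id, scores):
        """Accept an assessment only if it stays within the framework:
        every criterion addressed, every score in range and argued.
        Returns the mean criterion score as the overall merit rating."""
        missing = set(CRITERIA) - set(scores)
        if missing:
            raise ValueError(f"{proposal_id}: criteria not addressed: {sorted(missing)}")
        extra = set(scores) - set(CRITERIA)
        if extra:
            raise ValueError(f"{proposal_id}: outside the framework: {sorted(extra)}")
        for name, cs in scores.items():
            if not 1 <= cs.score <= 5:
                raise ValueError(f"{proposal_id}: score for '{name}' must be 1-5")
            if not cs.justification.strip():
                raise ValueError(f"{proposal_id}: no argument given for '{name}'")
        return sum(cs.score for cs in scores.values()) / len(CRITERIA)

    # Example: the assessor must engage with each criterion explicitly.
    merit = assess("P-042", {
        "significance": CriterionScore(4, "Targets a recognized gap in the field."),
        "approach":     CriterionScore(3, "Methods are sound; the timeline is tight."),
        "innovation":   CriterionScore(4, "Extends prior work in a novel direction."),
    })
    print(f"P-042 merit: {merit:.2f}")   # -> P-042 merit: 3.67

Note that the investigator's reputation appears nowhere in the framework; merit must be argued criterion by criterion.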


Originally published on our Knowledge Management blog
