Date of Conferral

1-1-2010

Degree

Ph.D.

School

Management

Advisor

Aridaman Jain

Abstract

Individuals who screen research grant applications often select candidates on the basis of a few key parameters; success or failure can be reduced to a series of peer-reviewed Likert scores on as few as four criteria: risk, relevance, return, and reasonableness. Despite the vital impact these assessments have upon sponsors, researchers, and society in general as a beneficiary of the research, there is little empirical research into the peer-review process. The purpose of this study was to investigate how reviewers evaluate reasonableness and how the process can be modeled in a decision support system. The research questions addressed both the relationship between an individual's estimates of reasonableness and the indicators of scope, resources, cost, and schedule, and the performance of several cognitive models as predictors of reasonableness. Building upon Brunswik's theory of probabilistic functionalism, a survey methodology was used to implement a policy-capturing exercise that yielded a quantitative baseline of reasonableness estimates. The subsequent data analysis addressed the predictive performance of six cognitive models as measured by the mean square deviation between each model's predictions and the data. A novel mapping approach developed by von Helversen and Rieskamp, a fuzzy logic model, and an exemplar model were found to outperform classic linear regression. A neural network model and the QuickEst heuristic model did not perform as well as linear regression. This information can be used in a decision support system to improve the reliability and validity of future research assessments. The positive social impact of this work would be a more efficient allocation and prioritization of increasingly scarce research funds in areas such as the social, psychological, medical, pharmaceutical, and engineering sciences.
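For reference, the mean square deviation used to rank the six cognitive models follows the standard definition; the notation below is a reconstruction based on that convention, since the abstract does not reproduce the dissertation's exact formula:

$\mathrm{MSD} = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2$

where $\hat{y}_i$ is a model's predicted reasonableness score for case $i$, $y_i$ is the corresponding observed Likert rating from the policy-capturing exercise, and $n$ is the number of cases; lower values indicate better predictive performance.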
