Publication Type

Conference Proceeding Article

Publication Date

4-2007

Abstract

A frequently encountered problem in decision making is the following review problem: review a large number of objects and select a small number of the best ones. An example is selecting conference papers from a large number of submissions. This problem involves two sub-problems: assigning reviewers to each object, and summarizing reviewers' scores into an overall score that supposedly reflects the quality of an object. In this paper, we address the score summarization sub-problem for the scenario where a small number of reviewers evaluate each object. Simply averaging the scores may not work, as even a single reviewer could influence the average significantly. We recognize that reviewers are not necessarily on an equal ground and propose the notion of "leniency" to model this difference among reviewers. Two insights underpin our approach: (1) the "leniency" of a reviewer depends on how s/he evaluates objects as well as on how other reviewers evaluate the same set of objects, and (2) the "leniency" of a reviewer and the "quality" of the objects evaluated exhibit a mutual dependency relationship. These insights motivate us to develop a model that solves for both "leniency" and "quality" simultaneously. We study the effectiveness of this model on a real-life dataset.
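The mutual dependency described in the abstract can be illustrated with a simple fixed-point iteration: object quality is estimated from leniency-corrected scores, and reviewer leniency is estimated from how far a reviewer's scores sit above current quality estimates. The update rules below are illustrative assumptions for the sake of the sketch, not the paper's actual model.

```python
# Hypothetical sketch of a leniency/quality fixed-point iteration.
# scores[r][o] = score reviewer r gave object o (absent if not reviewed).
# These update rules are illustrative assumptions, not the paper's model.

def leniency_quality(scores, iters=100, tol=1e-9):
    reviewers = list(scores)
    objects = sorted({o for s in scores.values() for o in s})
    leniency = {r: 0.0 for r in reviewers}
    quality = {o: 0.0 for o in objects}
    for _ in range(iters):
        # Quality: average of scores corrected for each reviewer's leniency.
        new_q = {}
        for o in objects:
            vals = [scores[r][o] - leniency[r]
                    for r in reviewers if o in scores[r]]
            new_q[o] = sum(vals) / len(vals)
        # Leniency: average gap between a reviewer's scores and quality.
        new_l = {}
        for r in reviewers:
            diffs = [scores[r][o] - new_q[o] for o in scores[r]]
            new_l[r] = sum(diffs) / len(diffs)
        shift = max(abs(new_q[o] - quality[o]) for o in objects)
        quality, leniency = new_q, new_l
        if shift < tol:
            break
    return leniency, quality
```

For example, if reviewer A consistently scores one point above reviewer B on the same objects, the iteration assigns A a higher leniency and agrees on the objects' relative quality.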

Discipline

Databases and Information Systems | Numerical Analysis and Scientific Computing

Research Areas

Data Management and Analytics

Publication

Proceedings of the 2007 SIAM International Conference on Data Mining: April 26-28, Minneapolis, MN

First Page

539

Last Page

544

ISBN

9780898716306

Identifier

10.1137/1.9781611972771.58

Publisher

SIAM

City or Country

Philadelphia, PA

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License

Additional URL

http://dx.doi.org/10.1137/1.9781611972771.58
