Publication Type

Journal Article

Version

publishedVersion

Publication Date

4-2009

Abstract

Political text offers extraordinary potential as a source of information about the policy positions of political actors. Despite recent advances in computational text analysis, human interpretative coding of text remains an important source of text-based data, ultimately required to validate more automatic techniques. The profession's main source of cross-national, time-series data on party policy positions comes from the human interpretative coding of party manifestos by the Comparative Manifesto Project (CMP). Despite widespread use of these data, the uncertainty associated with each point estimate has never been available, undermining the value of the dataset as a scientific resource. We propose a remedy. First, we characterize processes by which CMP data are generated. These include inherently stochastic processes of text authorship, as well as of the parsing and coding of observed text by humans. Second, we simulate these error-generating processes by bootstrapping analyses of coded quasi-sentences. This allows us to estimate precise levels of nonsystematic error for every category and scale reported by the CMP for its entire set of 3,000-plus manifestos. Using our estimates of these errors, we show how to correct biased inferences, in recent, prominently published work, derived from statistical analyses of error-contaminated CMP data.
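The sketch below illustrates the general idea of the bootstrap described in the abstract: resampling a manifesto's coded quasi-sentences with replacement and recomputing category percentages and an additive left-right scale on each replicate to obtain a standard error. It is a minimal illustration only; the category codes, the category groupings, and the simple right-minus-left scale used here are hypothetical stand-ins, not the CMP's actual coding scheme or the authors' exact procedure.

import random
from collections import Counter

def category_shares(codes):
    """Percentage of quasi-sentences assigned to each category."""
    counts = Counter(codes)
    n = len(codes)
    return {cat: 100.0 * c / n for cat, c in counts.items()}

def scale(shares, right_cats, left_cats):
    """Simple additive scale: sum of 'right' percentages minus sum of 'left' percentages."""
    return (sum(shares.get(c, 0.0) for c in right_cats)
            - sum(shares.get(c, 0.0) for c in left_cats))

def bootstrap_se(codes, right_cats, left_cats, reps=1000, seed=1):
    """Standard error of the scale from resampling quasi-sentences with replacement."""
    rng = random.Random(seed)
    n = len(codes)
    draws = []
    for _ in range(reps):
        resample = [codes[rng.randrange(n)] for _ in range(n)]
        draws.append(scale(category_shares(resample), right_cats, left_cats))
    mean = sum(draws) / reps
    var = sum((d - mean) ** 2 for d in draws) / (reps - 1)
    return var ** 0.5

# Toy manifesto: each element is the category code of one coded quasi-sentence
# (the codes and their left/right assignment are illustrative, not the CMP's).
codes = ["401"] * 30 + ["504"] * 50 + ["104"] * 20
print(bootstrap_se(codes, right_cats={"401", "104"}, left_cats={"504"}))

Running the script prints a single bootstrap standard error for the toy scale; the same resampling logic applied per manifesto is what yields category- and scale-level error estimates of the kind the abstract describes.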

Discipline

Models and Methods | Political Science

Research Areas

Political Science

Publication

American Journal of Political Science

Volume

53

Issue

2

First Page

495

Last Page

513

ISSN

0092-5853

Identifier

10.1111/j.1540-5907.2009.00383.x

Publisher

Wiley

Copyright Owner and License

Publisher

Additional URL

https://doi.org/10.1111/j.1540-5907.2009.00383.x
