Assessing rater agreement using marginal association models

Susan M. Perkins, Mark P. Becker

Research output: Contribution to journal › Article

18 Scopus citations

Abstract

New models that are useful in the assessment of rater agreement, particularly when the rating scale is ordered or partially ordered, are presented. The models are parameterized to address two important aspects of rater agreement: (i) agreement in terms of the overall frequency with which raters assign categories; and (ii) the extent to which raters agree on the category assigned to individual subjects or items. We present methodology for the simultaneous modelling of univariate marginal responses and bivariate marginal associations in the K-way contingency table representing the joint distribution of K rater responses. The univariate marginal responses provide information for evaluating agreement in terms of the overall frequency of responses, and the bivariate marginal associations provide information on category-wise agreement among pairs of raters. In addition, estimated scores within a generalized log non-linear model for bivariate associations facilitate the assessment of category distinguishability.

Original language: English (US)
Pages (from-to): 1743-1760
Number of pages: 18
Journal: Statistics in Medicine
Volume: 21
Issue number: 12
State: Published - Jun 30, 2002

Keywords

  • Category distinguishability
  • Log-linear model
  • Marginal association model
  • Ordinal data
  • Rater agreement

ASJC Scopus subject areas

  • Epidemiology
