Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools

Findings from the NCAA-DoD CARE Consortium

Care Consortium Investigators

Research output: Contribution to journal › Article

20 Citations (Scopus)

Abstract

Background: Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations, even in the absence of concussion. Objective: To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large, nationally representative sample of student-athletes. Methods: Participants (n = 4874) from the Concussion Assessment, Research, and Education (CARE) Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and between years 1 and 3 was estimated using intraclass correlation coefficients or Kappa, together with effect sizes (Cohen’s d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data. Results: Reliability for the self-reported concussion symptom, motor control, and brief and extended neurocognitive assessments from year 1 to year 2 ranged from 0.30 to 0.72, while effect sizes ranged from 0.01 to 0.28 (i.e., small). Reliability for these same measures over the year 1–3 interval ranged from 0.34 to 0.66, with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). Year 1–2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74, with effect sizes from 0.01 to 0.38 (i.e., small to less than medium). Conclusions: This investigation found less-than-optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use is still necessitated by the absence of a gold-standard diagnostic measure, with the ultimate goal of developing more refined and sound tools for clinical use. Clinical interpretation guidelines are provided so that clinicians can apply these tools with a known degree of certainty.
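The reliability statistics named in the abstract can be made concrete with a small worked example. The Python sketch below estimates a two-way random-effects intraclass correlation coefficient, ICC(2,1), and Cohen’s d for a single measure collected at two annual baselines, then uses percentiles of the year-to-year change as one distribution-free way to handle non-normal scores. This is a minimal illustration under assumed conventions, not the consortium’s analysis code: the simulated year1/year2 arrays, the choice of the ICC(2,1) model, the pooled-SD form of Cohen’s d, and the percentile bounds are all assumptions for demonstration, and the full article specifies the models and interval methods actually used.

import numpy as np

def icc_2_1(scores):
    # ICC(2,1): two-way random-effects, absolute agreement, single measurement
    # (Shrout & Fleiss). scores is an (n subjects) x (k sessions) array.
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)   # per-subject means
    col_means = y.mean(axis=0)   # per-session means
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def cohens_d(x1, x2):
    # Effect size of the year-to-year shift, using the pooled SD of the two sessions.
    s_pooled = np.sqrt((np.var(x1, ddof=1) + np.var(x2, ddof=1)) / 2.0)
    return (np.mean(x2) - np.mean(x1)) / s_pooled

# Hypothetical paired baseline scores for one assessment (one row per athlete).
rng = np.random.default_rng(0)
year1 = rng.poisson(5, size=500).astype(float)               # right-skewed, like symptom totals
year2 = np.clip(year1 + rng.normal(0, 3, size=500), 0, None)

scores = np.column_stack([year1, year2])
print(f"ICC(2,1)  = {icc_2_1(scores):.2f}")
print(f"Cohen's d = {cohens_d(year1, year2):.2f}")

# Distribution-free bounds on 'typical' year-to-year change: percentiles of the
# observed change scores, a simple stand-in for a normal-theory interval when
# the measure is skewed.
low, high = np.percentile(year2 - year1, [2.5, 97.5])
print(f"95% of year-to-year changes fall between {low:.1f} and {high:.1f}")

ICC(2,1) is shown here because it treats the test session as a random factor and penalizes systematic shifts between years; for categorical or ordinal scores, Kappa (as named in the abstract) would replace the ICC.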

Original language: English (US)
Pages (from-to): 1-14
Number of pages: 14
Journal: Sports Medicine
DOI: 10.1007/s40279-017-0813-0
State: Accepted/In press - Nov 14 2017

Fingerprint

  • Reproducibility of Results
  • Reaction Time
  • Quality of Life
  • Guidelines
  • Athletes
  • Confidence Intervals
  • Students
  • Education
  • Research

ASJC Scopus subject areas

  • Orthopedics and Sports Medicine
  • Physical Therapy, Sports Therapy and Rehabilitation

Cite this

Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium. / Care Consortium Investigators.

In: Sports Medicine, 14.11.2017, p. 1-14.

Research output: Contribution to journal › Article

@article{d09b8a4d5ca24e5da8f1e5c9ee1689b9,
title = "Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium",
abstract = "Background: Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations, in the absence of concussion. Objective: To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large nationally representative sample of student-athletes. Methods: Participants (n = 4874) from the Concussion Assessment, Research, and Education Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and 1 and 3 were estimated using intraclass correlation coefficients or Kappa and effect sizes (Cohen’s d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data. Results: Reliability for the self-reported concussion symptoms, motor control, and brief and extended neurocognitive assessments from year 1 to 2 ranged from 0.30 to 0.72 while effect sizes ranged from 0.01 to 0.28 (i.e., small). The reliability for these same measures ranged from 0.34 to 0.66 for the year 1–3 interval with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). The year 1–2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74 with effect sizes from 0.01 to 0.38 (i.e., small to less than medium effects). Conclusions: This investigation noted less than optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use is still necessitated by the absence of a gold standard diagnostic measure, with the ultimate goal of developing more refined and sound tools for clinical use. Clinical interpretation guidelines are provided for the clinician to apply with a degree of certainty in application.",
author = "{Care Consortium Investigators} and Broglio, {Steven P.} and Barry Katz and Shi Zhao and Michael McCrea and Thomas McAllister and Hoy, {April Reed} and Joseph Hazzard and Louise Kelly and Justus Ortega and Nicholas Port and Margot Putukian and Dianne Langford and Darren Campbell and Gerald McGinty and Patrick O’Donnell and Steven Svoboda and John DiFiori and Christopher Giza and Holly Benjamin and Thomas Buckley and Thomas Kaminski and James Clugston and Julianne Schmidt and Luis Feigenbaum and James Eckner and Kevin Guskiewicz and Jason Mihalik and Jessica Miles and Scott Anderson and Christina Master and Anthony Kontos and Sara Chrisman and Alison Brooks and Stefan Duma and Christopher Miles and Brian Dykhuizen and Laura Lintner",
year = "2017",
month = "11",
day = "14",
doi = "10.1007/s40279-017-0813-0",
language = "English (US)",
pages = "1--14",
journal = "Sports Medicine",
issn = "0112-1642",
publisher = "Springer International Publishing AG",

}

TY - JOUR

T1 - Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools

T2 - Findings from the NCAA-DoD CARE Consortium

AU - Care Consortium Investigators

AU - Broglio, Steven P.

AU - Katz, Barry

AU - Zhao, Shi

AU - McCrea, Michael

AU - McAllister, Thomas

AU - Hoy, April Reed

AU - Hazzard, Joseph

AU - Kelly, Louise

AU - Ortega, Justus

AU - Port, Nicholas

AU - Putukian, Margot

AU - Langford, Dianne

AU - Campbell, Darren

AU - McGinty, Gerald

AU - O’Donnell, Patrick

AU - Svoboda, Steven

AU - DiFiori, John

AU - Giza, Christopher

AU - Benjamin, Holly

AU - Buckley, Thomas

AU - Kaminski, Thomas

AU - Clugston, James

AU - Schmidt, Julianne

AU - Feigenbaum, Luis

AU - Eckner, James

AU - Guskiewicz, Kevin

AU - Mihalik, Jason

AU - Miles, Jessica

AU - Anderson, Scott

AU - Master, Christina

AU - Kontos, Anthony

AU - Chrisman, Sara

AU - Brooks, Alison

AU - Duma, Stefan

AU - Miles, Christopher

AU - Dykhuizen, Brian

AU - Lintner, Laura

PY - 2017/11/14

Y1 - 2017/11/14

N2 - Background: Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations, in the absence of concussion. Objective: To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large nationally representative sample of student-athletes. Methods: Participants (n = 4874) from the Concussion Assessment, Research, and Education Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and 1 and 3 were estimated using intraclass correlation coefficients or Kappa and effect sizes (Cohen’s d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data. Results: Reliability for the self-reported concussion symptoms, motor control, and brief and extended neurocognitive assessments from year 1 to 2 ranged from 0.30 to 0.72 while effect sizes ranged from 0.01 to 0.28 (i.e., small). The reliability for these same measures ranged from 0.34 to 0.66 for the year 1–3 interval with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). The year 1–2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74 with effect sizes from 0.01 to 0.38 (i.e., small to less than medium effects). Conclusions: This investigation noted less than optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use is still necessitated by the absence of a gold standard diagnostic measure, with the ultimate goal of developing more refined and sound tools for clinical use. Clinical interpretation guidelines are provided for the clinician to apply with a degree of certainty in application.

AB - Background: Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations, in the absence of concussion. Objective: To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large nationally representative sample of student-athletes. Methods: Participants (n = 4874) from the Concussion Assessment, Research, and Education Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and 1 and 3 were estimated using intraclass correlation coefficients or Kappa and effect sizes (Cohen’s d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data. Results: Reliability for the self-reported concussion symptoms, motor control, and brief and extended neurocognitive assessments from year 1 to 2 ranged from 0.30 to 0.72 while effect sizes ranged from 0.01 to 0.28 (i.e., small). The reliability for these same measures ranged from 0.34 to 0.66 for the year 1–3 interval with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). The year 1–2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74 with effect sizes from 0.01 to 0.38 (i.e., small to less than medium effects). Conclusions: This investigation noted less than optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use is still necessitated by the absence of a gold standard diagnostic measure, with the ultimate goal of developing more refined and sound tools for clinical use. Clinical interpretation guidelines are provided for the clinician to apply with a degree of certainty in application.

UR - http://www.scopus.com/inward/record.url?scp=85033701806&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85033701806&partnerID=8YFLogxK

U2 - 10.1007/s40279-017-0813-0

DO - 10.1007/s40279-017-0813-0

M3 - Article

SP - 1

EP - 14

JO - Sports Medicine

JF - Sports Medicine

SN - 0112-1642

ER -