A Decision Analytic Method for Scoring Performance on Computer-Based Patient Simulations

Stephen Downs, Charles P. Friedman, Farah Marasigan, Gary Gartner

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

As computer-based clinical case simulations become increasingly popular for training and evaluating clinicians, approaches are needed to evaluate a trainee's or examinee's solution of the simulated cases. We developed a decision analytic approach to scoring performance on computerized patient case simulations, building decision models for simulations in four specific domains within the field of infectious disease. The decision models were represented as influence diagrams. A single decision node represents the possible diagnoses the user may make. One chance node represents a probability distribution over the set of competing diagnoses in the simulations. The value node contains the utilities associated with all possible combinations of diagnosis and disease. All relevant data that the user may request from the simulation are represented as chance nodes with arcs to or from the diagnosis node and/or each other. Probabilities in the decision model were derived from the literature, where available, or from expert opinion. Utilities were assessed by standard gamble from clinical experts. Solving a computer-based patient simulation involves repeated cycles of requesting data (history, physical examination, or laboratory results) and receiving these data from the simulation. Each time the user requests clinical data from the simulation, the influence diagram is evaluated with and without an arc from the corresponding chance node to the decision node. The difference in expected utility between the two solutions of the influence diagram represents the expected value of information (VOI) from the requested clinical datum. The ratio of the expected VOI from the requested data to the expected value of perfect information about the diagnosis is a normative measure of the quality of each of the user's data requests. This approach provides a continuous measure of the quality of the user's data requests in a way that is sensitive to the data previously collected. The score distinguishes serious from minor misdiagnoses, and the same influence diagram can be used to evaluate performance on multiple simulations in the same clinical domain.
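
To make the scoring computation concrete, the sketch below works through the VOI and EVPI calculations on a deliberately simplified model: two hypothetical diagnoses, three requestable findings treated as conditionally independent given the diagnosis (the paper's influence diagrams also allow arcs between findings), and illustrative probabilities and utilities that are not taken from the paper. It is a minimal illustration of the scoring idea, not the authors' implementation.

# A minimal sketch of the VOI / EVPI scoring idea, not the authors' implementation.
# Assumptions (all illustrative, not from the paper): two competing diagnoses,
# findings conditionally independent given the diagnosis, and made-up
# probabilities and utilities on a 0-1 standard-gamble scale.

DIAGNOSES = ["meningitis", "viral_syndrome"]      # hypothetical disease states
ACTIONS = ["treat_meningitis", "treat_viral"]     # hypothetical diagnosis choices

PRIOR = {"meningitis": 0.10, "viral_syndrome": 0.90}   # P(disease)

# P(finding is present | disease) for each requestable datum
LIKELIHOOD = {
    "stiff_neck":  {"meningitis": 0.80, "viral_syndrome": 0.10},
    "fever":       {"meningitis": 0.90, "viral_syndrome": 0.60},
    "csf_culture": {"meningitis": 0.95, "viral_syndrome": 0.02},
}

# U(action, disease): 1.0 = best outcome, 0.0 = worst
UTILITY = {
    ("treat_meningitis", "meningitis"):     0.95,
    ("treat_meningitis", "viral_syndrome"): 0.80,  # unnecessary treatment: minor harm
    ("treat_viral",      "meningitis"):     0.05,  # missed meningitis: serious harm
    ("treat_viral",      "viral_syndrome"): 1.00,
}

def posterior(evidence):
    """P(disease | findings observed so far), naive-Bayes style."""
    unnorm = {}
    for d in DIAGNOSES:
        p = PRIOR[d]
        for finding, present in evidence.items():
            p_present = LIKELIHOOD[finding][d]
            p *= p_present if present else (1.0 - p_present)
        unnorm[d] = p
    z = sum(unnorm.values())
    return {d: p / z for d, p in unnorm.items()}

def best_expected_utility(belief):
    """Expected utility of the best diagnosis choice under the current belief."""
    return max(sum(belief[d] * UTILITY[(a, d)] for d in DIAGNOSES) for a in ACTIONS)

def value_of_information(finding, evidence):
    """Expected utility gain from observing `finding` before deciding, given the
    data already collected (the diagram solved with vs. without the arc from the
    finding's chance node to the decision node)."""
    belief = posterior(evidence)
    eu_without = best_expected_utility(belief)
    p_present = sum(belief[d] * LIKELIHOOD[finding][d] for d in DIAGNOSES)
    eu_with = 0.0
    for present, p in ((True, p_present), (False, 1.0 - p_present)):
        if p > 0.0:
            eu_with += p * best_expected_utility(posterior({**evidence, finding: present}))
    return eu_with - eu_without

def evpi(evidence):
    """Expected value of perfect information about the diagnosis."""
    belief = posterior(evidence)
    eu_perfect = sum(belief[d] * max(UTILITY[(a, d)] for a in ACTIONS) for d in DIAGNOSES)
    return eu_perfect - best_expected_utility(belief)

def request_score(finding, evidence):
    """Normalized quality of a data request: VOI / EVPI, clipped at zero."""
    denom = evpi(evidence)
    return max(0.0, value_of_information(finding, evidence)) / denom if denom > 0 else 0.0

if __name__ == "__main__":
    collected = {}                     # nothing has been requested yet
    for f in ("fever", "stiff_neck", "csf_culture"):
        print(f, round(request_score(f, collected), 2))

With these made-up numbers, requesting the CSF culture scores about 0.91 because its result could change the preferred action, the stiff-neck exam scores about 0.60, and the fever question scores essentially 0, since neither fever result would alter the decision. Once a finding is added to `collected`, the scores of the remaining requests change, which is the sense in which the measure is conditioned on the data already gathered.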

Original language: English (US)
Pages (from-to): 667-671
Number of pages: 5
Journal: Journal of the American Medical Informatics Association
Volume: 4
Issue number: SUPPL.
State: Published - 1997
Externally published: Yes

Fingerprint

  • Patient Simulation
  • Research Design
  • Expert Testimony
  • Diagnostic Errors
  • Computer Simulation
  • Physical Examination
  • Communicable Diseases
  • History

ASJC Scopus subject areas

  • Medicine(all)
