Pilot evaluation of a method to assess prescribers’ information processing of medication alerts

Alissa L. Russ, Brittany L. Melton, Joanne Daggy, Jason J. Saleem

Research output: Contribution to journal › Article


Abstract

Background: Prescribers commonly receive alerts during medication ordering. Prescribers work in a complex, time-pressured environment; to enhance the effectiveness of safety alerts, the effort needed to cognitively process these alerts should be minimized. Methods to evaluate the extent to which computerized alerts support prescribers’ information processing are lacking.
Objective: To develop a methodological protocol to assess the extent to which alerts support prescribers’ information processing at a glance; specifically, the incorporation of alert information into their working memory. We hypothesized that the method would be feasible and that we would be able to detect a significant difference in prescribers’ information processing with a revised alert display that incorporates warning design guidelines, compared to the original alert display.
Methods: A counterbalanced, within-subject study was conducted with 20 prescribers in a human-computer interaction laboratory. We tested a single alert that was displayed in two different ways. Prescribers were informed that an alert would appear for 10 s. After the alert was shown, a white screen was displayed, and prescribers were asked to verbally describe what they saw, to state how many warnings were shown in total, and to describe anything else they remembered about the alert. We measured information processing via the accuracy of prescribers’ free recall and their ability to identify that three warning messages were present. Two analysts independently evaluated participants’ responses against a comprehensive catalog of alert elements and then discussed discrepancies until reaching consensus.
Results: In this feasibility study, the method appeared effective for evaluating prescribers’ information processing of medication alert displays and was able to detect significant differences in prescribers’ recall of alert information. The proportion of total data elements that prescribers accurately recalled was significantly greater for the revised display than for the original display (p = 0.006). With the revised display, more prescribers accurately reported that three warnings were shown (p = 0.002).
Conclusions: The methodological protocol was feasible for evaluating the alert display and yielded important findings on prescribers’ information processing. The study methods supplement traditional usability evaluation methods and may be useful for evaluating information processing of other healthcare technologies.
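
The abstract reports paired comparisons between the two alert displays (p = 0.006 for recall of data elements; p = 0.002 for identifying the three warnings) but does not state which statistical tests were used. The sketch below is a hypothetical Python illustration of how such a within-subject analysis could be run; it is not the authors’ analysis. The Wilcoxon signed-rank test, McNemar’s exact test, the scipy/statsmodels dependencies, and all data values are assumptions made purely for demonstration.

import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(42)

# Fabricated, illustrative data: proportion of alert data elements that each
# of 20 prescribers recalled correctly under the original and revised displays.
recall_original = rng.uniform(0.20, 0.50, size=20)
recall_revised = recall_original + rng.uniform(0.05, 0.30, size=20)

# Paired (within-subject) comparison of recall proportions across the two displays.
stat, p_recall = wilcoxon(recall_revised, recall_original)
print(f"Wilcoxon signed-rank test, recall proportions: p = {p_recall:.3f}")

# Fabricated 2x2 table for the binary outcome "correctly reported that three
# warnings were present": rows = original display (incorrect, correct),
# columns = revised display (incorrect, correct); cells count the 20 prescribers.
table = [[2, 12],
         [1, 5]]
result = mcnemar(table, exact=True)
print(f"McNemar exact test, three-warning identification: p = {result.pvalue:.3f}")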

Original language: English (US)
Pages (from-to): 11-18
Number of pages: 8
Journal: Journal of Biomedical Informatics
Volume: 66
DOI: 10.1016/j.jbi.2016.11.011
State: Published - Feb 1 2017

Keywords

  • Alert systems, medication
  • Cognition
  • Decision support systems, clinical
  • Human factors and ergonomics
  • Patient safety
  • User-computer interface

ASJC Scopus subject areas

  • Computer Science Applications
  • Health Informatics

Cite this

Pilot evaluation of a method to assess prescribers’ information processing of medication alerts. / Russ, Alissa L.; Melton, Brittany L.; Daggy, Joanne; Saleem, Jason J.

In: Journal of Biomedical Informatics, Vol. 66, 01.02.2017, p. 11-18.

Research output: Contribution to journal › Article

@article{d1ad6950d21345258bfb77ca43e9f3df,
title = "Pilot evaluation of a method to assess prescribers’ information processing of medication alerts",
abstract = "Background Prescribers commonly receive alerts during medication ordering. Prescribers work in a complex, time-pressured environment; to enhance the effectiveness of safety alerts, the effort needed to cognitively process these alerts should be minimized. Methods to evaluate the extent to which computerized alerts support prescribers’ information processing are lacking. Objective To develop a methodological protocol to assess the extent to which alerts support prescribers’ information processing at-a-glance; specifically, the incorporation of information into their working memory. We hypothesized that the method would be feasible and that we would be able to detect a significant difference in prescribers’ information processing with a revised alert display that incorporates warning design guidelines compared to the original alert display. Methods A counterbalanced, within-subject study was conducted with 20 prescribers in a human-computer interaction laboratory. We tested a single alert that was displayed in two different ways. Prescribers were informed that an alert would appear for 10 s. After the alert was shown, a white screen was displayed, and prescribers were asked to verbally describe what they saw; indicate how many total warnings; and describe anything else they remembered about the alert. We measured information processing via the accuracy of prescribers’ free recall and their ability to identify that three warning messages were present. Two analysts independently evaluated participants’ responses against a comprehensive catalog of alert elements and then discussed discrepancies until reaching consensus. Results This feasibility study demonstrated that the method seemed to be effective for evaluating prescribers’ information processing of medication alert displays. With this method, we were able to detect significant differences in prescribers’ recall of alert information. The proportion of total data elements that prescribers were able to accurately recall was significantly greater for the revised versus original alert display (p = 0.006). With the revised display, more prescribers accurately reported that three warnings were shown (p = 0.002). Conclusions The methodological protocol was feasible for evaluating the alert display and yielded important findings on prescribers’ information processing. Study methods supplement traditional usability evaluation methods and may be useful for evaluating information processing of other healthcare technologies.",
keywords = "Alert systems, medication, Cognition, Decision support systems, clinical, Human factors and ergonomics, Patient safety, User-computer interface",
author = "Russ, {Alissa L.} and Melton, {Brittany L.} and Joanne Daggy and Saleem, {Jason J.}",
year = "2017",
month = "2",
day = "1",
doi = "10.1016/j.jbi.2016.11.011",
language = "English (US)",
volume = "66",
pages = "11--18",
journal = "Journal of Biomedical Informatics",
issn = "1532-0464",
publisher = "Academic Press Inc.",

}

TY - JOUR

T1 - Pilot evaluation of a method to assess prescribers’ information processing of medication alerts

AU - Russ, Alissa L.

AU - Melton, Brittany L.

AU - Daggy, Joanne

AU - Saleem, Jason J.

PY - 2017/2/1

Y1 - 2017/2/1

N2 - Background Prescribers commonly receive alerts during medication ordering. Prescribers work in a complex, time-pressured environment; to enhance the effectiveness of safety alerts, the effort needed to cognitively process these alerts should be minimized. Methods to evaluate the extent to which computerized alerts support prescribers’ information processing are lacking. Objective To develop a methodological protocol to assess the extent to which alerts support prescribers’ information processing at-a-glance; specifically, the incorporation of information into their working memory. We hypothesized that the method would be feasible and that we would be able to detect a significant difference in prescribers’ information processing with a revised alert display that incorporates warning design guidelines compared to the original alert display. Methods A counterbalanced, within-subject study was conducted with 20 prescribers in a human-computer interaction laboratory. We tested a single alert that was displayed in two different ways. Prescribers were informed that an alert would appear for 10 s. After the alert was shown, a white screen was displayed, and prescribers were asked to verbally describe what they saw; indicate how many total warnings; and describe anything else they remembered about the alert. We measured information processing via the accuracy of prescribers’ free recall and their ability to identify that three warning messages were present. Two analysts independently evaluated participants’ responses against a comprehensive catalog of alert elements and then discussed discrepancies until reaching consensus. Results This feasibility study demonstrated that the method seemed to be effective for evaluating prescribers’ information processing of medication alert displays. With this method, we were able to detect significant differences in prescribers’ recall of alert information. The proportion of total data elements that prescribers were able to accurately recall was significantly greater for the revised versus original alert display (p = 0.006). With the revised display, more prescribers accurately reported that three warnings were shown (p = 0.002). Conclusions The methodological protocol was feasible for evaluating the alert display and yielded important findings on prescribers’ information processing. Study methods supplement traditional usability evaluation methods and may be useful for evaluating information processing of other healthcare technologies.

AB - Background Prescribers commonly receive alerts during medication ordering. Prescribers work in a complex, time-pressured environment; to enhance the effectiveness of safety alerts, the effort needed to cognitively process these alerts should be minimized. Methods to evaluate the extent to which computerized alerts support prescribers’ information processing are lacking. Objective To develop a methodological protocol to assess the extent to which alerts support prescribers’ information processing at-a-glance; specifically, the incorporation of information into their working memory. We hypothesized that the method would be feasible and that we would be able to detect a significant difference in prescribers’ information processing with a revised alert display that incorporates warning design guidelines compared to the original alert display. Methods A counterbalanced, within-subject study was conducted with 20 prescribers in a human-computer interaction laboratory. We tested a single alert that was displayed in two different ways. Prescribers were informed that an alert would appear for 10 s. After the alert was shown, a white screen was displayed, and prescribers were asked to verbally describe what they saw; indicate how many total warnings; and describe anything else they remembered about the alert. We measured information processing via the accuracy of prescribers’ free recall and their ability to identify that three warning messages were present. Two analysts independently evaluated participants’ responses against a comprehensive catalog of alert elements and then discussed discrepancies until reaching consensus. Results This feasibility study demonstrated that the method seemed to be effective for evaluating prescribers’ information processing of medication alert displays. With this method, we were able to detect significant differences in prescribers’ recall of alert information. The proportion of total data elements that prescribers were able to accurately recall was significantly greater for the revised versus original alert display (p = 0.006). With the revised display, more prescribers accurately reported that three warnings were shown (p = 0.002). Conclusions The methodological protocol was feasible for evaluating the alert display and yielded important findings on prescribers’ information processing. Study methods supplement traditional usability evaluation methods and may be useful for evaluating information processing of other healthcare technologies.

KW - Alert systems, medication

KW - Cognition

KW - Decision support systems, clinical

KW - Human factors and ergonomics

KW - Patient safety

KW - User-computer interface

UR - http://www.scopus.com/inward/record.url?scp=85007164362&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85007164362&partnerID=8YFLogxK

U2 - 10.1016/j.jbi.2016.11.011

DO - 10.1016/j.jbi.2016.11.011

M3 - Article

C2 - 27908833

AN - SCOPUS:85007164362

VL - 66

SP - 11

EP - 18

JO - Journal of Biomedical Informatics

JF - Journal of Biomedical Informatics

SN - 1532-0464

ER -