“We previously described an automated early warning system that identifies patients at high risk for clinical deterioration; detection is achieved with a predictive model, the Advance Alert Monitor (AAM) program. Beginning in November 2013, we conducted a pilot test of this program in 2 hospitals in Kaiser Permanente Northern California (KPNC), an integrated health care delivery system that owns 21 hospitals. The system generates AAM scores that predict the risk of unplanned transfer to the ICU or death in a hospital ward among patients who have “full code” orders (i.e., patients who wish to have cardiopulmonary resuscitation performed in the event of cardiac arrest). The alerts provide a 12-hour warning and do not require an immediate response from clinicians. Given the encouraging results of this pilot, the KPNC leadership deployed the AAM program in its 19 remaining hospitals on a staggered schedule.
[Methods]
[..] The model was based on 649,418 hospitalizations (including 19,153 hospitalizations in which the patient’s condition deteriorated) involving 374,838 patients 18 years of age or older who had been admitted to KPNC hospitals between January 1, 2010, and December 31, 2013. Predictors included laboratory tests, individual vital signs, neurologic status, severity-of-illness and longitudinal indexes of coexisting conditions, care directives, and health services indicators (e.g., length of stay). As instantiated in the Epic EHR system, an AAM score of 5 (the alert threshold) indicates a 12-hour risk of clinical deterioration of 8% or more. At this threshold, the model generates one new alert per day per 35 patients, with a C statistic of 0.82 and a sensitivity of 49%.
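In concrete terms, the alert is a simple cutoff on the model output. The sketch below is illustrative only; the function name and the Python rendering are assumptions, not the deployed AAM code:

```python
# Illustrative threshold rule; a sketch, not the deployed AAM code.
ALERT_THRESHOLD = 5   # an AAM score of 5 corresponds to a 12-hour risk of >= 8%

def should_alert(aam_score: float) -> bool:
    """Return True when a patient's current AAM score reaches the threshold."""
    return aam_score >= ALERT_THRESHOLD

# Reported operating characteristics at this cutoff: about one new alert per
# day per 35 monitored patients, a C statistic of 0.82, and 49% sensitivity.
print(should_alert(6.2), should_alert(3.1))   # True False
```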
The eligible population consisted of adults 18 years of age or older who had initially been admitted to a general medical–surgical ward or step-down unit, including patients who had first been admitted to a surgical area and were subsequently transferred to one of these units. The target population comprised eligible patients whose condition reached the alert threshold, either at sites where the program was operational (the intervention cohort, in which alerts led to a clinical response) or at sites where it was not (the comparison cohort, which received usual care with no alerts). The comparison cohort also included all patients admitted to any of the study hospitals in the year before the intervention was introduced at the first hospital (historical controls). The nontarget population included all patients whose condition did not reach the alert threshold.
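The cohort structure can be summarized as a small decision rule. In the sketch below, the enum, flag names, and function are illustrative placeholders rather than code from the study:

```python
# Hedged sketch of the study's cohort definitions; names and flags are
# illustrative placeholders, not from the study.
from enum import Enum

class Cohort(Enum):
    INTERVENTION = "intervention"  # reached threshold where the program was live
    COMPARISON = "comparison"      # reached threshold elsewhere, or historical control
    NONTARGET = "nontarget"        # never reached the alert threshold

def assign_cohort(reached_threshold: bool,
                  program_active_at_site: bool,
                  historical_control: bool) -> Cohort:
    """Map a hospitalization to the cohort structure described in the text."""
    if not reached_threshold:
        return Cohort.NONTARGET
    if program_active_at_site and not historical_control:
        return Cohort.INTERVENTION
    return Cohort.COMPARISON
```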
The automated system scanned each patient’s data and assigned three separate scores. Vital signs and laboratory test results were assessed at admission and hourly with the Laboratory-based Acute Physiology Score, version 2 (LAPS2), on a scale from 0 to 414, with higher scores indicating greater physiologic instability. Chronic coexisting conditions were assessed at admission with the Comorbidity Point Score, version 2 (COPS2), on which 12-month scores range from 0 to 1014, with higher scores indicating a greater burden of coexisting conditions. Deterioration risk was assessed with the AAM score, which ranges from 0 to 100%, with higher scores indicating a greater risk of clinical deterioration. The LAPS2 and COPS2 scores, which are assigned to all hospitalized adults, are scalar values that separately characterize patients’ vital signs plus laboratory test results and their coexisting conditions. Patients with the care directives “full code,” “partial code” (i.e., some, but not all, resuscitation procedures are allowed), and “do not resuscitate” are assigned AAM scores; AAM scores are not assigned to patients in the ICU or to patients who have a care directive of “comfort care only,” and these patients were excluded from our main analyses.
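Viewed as data, every hospitalized adult carries LAPS2 and COPS2 values, whereas the AAM score is withheld in the ICU and under comfort-care-only orders. The record type and eligibility check below are one possible rendering of those rules, not the KPNC implementation:

```python
# One possible rendering of the scoring record and AAM-eligibility rule
# described above; this is not the KPNC implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientScores:
    laps2: int                   # LAPS2: physiologic instability, 0-414
    cops2: int                   # COPS2: comorbidity burden, 0-1014 (12-month score)
    care_directive: str          # "full code", "partial code",
                                 # "do not resuscitate", or "comfort care only"
    in_icu: bool
    aam: Optional[float] = None  # deterioration risk, 0-100%; None when not assigned

def aam_eligible(p: PatientScores) -> bool:
    """AAM scores are not assigned in the ICU or under comfort-care-only orders."""
    return not p.in_icu and p.care_directive != "comfort care only"
```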
To minimize alert fatigue, automated-system results were not shown directly to hospital staff. Instead, specially trained registered nurses monitored alerts remotely. If the AAM score reached the threshold, the remote nurses performed an initial chart review and contacted the rapid-response nurse on the ward or step-down unit, who then initiated a structured assessment and contacted the patient’s physician. The physician could then initiate a clinical rescue protocol (which could include proactive transfer to the ICU), an urgent palliative care consultation, or both. The remote nurses subsequently monitored patients’ status, ensuring adherence to the performance standards of the AAM program. At active sites, the rapid-response team was staffed with registered nurses 24 hours a day, 7 days a week; these nurses did not have regular patient assignments. Implementation teams ensured that the clinical staff at the study sites received training on all components of the program.
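The escalation pathway amounts to an ordered checklist. The sketch below compresses it into code for clarity; the step wording is paraphrased, and a real deployment would page and document rather than print:

```python
# Condensed sketch of the alert-handling pathway as an ordered checklist;
# step wording is paraphrased from the text, and printing stands in for
# the paging and documentation a real deployment would use.
ESCALATION_STEPS = [
    "Remote RN: initial chart review of the alerting patient",
    "Remote RN: contact the rapid-response RN on the ward or step-down unit",
    "Rapid-response RN: structured assessment at the bedside",
    "Rapid-response RN: contact the patient's physician",
    "Physician: clinical rescue protocol (possibly proactive ICU transfer), "
    "urgent palliative care consultation, or both",
    "Remote RN: follow-up monitoring for adherence to AAM program standards",
]

def handle_alert(patient_id: str) -> None:
    for step in ESCALATION_STEPS:
        print(f"[{patient_id}] {step}")

handle_alert("example-patient-001")
```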
[Results]
We identified 633,430 hospitalizations during the study period. After the exclusion of 17,042 hospitalizations for which the hospital location could not be determined and 769 obstetric hospitalizations, 615,619 hospitalizations (548,838 in the eligible population and 66,781 in the ICU cohort) involving 354,489 patients were included in the analysis. Events in which a patient’s condition triggered an alert at a hospital other than the one to which the patient had been admitted were rare (<1%).
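The exclusion accounting can be verified directly (the variable names below are illustrative):

```python
# Direct check of the cohort accounting reported above (names are illustrative).
total_hospitalizations = 633_430
excluded = 17_042 + 769          # unknown hospital location + obstetric admissions
analyzed = total_hospitalizations - excluded
assert analyzed == 615_619
assert 548_838 + 66_781 == analyzed   # eligible population + ICU cohort
print(analyzed)                        # 615619
```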
[..] We estimated an absolute between-cohort difference of 3.8 percentage points in mortality within 30 days after an event that reached the alert threshold. This difference translated into 3.0 deaths avoided (95% CI, 1.2 to 4.8) per 1000 eligible patients, or 520 deaths avoided (95% CI, 209 to 831) per year, among approximately 153,000 annual hospitalizations over the 3.5-year study period. The intervention was also associated with a lower incidence of ICU admission, a higher percentage of patients with a favorable status 30 days after the alert, a shorter length of stay, and longer survival.
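As a back-of-the-envelope pattern, deaths avoided equal the absolute risk difference multiplied by the number of exposed patients. Because this excerpt does not restate the annual number of patients whose condition reached the alert threshold, the count below is an assumed, illustrative input:

```python
# Generic pattern: deaths avoided = absolute risk difference x exposed count.
def deaths_avoided(risk_difference: float, n_patients: int) -> float:
    return risk_difference * n_patients

rd = 0.038                    # reported 3.8-percentage-point difference
n_alerted_per_year = 13_700   # ASSUMED illustrative count, not a study figure
print(round(deaths_avoided(rd, n_alerted_per_year)))  # 521, near the reported ~520/yr
```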
[..] Patients in the intervention cohort were less likely than those in the comparison cohort to die without a referral for palliative care. We did not observe clinically significant differences between the cohorts in vital-sign measurements, therapies, or changes in care directives. Thus, we were unable to identify the changes in process measures that may have produced the observed improvements in outcomes.
[Discussion]
In this study, we quantified beneficial hospital outcomes — lower mortality, a lower incidence of ICU admission, and a shorter length of hospital stay — that were associated with staggered deployment of an automated predictive model that identifies patients at high risk for clinical deterioration. Unlike many scores currently in use, the AAM score is fully automated, takes advantage of detailed EHR data, and does not require an immediate response from hospital staff. These features facilitated its incorporation into a rapid-response system that uses remote monitoring, thus shielding providers from alert fatigue. The AAM program is based on standardized workflows for all aspects of the care of patients whose condition is deteriorating in wards, including clinical rescue, palliative care, provider education, and system administration.
The magnitude of the effects we found is consistent with that in other studies of rapid-response systems. With respect to complex automated scores, it is important to consider the rigorously executed randomized trial conducted by Kollef et al. Those authors found a shorter length of hospital stay among patients who had an early-warning-system alert displayed to clinicians than among patients for whom alerts were not displayed, but they found no change in the incidence of ICU admission or in in-hospital mortality. In a very large study of a rapid-response system that used manual scores, Chen et al. found that preexisting favorable trends in mortality in 292 Australian hospitals continued, with additional improvement among low-risk patients. Priestley et al. compared outcomes in a single hospital before and after the implementation of a rapid-response system that used manual scoring; they found lower inpatient mortality than we did but did not report 30-day mortality, and their findings regarding length of stay were equivocal. Bedoya et al. used an automated version of the National Early Warning Score and found no change in the incidence of ICU admission or in in-hospital mortality; they also reported clinician frustration with excessive alerts and noted that the score was largely ignored by frontline nursing staff.
[..] Reflection on our findings suggests future directions for research. One direction is to quantify the relative contributions of the predictive model and of the clinical rescue and palliative care processes. Automated scores have statistical performance superior to that of scores such as the National Early Warning Score, but it is unclear whether scores that use newer approaches (e.g., so-called bespoke models) will necessarily result in better outcomes because, as Bedoya et al. point out, clinicians might not use the predictions. A second area relates to how notifications are handled. Although the use of remote monitoring in KPNC appears to have been successful, it is not the only way to ensure compliance without alert fatigue. The program’s costs are amortized across 21 hospitals, which permits economies of scale, such as the use of remotely working nurses who both mitigate alert fatigue and monitor compliance with the clinical rescue and palliative care workflows. This approach may not be feasible for many hospitals.”
Full article: Escobar GJ, Liu VX, Schuler A, et al. New England Journal of Medicine, December 20, 2020.