Study shows onset of COVID-19 led to spike in sepsis alerts at hospitals
A descriptive study in JAMA Network Open, involving 24 hospitals across four geographically diverse health systems, looked at the number of sepsis alerts in the months before and during COVID-19 and found a significant jump in alerts after the start of the pandemic, despite lower patient numbers from canceled elective surgeries, according to a news release from Michigan Medicine.
The hospitals involved were associated with the University of Michigan, New York University Langone Health, Mass General Brigham, and BJC Healthcare.
The Epic Sepsis Model, also known as ESM, is a widely implemented artificial intelligence-based system that generates an alert when a patient's record shows enough variables indicating a risk of sepsis.
“The ESM calculates a score from 0-100, which reflects the probability (or percent chance) of a patient developing sepsis in the next 6 hours,” explained author Karandeep Singh, MD, MMSc, Assistant Professor in the Departments of Learning Health Sciences, Internal Medicine, Urology, and Information at U-M. Singh also heads U-M Health’s Clinical Intelligence Committee.
With an alerting threshold of 6, “our study found that the total volume of daily sepsis alerts increased by 43%, even though hospitals had cancelled elective surgeries and reduced their census by 35% to prepare for the [COVID] surge,” Singh said. One reason for the higher number of alerts despite a lower patient census is that the patients who filled hospitals during COVID’s first wave were, in general, in more critical condition than elective surgery patients and therefore more likely to generate alerts.
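To make those numbers concrete, here is a minimal, purely illustrative Python sketch. It is not Epic's implementation; the only values taken from the study are the alerting threshold of 6 and the 43%/35% figures, and the baseline counts and the exact comparison used at the threshold are assumptions.

```python
# Illustrative sketch only -- not the Epic Sepsis Model's actual logic.
ALERT_THRESHOLD = 6  # threshold reported in the study; >= vs > is assumed here


def fires_alert(esm_score: float) -> bool:
    """Return True when a 0-100 sepsis risk score meets the alerting threshold."""
    return esm_score >= ALERT_THRESHOLD


# Back-of-the-envelope arithmetic using the figures quoted above,
# with a hypothetical pre-pandemic baseline of 100 daily alerts and 100 occupied beds.
baseline_alerts, baseline_census = 100, 100
pandemic_alerts = baseline_alerts * 1.43   # +43% total daily alerts
pandemic_census = baseline_census * 0.65   # -35% census

per_patient_change = (pandemic_alerts / pandemic_census) / (baseline_alerts / baseline_census)
print(f"Alerts per occupied bed rose roughly {per_patient_change:.1f}x")  # ~2.2x, a ~120% increase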
Another reason for the jump in alerts could be "dataset shift," in which a model's performance deteriorates when there are sudden, unexpected changes in a hospital's case mix (e.g., a COVID surge). Singh previously described the phenomenon of dataset shift in a July 2021 New England Journal of Medicine paper.
“COVID-19 was a ‘black swan’ event that likely affected many existing models in ways we don’t yet fully understand,” said Singh. “Future studies that use post-pandemic data to evaluate AI models should be careful to interpret their findings within the context of the pandemic.”
And aside from what caused it, “the increase in total alerts is illuminating because it provides a sense of just how busy front-line clinicians were during the first wave [of COVID], despite the lower hospital census,” he said.
The study points to the importance of monitoring sudden increases in alert volume, both to prevent alert fatigue among overworked healthcare workers and to determine whether some form of dataset shift is affecting a model's accuracy.
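A minimal sketch of what such monitoring could look like, assuming a simple trailing-average baseline; the 30-day window and 1.4x jump factor are arbitrary placeholders rather than values from the study, and a health system would tune them locally.

```python
from collections import deque


def alert_volume_monitor(daily_alert_counts, window=30, jump_factor=1.4):
    """Flag days whose alert volume exceeds the trailing-window mean by jump_factor.

    A crude proxy for a possible surge or dataset shift; purely illustrative.
    """
    history = deque(maxlen=window)
    flagged_days = []
    for day, count in enumerate(daily_alert_counts):
        if len(history) == window:
            baseline = sum(history) / window
            if baseline > 0 and count / baseline >= jump_factor:
                flagged_days.append(day)
        history.append(count)
    return flagged_days


# Example: 60 quiet days followed by a sustained ~50% jump in daily alerts.
counts = [100] * 60 + [150] * 10
print(alert_volume_monitor(counts))  # flags the first days after the jump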
“We need to have a way to anticipate and deal with situations where the alert volume is high,” said Singh. “Even among well-resourced hospitals, there is a limit to the capacity to respond to model-driven alerts. Clinical AI governance is a first step toward providing real-time guidance to health systems when situations like this arise, and for setting standards based on local constraints.”