
A growing number of AI tools are being used to predict everything from sepsis to strokes, with the hope of accelerating the delivery of life-saving care. But over time, new research suggests, these predictive models can become victims of their own success — sending their performance into a nosedive and generating inaccurate, potentially harmful results.

“There is no accounting for this when your models are being tested,” said Akhil Vaid, an instructor of data-driven and digital medicine at the Icahn School of Medicine at Mount Sinai and author of the new research, published Monday in the Annals of Internal Medicine. “You can’t run validation studies, do external validation, run clinical trials — because all they’ll tell you is that the model works. And when it starts to work, that is when the problems will arise.”


Vaid and his Mount Sinai colleagues simulated the deployment of two models that predicted a patient's risk of death or acute kidney injury within five days of entering the ICU. Their simulations assumed the models did what they were supposed to do: reduce deaths and kidney injuries by identifying patients for earlier intervention.
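The dynamic at issue can be seen in a toy simulation. The sketch below is not the Mount Sinai team's actual method; it is a minimal illustration, with made-up cohort sizes, risk functions, and an assumed 70% risk reduction for flagged patients, of how a model that successfully triggers earlier intervention erodes its own measured accuracy.

```python
# Toy illustration (not the study's actual simulation): a model that prompts
# effective early intervention changes the outcomes it was trained to predict,
# so its apparent discrimination drops after deployment.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_cohort(n=20_000):
    """Hypothetical patients with a latent severity driving their untreated risk."""
    severity = rng.normal(size=n)
    base_risk = 1 / (1 + np.exp(-(severity - 1.0)))  # assumed logistic link
    return severity, base_risk

# Before deployment: outcomes follow the untreated risk.
severity, base_risk = simulate_cohort()
outcome_pre = rng.binomial(1, base_risk)
model_score = severity + rng.normal(scale=0.5, size=severity.size)  # imperfect model
print("AUC before deployment:", roc_auc_score(outcome_pre, model_score))

# After deployment: the highest-scoring patients receive an earlier intervention
# that cuts their risk, so observed outcomes no longer match the original labels.
severity, base_risk = simulate_cohort()
model_score = severity + rng.normal(scale=0.5, size=severity.size)
flagged = model_score > np.quantile(model_score, 0.8)          # top 20% trigger alerts
treated_risk = np.where(flagged, base_risk * 0.3, base_risk)   # assumed 70% risk reduction
outcome_post = rng.binomial(1, treated_risk)
print("AUC after deployment:", roc_auc_score(outcome_post, model_score))
```

In this sketch the post-deployment AUC falls because the patients the model ranks highest are exactly the ones whose outcomes are averted, which is the feedback loop the researchers warn about.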

