How a startup beat health care heavyweights to win Medicare’s AI contest
by Casey Ross
A $1 million government contest to predict health problems with artificial intelligence attracted the heavyweights of industry and beyond — from Mayo Clinic to IBM to the data and consulting powerhouse Deloitte.
But the winner of Medicare’s AI health outcomes challenge is a lesser-known startup from Austin, Texas, called ClosedLoop.ai. The company, whose victory was announced late Friday, bested 300 rivals with a system capable of forecasting adverse health events by crunching an array of data on patients.
The system not only predicts outcomes such as infections and unplanned hospital stays, but embeds its warnings in electronic health records and suggests interventions that doctors can take to prevent them from occurring. ClosedLoop’s product envisions a future in which AI saves gobs of money — and untold anguish — by helping doctors keep patients healthy, instead of reacting only when they get sick. The Centers for Medicare and Medicaid Services said it hopes to incorporate solutions from the contest into new health care delivery models to be tested by its innovation center.
But ClosedLoop is already working with many health care clients, including Medical Home Network, the nation’s largest Medicaid accountable care organization, to improve patient care. STAT spoke with the company’s chief executive, Andrew Eye, about the startup’s growth, how its technology works, and what steps it’s taking to root out bias and other problems that come with incorporating AI into health care decision-making. This interview has been lightly edited for brevity and clarity.
How does your contest-winning AI make its predictions?
CMS said, “Tell us who’s going to have something bad happen in the next 30 days.” We predicted 13 different things (such as falls, infections, and unplanned hospital readmissions), and we predicted those 13 things individually for patients. So a patient might be in the 80th percentile overall for having anything bad happen in the next 30 days, and that same patient might be in the 99th percentile for a fall-related injury. Why is that important? Because I may not have the capacity to enroll 20% of people in care management. But, if I want to know who I need to do home modification for — who should get that grab rail in the shower — then I can look at the fall-related injuries model. This portfolio approach allows us to go customer by customer, and say, “Let’s target your interventions to the people who need them.”
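To make the portfolio approach concrete, here is a minimal sketch in Python of per-outcome risk percentiles and outcome-specific targeting. The outcome list, the random scores, and the 99th-percentile cutoff are hypothetical stand-ins, not ClosedLoop’s actual models or thresholds.

```python
# Sketch of the "portfolio" idea: score each patient on several
# outcome-specific models, convert scores to population percentiles,
# and target a narrow intervention using the relevant outcome's ranking.
# All patients, outcomes, and scores below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
patients = [f"patient_{i}" for i in range(1000)]
outcomes = ["any_adverse_event", "fall_related_injury", "unplanned_readmission"]

# Stand-in for model output: one risk score per patient per outcome.
scores = {o: rng.random(len(patients)) for o in outcomes}

def percentile_ranks(values: np.ndarray) -> np.ndarray:
    """Rank each score against the population, as a 0-100 percentile."""
    order = values.argsort().argsort()
    return 100.0 * order / (len(values) - 1)

percentiles = {o: percentile_ranks(s) for o, s in scores.items()}

# Target a narrow intervention (e.g., grab rails) with the narrow model:
# the top 1% on fall risk, regardless of overall risk.
fall_pct = percentiles["fall_related_injury"]
home_modification = [p for p, pct in zip(patients, fall_pct) if pct >= 99.0]
print(f"{len(home_modification)} patients flagged for home modification")
```

The point of ranking each outcome separately is that a patient who looks unremarkable on overall risk can still top the list for one specific, preventable event.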
Can these predictions be implemented in existing electronic health records or other software, or does it work as a separate system?
We produce what we call a patient health forecast. What that looks like is, here are the specific things this person is at risk for and the reasons why, and that gets surfaced into whatever their software system is — that can be a care management platform or an electronic health record. The rule of thumb for us is, “No new screens.” I don’t want to have yet another system I have to log into as a physician.
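To give a rough sense of what such a forecast might look like as data, here is a hypothetical payload that could be pushed into an existing care-management platform or EHR. The field names and schema are illustrative assumptions, not ClosedLoop’s actual format.

```python
# A hypothetical shape for a "patient health forecast": per-patient risks
# with plain-language reasons, serialized so it can surface inside software
# clinicians already use ("no new screens"). Fields are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class RiskItem:
    outcome: str          # what the patient is at risk for
    percentile: float     # risk relative to the population
    reasons: list[str]    # why the model flagged this patient

@dataclass
class PatientHealthForecast:
    patient_id: str
    as_of: str
    risks: list[RiskItem]

forecast = PatientHealthForecast(
    patient_id="mrn-12345",
    as_of="2021-04-30",
    risks=[RiskItem("fall_related_injury", 99.2,
                    ["recent ED visit for dizziness", "age 84", "lives alone"])],
)

# Serialize for whatever system the clinician already logs into.
print(json.dumps(asdict(forecast), indent=2))
```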
I know you’re biased, but how big a deal is it for your company to win this contest?
When we were selected for the final seven, [former CMS Administrator] Seema Verma’s quote was that the seven finalists “represent the finest the AI community has to offer.” So as you can imagine, I have that tattooed on my arm now.
How is the company using AI now to serve health clients?
We have a machine learning automation platform built specifically and only for health care. You can think of that as a workbench for data scientists to build and deploy new predictive models better, faster, cheaper. We offer a catalogue of models for common health care use cases. That is not a catalogue of pre-trained models, nor is it intellectual property we’re trying to protect. Rather, we think of each of the models as a template that can be re-trained on a local population and that can be adapted to the data streams available for a given organization.
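A minimal sketch of the template idea, assuming a scikit-learn-style pipeline: the model definition is shared, but weights exist only once a customer fits it to its own population. The feature columns and the readmission use case here are hypothetical illustrations.

```python
# Sketch of a "model template": the pipeline definition is reusable,
# but it ships untrained and each organization re-trains it on local
# data using whichever columns its data streams actually provide.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingClassifier

def readmission_template() -> Pipeline:
    """An untrained template for a common use case; no pre-trained weights."""
    return Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("model", GradientBoostingClassifier()),
    ])

# Each customer fits the template to its own population (toy data here).
local_data = pd.DataFrame({
    "age": [71, 64, 88, 59],
    "prior_admits_12mo": [2, 0, 4, 1],
    "num_medications": [9, 3, 14, 5],
    "readmitted_30d": [1, 0, 1, 0],   # local outcome labels
})
model = readmission_template().fit(
    local_data.drop(columns="readmitted_30d"),
    local_data["readmitted_30d"],
)
```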
What type of clients are you working with?
The primary targets for us are risk-takers — health care organizations that are trying to drive down the cost of care. That’s often payers and risk-taking providers, so accountable care organizations, clinically integrated networks, anybody focused on alternative payment models. That’s kind of our sweet spot. We also work with digital health companies looking to use AI and embed it in their own products.
How do you guard against bias, or further disadvantaging marginalized populations, in making predictions about who needs more care?
That was a big focus for us in this challenge, just picking the right outcomes. You can’t predict cost. You have to predict medical problems such as fall-related injuries, hospital-acquired infections, or avoidable ER utilization, because each of those events is specifically intervenable. The broader conversation around bias and fairness and auditability is so important. For every patient and every prediction made every day, we can tell you why they were flagged as high-risk and how that changed over time. You can’t be fair if you’re not transparent. If you can’t explain why someone is high-risk, how are you ever going to allow someone to look at that and judge the fairness of those algorithms?
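One way to picture that kind of per-prediction transparency is a model whose per-feature contributions can be logged alongside every flag. The sketch below uses a linear model, where contributions to the log-odds are exact, as a stand-in; it is an assumption for illustration, not ClosedLoop’s actual explainability machinery.

```python
# Sketch of auditable predictions: for every flagged patient, record
# which features pushed the risk up, so the flag can be reviewed later.
# Toy features and labels; a linear model makes contributions exact.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["prior_falls", "ed_visits_6mo", "polypharmacy", "age_over_80"]
X = np.array([[1, 2, 1, 1],
              [0, 0, 0, 0],
              [0, 3, 1, 0],
              [1, 0, 0, 1]], dtype=float)
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)

def explain(row: np.ndarray) -> list[tuple[str, float]]:
    """Per-feature contribution to the log-odds, largest risk drivers first."""
    contribs = clf.coef_[0] * row
    return sorted(zip(features, contribs), key=lambda t: -t[1])

# An audit-trail entry: the prediction plus the reasons behind it.
for name, c in explain(X[0])[:3]:
    print(f"{name}: {c:+.2f}")
```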