Despite the importance of the overnight period in inpatient care, it remains challenging to obtain objective, quantitative data on whether a patient had a good night. And, to be provocative: how do we even define what is meant by a “good night?” While there is growing awareness that patients very often have sleepless nights in the hospital, actionable data to improve sleep and rest remain scarce.
It was in this context that I was honored to give a presentation with Dr. Juliessa Pavon at Duke Geriatric Medicine Grand Rounds. Duke and LookDeep Health have a partnership on artificial intelligence (AI) computer vision (CV) monitoring of patient activity in the general hospital. In our talk, we discussed our initial research study in the Duke MICU. We also discussed our next step in the partnership: a “Living Lab” pilot on a medical stepdown unit using LookDeep’s real-time hardware and cloud-based software.
Dr. Pavon kicked off our talk with a mini-case report that highlighted a common clinical problem: a lack of information about the patient during the overnight period. The mini-case report focused on a patient with delirium who was reported to have had “a bad night” and had received as-needed psychotropic medications overnight. For a patient with delirium, “a bad night” could mean any number of things, including difficulty sleeping or agitated behaviors.
The clinical example framed a key question: what are the ways in which AI/computer vision can help physicians better understand their patients in order to deliver better care? Indeed, this question is at the core of the partnership between LookDeep and Duke. Our initial study in the partnership focused on AI/CV monitoring of medical intensive care unit (MICU) patients with ICU delirium. We previously shared some of this work at ATS’22 and MLHC’22 and look forward to sharing new results at ATS’23. Our initial MICU study enrolled 22 patients and generated 2200 hours of video. As it relates to the clinical scenario presented by Dr. Pavon, the data were used to generate two different readouts that could be used on morning rounds to better understand the overnight period. First, we generated radar/spider plots that summarized patient activity along three dimensions and room environment along two dimensions (see above). Second, using AI/CV bedside activity data, we generated video summaries that represent the busiest 5-minute intervals for each hour in the overnight period.
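To make the second readout concrete, here is a minimal sketch of how the busiest 5-minute interval in each hour might be selected. It assumes a hypothetical simplification of the AI/CV output: a series of timestamped per-frame activity scores. The function name and data shape are illustrative, not LookDeep’s actual pipeline.

```python
from datetime import datetime, timedelta

def busiest_intervals(activity, interval_min=5):
    """Given (timestamp, score) samples, return the start time of the
    busiest `interval_min`-minute window within each hour."""
    # Sum scores into fixed bins keyed by (hour, bin index within hour).
    bins = {}
    for ts, score in activity:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        bin_idx = ts.minute // interval_min
        bins[(hour, bin_idx)] = bins.get((hour, bin_idx), 0.0) + score
    # Keep the highest-scoring bin per hour.
    best = {}
    for (hour, bin_idx), total in bins.items():
        if hour not in best or total > best[hour][1]:
            best[hour] = (bin_idx, total)
    # Convert bin indices back to interval start times.
    return {hour: hour + timedelta(minutes=bin_idx * interval_min)
            for hour, (bin_idx, _) in best.items()}
```

The selected start times would then index into the raw video to assemble the per-hour summary clips described above.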
We then talked about our ongoing work on our “Living Lab” pilot in a medical stepdown unit at Duke. The pilot will involve deployment of our real-time smart cameras, with the ultimate goal of care redesign using AI/CV data and inpatient telemedicine. The pilot builds on ethnographic research on nursing handoffs that we presented at ATA’23. We closed by pulling back a little and taking a big-picture view of virtual medicine as it transitions from niche use cases to a ubiquitous feature of medical care.
One driver of ubiquity will be the kind of ambient data and AI analysis that we discussed. Another driver will be the tele-monitoring and tele-visits enabled by smart cameras in every hospital room. Together, these two drivers will deliver sufficient digital fidelity to qualify virtual medicine as a domain of care complementary to physical space.
I couldn’t be more proud to work with Dr. Pavon and her colleagues at Duke, and I look forward to exploring new horizons in AI/CV and virtual medicine.