If a tree falls in the forest and no one is around to hear it, did it really fall?
In hospitals, some version of this question plays out every day. A patient falls – from a chair, from a bed, in their room – and by the time anyone gets there, the fall is over. The nurse arrives, finds the patient on the floor, and starts piecing it together. Are they hurt? Did they hit their head? How long were they down?
We almost never witness the actual moment. We see the aftermath. We assess the patient, chart it, report it, debrief it. But the fall itself – the sequence of events that took a patient from safe to not safe – is almost always invisible. It happened, but nobody saw it.
Until now.
What We've Always Known – and What We Couldn't Prove
Every nurse who has monitored high-fall-risk patients on a virtual sitting platform knows the feeling: a patient in a chair just feels riskier. They shift, they lean, they try to stand – and it happens fast. When a patient is in bed, the rails and the height buy you time. You have seconds to respond. With a chair, a patient can be on the floor before the sitter finishes a sentence.
But knowing something in your gut is different from being able to measure it. And there's a reasonable counter-argument that's hard to dismiss: patients who are in chairs tend to be more mobile. They're up. They're doing better. Does that make them less likely to fall? Maybe they're more alert, more compliant when the sitter talks to them. And without knowing the basics – how much time patients actually spend in each position – you just can't compare. You can count chair falls and bed falls, but you can't calculate a rate without a denominator.
That's the gap. And it's existed for as long as virtual sitting programs have been around.
Now We Can Measure It
We just published a preprint of a new study – building on our earlier peer-reviewed foundational work – that, for the first time we're aware of, answers this question with real data. We wanted to get it out ahead of AONL next week because the findings matter. The study covers about 17 months of data across ten hospitals, about 3,980 monitored patients, and close to 293,000 patient-hours of continuous observation.
Because our AI monitoring system tracks patient position continuously – not from occasional chart checks, but every minute of every hour – we finally have the denominator. We know how much time patients spent in chairs. We know how much time they spent in beds. And we can compare what happened during each.
The short version: chairs are significantly riskier. The fall rate for chair-seated patients was about four times higher than for bed-bound patients. After controlling for things like time of day, the adjusted risk was about 2.3 times higher. (For those who want the precise numbers and methodology, the full paper is here.)
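Having the denominator is what makes this comparison possible at all. As a minimal sketch of the arithmetic – the counts below are invented for illustration; the study's actual figures are in the paper – a position-specific fall rate is just falls divided by observed hours in that position:

```python
def falls_per_1000_hours(falls: int, patient_hours: float) -> float:
    """Fall rate normalized by observed time in a given position."""
    return 1000 * falls / patient_hours

# Hypothetical counts, for illustration only (not the study's data):
chair_rate = falls_per_1000_hours(falls=8, patient_hours=20_000)
bed_rate = falls_per_1000_hours(falls=27, patient_hours=273_000)

ratio = chair_rate / bed_rate  # a roughly 4x raw ratio, before adjustment
```

Counting falls alone would miss this entirely: if patients spend far more hours in bed than in chairs, raw bed falls can outnumber chair falls even when the chair is the riskier place to be.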
That adjusted number – about 2.3 times – actually surprised us. We thought it might be higher. But that's the value of measuring instead of guessing: patient mobility, compliance, and attentiveness to sitter instructions all factor in. This isn't a measure of how dangerous chairs are in general – it's a measure of how well patients in chairs do under a virtual sitting program. And even in that context, more than twice the fall rate is meaningful.
And it's not just falls. The data shows that chair patients require substantially more attention from virtual observers. Per 100 adjusted hours, observers initiated 71.1 talk events for chair patients versus 54.4 for bed patients – and triggered alarms at nearly double the rate: 18.1 versus 9.3. Virtual observers aren't imagining it. Chair patients really do keep them busier.
Here's why this matters practically: if your virtual sitting program is watching 20% more patients in chairs than the hospital down the street, your fall metrics are going to look different – not because your program is worse, but because the risk profile of the patients you're watching is different. Without knowing the denominator, you'd never be able to tell.
Beyond the Numbers: What Actually Happens
The study tells us chairs are riskier. But the study can't show you why.
Because we have AI-powered continuous monitoring, we can do what no one has been able to do before – we can actually watch the moment the fall happens. In privacy-protected, blurred video, we can see the sequence of events that took a patient from sitting safely to being on the floor. We are in the forest. We can see and hear the tree fall.
And when we looked at chair falls across our data, one thing jumped out immediately: the footrest.
In six out of seven direct chair falls we reviewed, the footrest was involved. The pattern is remarkably consistent. The patient leans forward. They put weight on the footrest – the way you'd put weight on a rigid seat, because it looks stable. It looks like it's part of the chair. But it's not. It's hinged, or spring-loaded, or simply not designed to hold weight from that angle. It gives. And the patient goes with it.
The whole thing takes just a few seconds.
This is what makes this finding so powerful. It's not abstract. It's not statistical. You can watch it happen. And once you've seen it, you can't unsee it – because you'll recognize the setup every time you walk into a room where a patient is leaning forward in one of these chairs.
What You Can Do About It
We want patients in chairs. Mobility matters – for respiratory function, circulatory health, delirium prevention, recovery, and everything we know about why keeping patients active and upright leads to better outcomes. The answer here is absolutely not to keep people in bed. It's to recognize that chairs introduce a specific, identifiable risk – and to address it directly.
See How It Affects Your Program
To make this tangible, try the simulator below. Set it to match your virtual observation program – how many patients per monitor, what percentage are in chairs, and how effective your fall prevention program is – and see how the chair mix changes your expected outcomes.
Virtual Observation Fall Risk Simulator
See how chair mix and program effectiveness interact
How to read "effective patients": If you're watching 12 patients but some are in chairs at 2.35× the risk, your monitor's workload is equivalent to watching more patients who are all in beds. This helps you think about sitter capacity and attention.
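That "effective patients" figure is just a weighted head count. A minimal sketch of the idea – the 2.35× multiplier is the study's adjusted chair-vs-bed risk; the function name is our own:

```python
CHAIR_RISK_MULTIPLIER = 2.35  # adjusted chair-vs-bed fall risk from the study

def bed_equivalent_load(n_bed: int, n_chair: int) -> float:
    """Monitor workload expressed as if every patient were in a bed.

    Each chair patient counts as ~2.35 bed-equivalent patients.
    """
    return n_bed + n_chair * CHAIR_RISK_MULTIPLIER

# 12 patients on one monitor, 4 of them in chairs:
load = bed_equivalent_load(n_bed=8, n_chair=4)  # 17.4 bed-equivalents
```

Twelve actual patients, but a workload closer to seventeen – which is the kind of gap that stays invisible unless the chair-to-bed mix is tracked.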
Notice what that purple box tells you. A 90% effective program with a significant share of chair patients delivers the same fall rate as a less effective program with all bed patients. Chairs are harder to manage – and unless you're measuring the chair-to-bed ratio, you'd never know it.
The chart above tells the story simply: start with a program that's 90% effective at preventing falls. As the percentage of patients in chairs increases, that program's effective performance degrades – not because the program got worse, but because chairs are harder to protect. At 50% chair patients, your 90% program performs like an 83% program. The chairs are quietly eating your results.
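The degradation in that chart falls directly out of the risk multiplier. Here is a sketch of the arithmetic – our own formulation, assuming the study's 2.35× adjusted risk and an "effectiveness" defined as the share of would-be falls prevented:

```python
CHAIR_RISK_MULTIPLIER = 2.35  # adjusted chair-vs-bed fall risk from the study

def equivalent_effectiveness(effectiveness: float, chair_fraction: float) -> float:
    """Effectiveness of a hypothetical all-bed program that would produce
    the same expected fall rate as a mixed chair/bed program."""
    # Residual falls relative to an unprotected all-bed population:
    residual = (1 - effectiveness) * (
        (1 - chair_fraction) + chair_fraction * CHAIR_RISK_MULTIPLIER
    )
    return 1 - residual

# A 90%-effective program with half its patients in chairs:
print(round(equivalent_effectiveness(0.90, 0.50), 4))  # 0.8325
```

That 0.8325 is the "performs like an 83% program" figure: the program itself hasn't changed, but the blended population it protects is harder to keep safe.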
A Few Important Caveats
We want to be upfront about what this study can and can't tell us. The dataset is large – about 3,980 monitored patients and nearly 293,000 patient-hours across ten hospitals. But falls are rare events, and that's actually the core challenge. Even in a dataset this size, the number of actual falls is relatively small. That means our estimates, while directionally clear, carry some uncertainty. The difference could be somewhat larger or smaller than the 2.3× we measured.
This is also an observational study across ten hospitals. We don't have data on individual patient acuity, mobility scores, or staffing levels – all of which could influence who ends up in a chair and how closely they're watched. It's possible that sicker patients are placed in chairs more often, which would make chairs look riskier than they inherently are. We can't fully separate those factors yet.
We also don't have data on the specific types, brands, or models of recliner chairs in use at each hospital, or whether all chairs were in proper working order at the time of the falls. Chair design and condition – including how footrests lock, how easily they give under weight, and how well they've been maintained – could meaningfully influence fall risk.
The footrest finding is striking and consistent – 6 out of 7 – but that's still a small number of events. We're confident in the pattern, but it needs to be validated with more data before it becomes the basis for formal policy changes. For now, we think it's strong enough to warrant attention and conversation – reminding patients not to put weight on the footrest is an easy intervention.
The bottom line: this study gives us the first real quantitative look at something nurses have felt for a long time. The direction is clear. The specifics will sharpen as more data comes in and future research builds on this. We wanted to share what we've found now, ahead of AONL, where the world's leaders on this topic – nurses – are gathering to understand how to better protect patients and the nurses who care for them.
The Bigger Picture
The AI isn't solving this problem on its own. What it's doing is giving us something we've never had before: a way to see what's actually happening and to measure it precisely enough to act on it.
Many nurses have long sensed that chairs are riskier. Now we can put numbers around it. The combination of data and anonymized video lets us understand the problem – and act on it – in new ways.
We encounter situations like this every day. And because we finally have continuous eyes on the patient – a record of what happened even when nobody was physically in the room – we can combine that data with good old-fashioned clinical judgment. The technology provides the lens. The nurses and care teams provide the wisdom to know what to do with what they see.
We hope this helps you think about your own virtual observation program a little differently. Not to fear chairs – but to respect what they ask of us, and to give your patients the specific information they need to stay safe.