OH, LENGTH OF STAY, HOW SHALL I REDUCE THEE?
- Andrew Trees
- 03/25/2019
Clinical variation has become a central focus in healthcare, thanks in part to the unprecedented access to data made possible by EHRs. This information has inspired and, frankly, overwhelmed hospitals seeking to pinpoint inefficiencies and optimize pathways and processes. Clinicians striving to deliver quality are also overwhelmed. Some areas are simple to change: check this box, prescribe this drug at this time, eliminate this duplicative test. Yet the vast majority of opportunity is more complex, subtle, even irreducible. And, amongst important-yet-vexing quality improvement goals, reducing length of stay (LOS) has to be the granddaddy of them all.
Most clinicians responsible for inpatient hospital care are accountable for reducing LOS. Under the DRG-based payment system, each admitted patient represents a fixed revenue stream for the hospital based on their clinical profile, while every good and service used to restore that patient to health is a direct cost to the hospital.
These inpatient care costs add up to $377.5 billion in the U.S. annually.
For most medical cases, the most costly part of the stay is the time the patient stays in the hospital, with all the implied services, risk exposure, and opportunity costs.
From a clinician’s perspective, getting their patient healthy and home as soon as possible is always the goal. Every discharge decision involves weighing risks and benefits: Is the patient stable? Will the patient be safer in the hospital or somewhere else? Will a slightly earlier discharge increase the risk of readmission? Most hospitalists will grant that clinicians differ in practice style, willingness to discharge, even overall efficiency. Still, most clinicians view length of stay as incidental to each patient’s circumstances: the accumulation of many little things mostly outside their control. And they are usually right.
Thus, clinicians can be frustrated when asked to reduce LOS. “How?” they will ask.
THE PROBLEM: length of stay feedback attributed to one clinician
For purposes of reporting and benchmarking, LOS is conventionally attributed to the discharging clinician. There are two dooming problems with this approach, at least in the context of feedback, learning, and performance improvement:
- Many clinicians (and many other factors) are involved in a given patient’s stay
- LOS is an outcome, not an action; good feedback always points to actions
Patients are initially admitted by one hospitalist, yet many are eventually “handed off” to other hospitalist attendings, particularly for longer stays. Moreover, other clinicians — consulted specialists, APPs, residents — usually play a role in diagnostic and treatment decisions. In fact, over the course of an average hospital stay, a patient sees approximately 17 health professionals, with an average of 6 different physicians. These clinicians all take actions, interpret information, make decisions, and coordinate care as best as they can. Yet, were any of these individual clinicians to act or decide sub-optimally, it might unnecessarily increase LOS for the entire team — and, of course, the patient.
The issue is that, without transparency into the practice patterns associated with longer stays, clinicians cannot even begin to learn what changes would optimize LOS. Most clinicians relish data that helps them improve, yet for such a fundamental goal in hospital medicine, such data is exceedingly rare.
This lack of useful data is made worse by the provision of incomplete or ineffective data. Most organizations give hospitalists some data about their group’s average LOS, but this clouds individual improvement opportunity. Others offer “individualized” LOS yet use single provider attribution to do so, crediting an entire patient stay to the discharging or billing clinician. This is typically an incomplete, inaccurate — even opposite — reflection of an individual’s performance. And again it fails the test of useful feedback, because it does not reflect any single clinician’s actual behavior, nor the shared ownership and myriad interwoven decisions executed by that team (of 6 physicians on average).
When faced with data they (rightly) feel is not their own, it is easy for clinicians to perceive all data on clinical variation as flawed, unreliable, and unhelpful. When clinicians reject such data, hospitals lose valuable opportunities to engage clinicians with transparent feedback and interventions integral to improving practice habits.
ONE SOLUTION: accelerated learning toward things that reduce length of stay
There is no silver bullet for earning clinician trust, facilitating learning and feedback, and enabling performance improvement. It requires acing unglamorous details across the spectrum of medical decision-making and hospital workflows, plus a big dose of empathy for the “Why?” that led us all to work in healthcare.
We at Agathos have found improving transparency via these three core methods to transform the quality improvement discussion around length of stay:
1. Reliable attribution
If LOS were caused only by decisions made at the time of discharge, attribution would be easy. The discharging provider could simply review off-target stays and discover the underlying actions. However, in hospital medicine, this is never the case. In fact, the discharging physician is more likely the hero of the story. When a patient is discharged, they and the care team win!
Imagine running the final leg of a relay race when your teammate drops the baton on the first leg. They and everybody else run their hearts out, and when you get the baton you sprint as if your life depended on it … but you still finish far behind the winner, and after the race, you are the one singled out by your coach for taking too long. How demoralizing and uninsightful would that be?
Attribution of LOS to the discharging clinician makes no more sense. Instead, when benchmarking individual LOS performance, we strongly advise these best practices:
- Use a benchmark (e.g., GM-LOS, predictive analytics) to calculate observed (O) vs. expected (E) LOS for each stay, and use each stay’s O minus E difference (O-E) as the unit for calculation. Some tracks run quicker. Some patients are sicker.
- Attribute that O-E unit to all providers associated with that stay. Admitting, attending, discharging, consulting, APP, resident … is a given NPI attached to a given stay? Let them know: they own it, at least in part. At Agathos, we call this “multi-provider attribution,” and it is a useful concept for outcomes and the many decisions influenced by multiple clinicians. It allows for more complete benchmarking, accelerated peer collaboration, and learning after the event.
- Calculate individual LOS metrics for each provider by weighted-averaging the O-E of every stay attributed to them, using an influence weight for each stay. This weight should reflect the variable influence multiple providers have on a given stay, and that a given provider has on multiple stays. At Agathos, we start with a relatively simple heuristic whereby the weight scales linearly with the percent of days each provider participated in the stay; more complex models may help, mindful of tradeoffs with comprehensibility. The main principle: the more influence a provider had, the more that stay’s O-E should count, as in the sketch after this list.
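To make the arithmetic concrete, here is a minimal sketch of the influence-weighted O-E calculation described above, in Python. The stay records, provider names, and field layout are illustrative assumptions, not a reference implementation of Agathos’s model.

```python
from collections import defaultdict

# Hypothetical stay records: observed (O) and expected (E) LOS in days,
# plus the number of stay days each provider participated in.
stays = [
    {"obs": 6.0, "exp": 4.2, "provider_days": {"dr_x": 6, "dr_y": 2}},
    {"obs": 3.0, "exp": 3.5, "provider_days": {"dr_y": 3}},
    {"obs": 9.0, "exp": 5.0, "provider_days": {"dr_x": 4, "dr_z": 5}},
]

def influence_weighted_oe(stays):
    """Average O-E per provider, weighted by share of stay days covered."""
    num = defaultdict(float)  # sum of weight * (O - E)
    den = defaultdict(float)  # sum of weights
    for stay in stays:
        oe = stay["obs"] - stay["exp"]
        total_days = max(stay["obs"], 1.0)
        for npi, days in stay["provider_days"].items():
            weight = days / total_days  # linear in percent of days covered
            num[npi] += weight * oe
            den[npi] += weight
    return {npi: num[npi] / den[npi] for npi in num}

print(influence_weighted_oe(stays))
# dr_x carries most of the long third stay; dr_y's solo stay beat expectations
```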
2. Real-time case examples
By themselves, more objective indicators of individual contribution to LOS are not sufficient for individualized learning. As with sports analytics, knowing a player’s win-loss ratio may help a coach recruit a better team (or a player negotiate a fairer compensation package), yet it hardly helps a player improve mid-season. For that, players and coaches need to be able to zoom in on individual games, review the film … and in the context of the racing metaphor, stare down that doomed handoff where the baton was dropped (as difficult as this may feel).
Recent, relevant case examples, attributed proportionally to all providers who had influence, can be a highly effective way for clinicians to quickly recall, reflect upon, and collaborate around the cases where learnings are most likely to emerge. In hospital medicine, given the scarcity of time and the sheer volume of difficult cases, this rarely happens as often as it could or should. At Agathos, we find that attaching recent, higher O-E LOS case summaries to the LOS metric accelerates learning, as does filtering for provider influence and key diagnostic areas, and listing the whole clinical team involved in the stay.
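As a rough illustration, assuming stay records that carry a diagnostic area and per-provider influence weights like those in the sketch above, a case-surfacing filter might look like this (the field names and thresholds are hypothetical):

```python
def cases_for_review(provider, stays, dx_area=None, n=3, min_weight=0.25):
    """Surface a provider's highest O-E stays for post-discharge review.

    min_weight keeps only stays where the provider had meaningful influence;
    dx_area optionally narrows to a diagnostic area (e.g., "heart failure").
    """
    eligible = [
        s for s in stays
        if s["weights"].get(provider, 0.0) >= min_weight
        and (dx_area is None or s["dx"] == dx_area)
    ]
    # Highest observed-minus-expected stays first
    return sorted(eligible, key=lambda s: s["obs"] - s["exp"], reverse=True)[:n]

# Hypothetical usage: surface Dr. Y's longest-over-expected heart failure stays
stays = [
    {"dx": "heart failure", "obs": 8.0, "exp": 5.1, "weights": {"dr_y": 0.6}},
    {"dx": "heart failure", "obs": 4.0, "exp": 4.5, "weights": {"dr_y": 1.0}},
    {"dx": "sepsis",        "obs": 7.0, "exp": 6.0, "weights": {"dr_y": 0.1}},
]
print(cases_for_review("dr_y", stays, dx_area="heart failure"))
```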
One of the most common questions we get is, “Would it not be better to predict when a given stay is going off track and tell clinicians, before it happens, what they can do to prevent it?” Certainly. The best feedback is the feedback that never has to be delivered. However, for a variety of reasons, prediction is impractical as a sole strategy and can even be antithetical to learning.
- First, each patient is by definition a sample size of one and cannot be statistically appraised against a clinician’s broader practice patterns or influence on outcomes. If a physician is informed that a patient is off-target, the knee-jerk response will often be, “... and for good reason!”
- Second, when a patient is off-target and the clinician is convinced they should do something about it, what to do is not always obvious; the problem of actionability is not solved by simply turning analytics from descriptive to predictive.
- Third, even when a stay is at risk of going off-target and a remedial action is prompted and taken (e.g., nudging case management), learning has not necessarily happened. In clinical edge cases, or where systemic bottlenecks occur, the clues about how a long stay could have been avoided often emerge only post hoc.
- Finally, when a case is off track and a human or process error is at fault, the action, inaction, or decision has often already happened.
Ultimately, streamlined case review of off-target stays immediately after discharge can be a powerful learning tactic and supplement to point-of-care intervention.
3. Relevant action-level insights
This leads to the best way data can be leveraged to give clinicians useful feedback on length of stay: which practice patterns are associated with justifiably shorter stays, and how do mine compare with my peers’?
Let’s again revisit the racing example. When does a baton drop go from a fluke to a pattern? Or more subtly, which turns do most runners round most efficiently? Where do herculean bouts of energy yield the strongest finish times? Where might the track need to be repaired?
All of these questions are best answered statistically over many races, and insights into these events could never emerge if the only feedback runners received were their finish times (let alone only the final runner’s time!).
Fortunately, for all the attribution challenges of hospital medicine, a kaleidoscope of data forms a treasure trove for the data scientist and a well of learning potential for hospitalist groups dedicated to taking their craft to the next level. And, again, contrary to some popular wisdom, many of the most transformative learnings can only occur after the events in question, in the context of peers over many cases, and away from the point of care. Perhaps even entirely off shift.
Consider a hospitalist group where three physicians have higher O-E multi-provider influence-adjusted LOS than their peers. How might we further help them? Well, somewhere on the backend, algorithms have determined that over the same time period, Dr. X ordered palliative care consults about half as often for end-stage patients as did his peers, including five non-consults this last month associated with off-target stays. Dr. Y has a pattern of particularly low discharge efficiency (i.e., discharges per days attended) for heart failure cases, with case examples provided from three recent longer stays where they were attending yet ultimately handed off. And just as Dr. Z receives an insight on a pattern of ordering OT and PT consults later than her peers, she notices a peer, Dr. A, orders them quickest, and has one of the best O-Es of the group. She finds time in the break room to trade notes.
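To suggest how such patterns might be surfaced statistically, here is a minimal sketch that flags providers whose rate of some practice pattern (say, palliative care consults per eligible patient) deviates notably from the group. The provider names, rates, and threshold are invented for illustration; a production model would also adjust for case mix and sample size.

```python
import statistics

def flag_outliers(rates, z_thresh=1.5):
    """Return {provider: z-score} for providers whose rate for a practice
    pattern deviates notably from the group mean."""
    mean = statistics.mean(rates.values())
    sd = statistics.stdev(rates.values())
    if sd == 0:
        return {}
    zs = {npi: (r - mean) / sd for npi, r in rates.items()}
    return {npi: z for npi, z in zs.items() if abs(z) >= z_thresh}

# Hypothetical consult rates per provider (consults / eligible patients)
rates = {"dr_x": 0.15, "dr_b": 0.34, "dr_c": 0.36,
         "dr_d": 0.38, "dr_e": 0.40, "dr_a": 0.41}
print(flag_outliers(rates))  # only dr_x surfaces, as a low outlier
```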
Within the EHR hides precious and precise data about variation in clinical practice patterns, as owned by individual providers, and its potential length of stay impact. From both the hospital’s and the clinician’s perspective, the opportunity owned by clinicians may be a minority of the whole, yet it is very real. When data is properly broken down and transformed into transparent, reliable, action-level feedback, clinicians are empowered to play their part in evolving the practice patterns that most influence length of stay.
“I want to reduce length of stay ... but what can I do differently?” With the right data and the right transparency, we can invite clinicians to look closer and see.
This post was written by Andrew Trees, Elissa Baker, Laura Prescott, and Jonathan Pitocco and was updated on July 25, 2022.