A Brief Intro to Clinical "VR" ("Variation Reduction")
- Michael van Duren, MD, MBA
Who would dare tell a doctor how to practice medicine? While many other industries have embraced the benefits of standardization, the practice of medicine is still built on the pillars of professional autonomy and individual clinical judgment. For more tenured physicians this might sound like, “Nobody has the right to tell me what to do!” Younger physicians, more freshly exposed to team-based care and evidence-based medicine, are often more open to feedback but will assume, “We all pretty much practice the same way, don’t we?” When physicians are given feedback about how their individual practice patterns differ in common clinical circumstances, such as the frequency and type of lab tests and imaging procedures ordered, they are often shocked by the striking amount of variation. Differences of three- to four-fold between the highest and lowest of whatever rate is being measured are common.
Physicians are highly motivated to give the right care, and they have a bias for assuming that is just what they are doing. Thus, seeing their name at the high or low end of a graph with such a wide distribution can be profoundly distressing for any clinician. Observing themselves as an outlier compared to their peers is an alarming wake-up call that grabs their full attention: “How can this be? The data must be wrong! What is missing? What must I do to be more in line with my peers?”
Many mistakenly assume the only way to motivate doctors is to attach financial incentives. This underestimates their intrinsic human motivation to give the right care. Moreover, positive appraisal from their peers is a precious good that cannot be quantified. Reducing clinical variation saves time and eliminates the cost of unwarranted tests, procedures, and other services. All this is possible through simply providing individualized, action-level feedback with peer comparisons.
WHAT DOES CLINICAL VARIATION LOOK LIKE?
Let’s view a more involved example of clinical variation through the lens of two patients with similar health profiles. Both Patients A and B present to the emergency department (ED) with severe chest pain, and each is admitted with a diagnosis of pulmonary embolism (PE). Patient A’s physician has recently learned of new evidence-based guidelines for acute PE, which advise a shorter hospital stay followed by outpatient treatment, and decides the patient is a suitable low-risk candidate. Patient A is discharged after five days.
Patient B’s physician is more conservative in her care approach and is not yet fully convinced by the data supporting an earlier discharge. Wishing to avoid potential liability for discharging a PE patient too early, she errs on the side of caution with a longer hospital stay. Patient B spends several weeks in the hospital, with frequent sleep interruptions for lab tests, not to mention the valuable time away from work and family.
Patients A and B are an example of clinical variation between two patients with similar diagnoses. Each is treated based on contextual evidence and physician autonomy, and both eventually recover, yet Patient A has a significantly better experience and produces significantly better practice economics for the hospital.
WHY DOES CLINICAL VARIATION HAPPEN?
Before we can reduce something as complex as unwarranted clinical variation, it is important to understand its origins. There are many reasons clinical variation exists, and while much amounts to human or system error, below are a few more nuanced reasons.
Physicians autonomously make care decisions, leveraging their education and experience with guidelines, best practices, and clinical evidence to determine the appropriate diagnoses and courses of treatment. The challenge is an overwhelming and constantly evolving amount of information and content, impossible for any human to fully learn, remember, and apply. And while pathways and protocols can help physicians decipher vast amounts of information, even strong evidence has variable levels of confidence and tradeoffs, and too many decision forks can both breed false precision and undermine adherence. Moreover, medicine is both science and art; subjective judgment and instinct toward individual patients’ goals and situations must be used in concert with evidence. Thus physicians often find themselves drowning in information yet ironically unable to recall the best clinical practice for a given situation.
This overwhelming amount of information and data, combined with a culture of respecting physician autonomy, can make it challenging for physicians to pinpoint variations in care. It is harder still for them to acknowledge such variations, connect them to outcomes, and prioritize change.
All human beings crave feedback on their performance relative to others, and one could argue that physicians—due to the competitive rigors of their education—display this trait even more than average. Medical students and residents receive regular quantitative feedback with test scores and evaluations on their clinical clerkships. The licensing exams and specialty certification boards also come with numerical scoring.
But once physicians are out in practice, feedback screeches to a halt. The only feedback most physicians are exposed to, if any, consists of metrics like patient satisfaction or patient outcomes such as average length of stay or readmission rates, all of which have little direct connection to individual physicians, their practice patterns, and their care decisions. Because attempts to attribute such metrics to individuals are poor or non-existent, these feedback measures are generally disregarded as unhelpful.
Clinicians sincerely looking for feedback in order to improve their performance are sadly left in the dark, with nothing more useful than group averages as a motivational signal. Or worse, when an attempt is made to individualize feedback, it is not actionable: “Here is your average cost per case. It is higher than the group average. We don't know why, but you are a smart person, so you go figure it out.”
Gaps between evidence and practice
Great evidence and guidelines cannot be effective if they are not followed or if they do not allow for an appropriate degree of physician autonomy. “Knowing-doing gaps” (i.e., between guidelines and their real-world application) are an important aspect of clinical variation.
The challenge is that few physicians would admit to practicing against proven guidelines. Rather, there are inevitable ambiguities regarding the clinical and patient circumstances in which given evidence applies, the extent to which it should be carried out, and where the “art” of medicine (e.g., individual experience, expertise, style, and recognition of edge cases and unique patient circumstances) should override it.
Additionally, physicians do not always agree on what the evidence suggests for particular situations. One example is MRI use for TIA patients: do the varying courses amount to overuse, or to consistently appropriate care? It depends on whom you ask, even among experts. Every physician inevitably develops their own conclusions, frameworks, and experiences that tell them their practice is correct, and often they are right. Best-practice guidelines are neither streamlined nor universal, and varying approaches to administering care often produce similar outcomes. While this can be positive, multiple methods of care can also bring inefficiencies, quality risks, and negative cultural impacts.
HOW CAN WE REDUCE CLINICAL VARIATION?
Simplify and individualize feedback.
The key to creating useful feedback is making it 100% specific to the individual who is responsible for the action being measured. For a physician to take a number seriously, they need to believe beyond a shadow of a doubt that the number applies to them. Here are a few such examples:
- Count of admission orders where you ordered telemetry
- Count of times where you ordered CK-MB in addition to troponins
- Count of metabolic panels where you ordered “complete” rather than “basic”
In each of these situations, there will be zero doubt about the attribution of the responsible clinician, because the measure is assigned to the exact person who wrote the order. Such granularity can be tricky with claims data, where an attending is assigned by coders. Use of the clinical records themselves allows for attribution to residents, advanced practice clinicians, or consulted specialists where appropriate. Moreover, hand-offs between clinicians are not a problem once one can identify the precise individual who wrote the order.
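To make order-level attribution concrete, here is a minimal sketch in Python. The field names (`ordering_clinician`, `order_type`) and the sample records are hypothetical, not a real EHR schema; the point is only that each count rolls up to the exact person who wrote the order.

```python
from collections import Counter

# Hypothetical order-level records, as they might come from an EHR export.
# Field names are illustrative only.
orders = [
    {"ordering_clinician": "Dr. A", "order_type": "telemetry"},
    {"ordering_clinician": "Dr. A", "order_type": "CK-MB"},
    {"ordering_clinician": "Dr. B", "order_type": "telemetry"},
    {"ordering_clinician": "Dr. A", "order_type": "telemetry"},
    {"ordering_clinician": "Dr. B", "order_type": "basic metabolic panel"},
]

def order_counts(orders, order_type):
    """Count orders of a given type per individual ordering clinician."""
    return Counter(
        o["ordering_clinician"]
        for o in orders
        if o["order_type"] == order_type
    )

# Each telemetry count is attributable to the clinician who signed the order,
# so a peer comparison built from it leaves no room for attribution doubt.
telemetry = order_counts(orders, "telemetry")
```

Because the aggregation keys on the individual who wrote the order rather than on a coder-assigned attending, the same sketch extends naturally to residents, advanced practice clinicians, and consulted specialists.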
Zoom in on practice patterns.
The specific actions listed above might not compel a hospital CFO, because the marginal costs of one day on telemetry, one unnecessary CK-MB lab test, or one comprehensive metabolic panel (where a basic panel might have sufficed) are peanuts in the total budget. Nevertheless, the impact will be significant, because individual doctors will recognize themselves in this data and start to make practice changes, both in response to the specific feedback offered and elsewhere. These changes add up, especially when applied to broad diagnostic categories and daily hospital events.
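As a back-of-the-envelope illustration of how these small changes add up, consider the following arithmetic. Every figure here is hypothetical and chosen only to show the scaling effect:

```python
# All figures are hypothetical and for illustration only.
marginal_saving_per_order = 50       # dollars saved per avoided low-value order
avoided_orders_per_clinician_year = 200  # modest behavior change per clinician
clinicians = 150                     # medical staff at a mid-sized hospital

# A "peanuts" saving per order compounds across clinicians and a full year.
annual_savings = (
    marginal_saving_per_order
    * avoided_orders_per_clinician_year
    * clinicians
)
# 50 * 200 * 150 = 1,500,000 dollars per year
```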
More importantly, these small practice changes pave the way for larger ticket items. Establishing an appetite for personal feedback and making that feedback safe and welcome is a way for the organization to “develop the muscle” of systematic ongoing clinician behavior change that will be a competitive advantage in the long run.
Too often, quality and performance improvement initiatives start with outcomes-level feedback because improved outcomes carry more obvious economic value than individual practice patterns do. Paradoxically, this is the wrong way to maximize value. Physicians will only be motivated to improve if they first buy into and engage with data they recognize as theirs: namely, their practice patterns!
Allow transparency to reveal and compel improvement opportunity.
Transparency by itself is fundamental to both identifying the reasons for knowing-doing gaps and pinpointing where and when they take place. When data is reorganized, analyzed, and presented at the level of physicians’ practice patterns, physicians are instinctively inspired to change and improve.
Far from the cudgel of “telling doctors how to practice”, by merely revealing certain patterns in the data, clinical VR becomes an opportunity for everybody. Toward this end, some words of caution and motivation:
- First, top-down feedback typically fails because physicians are not intrinsically motivated by feedback from administrators, and an atmosphere of support and trust can be further jeopardized by inadequate attribution and unactionable feedback. Physicians are busy, often have valid concerns when buying into clinical VR, and need to know: “What’s in it for me?”
- Second, bottom-up transparency can intrinsically motivate if the feedback is specific to practice patterns, built on good data and relevant comparisons, and delivered without managerial curbsides. At Agathos we speak of these behavioral economic drivers as the four “Cs” (Compassion, Compensation, Competition, and Curiosity).
- Finally, and perhaps most importantly from the physicians’ perspective, when done right clinical VR is the exercise of finding and eliminating unnecessary or suboptimal tasks, and pursuing activities with greater efficiencies and proven immediate benefits for individual physicians, their teams, and ultimately patients.
By establishing trust and consistently delivering value, there is a clear motivation for physicians to lean into helpful transparency and, thus, clinical VR.
Embrace (and learn from) “grey areas” of medicine.
Healthcare is a serious and difficult business. Within it, a culture of loss aversion, simplistic storytelling, and small-sample bias does no favors to fostering learning and feedback. When something clinical does not happen optimally (or, far more often, when something goes wrong), animalistic instincts kick in. Complex, irreducible events become black or white, even before presuppositions and egos weigh in.
While there are some clear “dos” and “don’ts” (almost always in guidelines), the vast majority of medicine has some grey. Some matters, for instance an individual provider’s balance of quality and efficiency, could only ever be grey. How would a guideline be developed for Dr. W’s optimal X% of discharge orders submitted before Y time at facility Z?
Those who embrace the “grey” of medical practice in tandem with transparency will inherit the tools and mindset for optimal clinical VR. Every data set is a clue on a spectrum where neither “black” nor “white” is the right answer. Instincts for rationalization can be set aside, and a physician can reflect on the fact that they discharge inherited cases half as often on the first day of their shifts, or order labs twice as often as their peers. Scrutiny of any individual case would miss the larger picture; these “grey” patterns are only visible across many cases.
Understand “VR” is a journey, not a destination.
While the causes of clinical variation and challenges to variation reduction will never go away, it is possible for clinical VR to transform our healthcare system. Other industries have been transformed by accumulated measurement, analytics, transparency, process optimization, and cultural shifts. So too, clinical VR has an important role within healthcare. In manufacturing, financial services, and energy, there are still instances of production variation, yield variation, and consumption variation, respectively. However, cumulative variation in all those cases has drastically declined, and when it reemerges it is quickly recognized, addressed, and filtered out.
Which leads to the question: is the cost of clinical VR worth its payout? Healthcare is stricken with messy data, labyrinthine and perverse incentives, and ever-shifting policies. Of all the major problems ailing U.S. healthcare, clinical VR may seem like a minor issue to tackle. The yield is less visible, and many healthcare leaders put clinical VR on the back burner behind more attractive concerns. Yet quality assurance in manufacturing has become commoditized, and it took time and early visionaries to get there. Is it time for clinical VR?
WHY IS REDUCING CLINICAL VARIATION IMPORTANT?
Better patient experience
As with client or user satisfaction in other industries, patient satisfaction results from a host of little things spanning quality, efficiency, and consistency of service. As Patient A experienced, patients benefit from avoiding extra days in the hospital, excessive lab tests, and unnecessary procedures. It means fewer early-morning wake-up calls to have blood drawn and fewer avoidable hospital-acquired infections. Reducing variation in physician practice patterns is fundamental to higher-quality care for patients.
Improved patient outcomes
An environment of transparency and culture of clinical VR does not merely reduce practice variation. It provides the means for new, local, and/or individual best practices to be discovered, optimized, and shared. The more physicians have a feedback loop into which of their past practice patterns are associated with which outcomes, the more nimbly they can identify opportunities, make adjustments, and report tips and tricks across the care team.
The result is a bottom-up approach to engineering better outcomes. If done at scale, such local responsiveness to transparency could revolutionize the way evidence is generated and guidelines expanded and nurtured. There is already a shift from relying upon clinical trials (with all of the incentivization challenges, design biases, and resource limitations) toward better and more dynamic means of capturing real-world evidence. If every physician had transparency into their practice variation and its contribution to patient outcomes, then the latter could be instinctually optimized.
Over $750 billion of annual U.S. healthcare spend is waste, and of that, $265 billion alone is attributable to unwarranted clinical variation (i.e., physicians acting differently in comparable clinical contexts). This represents a direct value proposition for hospitals reimbursed under DRG-based payments, where reductions in practice variation can save millions of dollars annually even for smaller facilities (under 200 beds). And for outpatient care under bundled or capitated payments (and for payers everywhere by definition), there is even greater bottom-line savings potential in reducing unnecessary and otherwise avoidable services.
[CLINICAL] “VR” IS THE FUTURE
So, who would dare tell a doctor how to practice medicine?
Often the right data speaks best on its own, without voiceover. Transparency into practice patterns intrinsically motivates, and behavior change follows instinctively. From the perspective of patient care, clinical VR means realizing up to $265 billion in savings from unnecessary services, expanding access for underserved patient populations, and building a physician-driven platform for outcomes improvement.
If clinical VR is not yet a priority for your organization, we invite you to join the movement!
Curious about your group’s clinical variation? Contact us to request a demo.