As healthcare moves into new territory, generating decision-relevant evidence means confronting increasingly important unknowns. Whether that means reaching new patient populations or developing new kinds of therapeutics, uncertainty is a central part of decision-making in healthcare.
When facing substantial uncertainty, healthcare decision-makers often fall into one of two traps: 1) believing that no decision on a new idea can be made without highly certain evidence, and/or 2) believing that being "risk-neutral" means uncertainty can be ignored. Both are decision-making fallacies that lead to suboptimal outcomes. Fortunately, there's a third, more strategic approach: embracing uncertainty.
Case example: Cost effectiveness modeling in digital therapeutics
Company X is developing a set of digital therapeutics. They are aiming for their products to serve historically under-resourced communities, who often have reduced access to healthcare facilities and resources that require substantial physical infrastructure. Company leadership seeks to develop cost-effectiveness models to assess the value of these products.
Company X faces three significant sources of uncertainty beyond those faced by traditional therapeutics:
- Digital therapeutics are relatively new and not well-studied.
- Reaching community members who are historically under-resourced means reaching community members who are also under-studied.
- The effects of challenges one and two intersect and multiply, because digital therapeutics often have heterogeneous effects strongly tied to social and behavioral factors.
Common decision-making errors when facing uncertainty
Healthcare decision-makers might consider two common, but fallacious, paths when facing these uncertainties.
Error one: “Do nothing" if the evidence is uncertain
The first fallacious approach holds that no decision can be made if there is substantial uncertainty regarding cost-effectiveness. Often this takes the form of sticking only to already well-established standards and practices in healthcare. However, this approach does not allow us to make and evaluate improvements, reach new communities, experiment with interventions, or adopt new paradigms in health and healthcare.
Error two: Ignore the uncertainty
The second fallacious approach is to disregard uncertainty and give a single-number "best guess" of what the cost-effectiveness should be. Often this takes the form of using only the point estimate (usually the average) of each parameter. Many mistakenly justify this approach as "risk-neutral" because it uses a central measure of cost-effectiveness (e.g., the mean or median). Unfortunately, ignoring uncertainty typically leads to selecting high-risk, poor-quality options rather than being truly risk-neutral, and over the long run it results in suboptimal decisions and a loss of credibility.
This flawed approach is analogous to how many of our COVID-19 projection models were presented throughout the height of the pandemic: the “best guess" line about where the pandemic was headed, without clearly communicating the range of possibilities. When those best guesses were inevitably inaccurate, broad distrust bloomed for the sources of that information.
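A small simulation makes this concrete. In the hypothetical sketch below (all numbers invented for illustration), plugging the mean of an uncertain effect into a cost-effectiveness ratio gives a systematically different answer than propagating the whole distribution, because the ratio is nonlinear in the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: cost is known, effect (e.g., QALYs gained) is uncertain.
cost = 10_000.0
effect = rng.normal(loc=0.5, scale=0.2, size=100_000)
effect = effect[effect > 0.05]  # keep only plausible positive effects

# "Best guess" approach: plug the mean effect into the ratio.
icer_point = cost / effect.mean()

# Uncertainty-forward approach: propagate the whole distribution.
icer_dist = cost / effect
icer_mean = icer_dist.mean()

print(f"point-estimate ratio: {icer_point:,.0f}")
print(f"mean of ratio distribution: {icer_mean:,.0f}")
```

This is Jensen's inequality at work: for any non-degenerate, positive effect distribution, the "best guess" ratio understates the expected ratio, so the point-estimate shortcut is not a neutral simplification.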
Both approaches squander the wealth of information that the available evidence offers decision-makers. That evidence may tell us, within some range or with some probability, how effective and safe therapies are, what the downstream impacts of our interventions will be, and how best to use our limited resources to serve our communities.
We recommend a third approach: build the uncertainty into the decision-making processes and models.
Uncertainty-forward cost-effectiveness modeling
There is a wide array of ways to deal with and understand uncertainty in cost-effectiveness modeling. Each starts from the same place: accounting for the uncertainty and understanding why we have it. Rather than focusing exclusively on what we know or what our best guess is, we need to embrace what we don't know.
While there is an enormous array of ways of dealing with uncertainty in cost-effectiveness, the processes can be broken down into these general steps:
- Understand what kinds of uncertainties and unknowns are relevant to the decision of interest.
- Make a decision rule or process that explicitly addresses the uncertainty around the estimates.
- Apply and account for the ranges of uncertainties in the model.
- Estimate the range(s) of plausible outcome values based on the uncertainty-forward model.
- Apply the decision rule or process.
Understand what kinds of uncertainties and unknowns are relevant to the decision of interest
Statistical uncertainty from sampling (e.g., typical confidence intervals) is one of the most visible and most easily understood kinds of uncertainty. However, every assumption, design choice, and modeling decision is a potential source of error in predicting cost-effectiveness. The critical first steps for any model are understanding what kinds of uncertainty exist, how important they are to account for, and what limitations we are willing to accept.
In digital health, for example, data may be available on how a product affects key markers of disease progression, but little data may exist on long-term direct health outcomes. To model these, we need to make substantial assumptions, and there are often several reasonable ways to handle the uncertainty, each relying on impact calculations of varying relevance to our population of interest. These decisions go well beyond statistical uncertainty and may be critical to building sound models.
Make a decision rule or process that explicitly addresses the uncertainty around estimates
Once we understand the sources of uncertainty and the question of interest, we can start to tease out how we might go about using the model to make a decision. The app below is an example of two rules that use the range of uncertainty to make a go or no-go decision on a sample product based on a range of estimated cost-effectiveness values. The question might be “is this service sufficiently cost-effective in an underserved population to justify providing it over some other option?"
The tool above is a demonstration that takes a range of plausible cost-effectiveness values (simulated here through the values on the left; in practice, they would come from the steps that follow). In the first option, we ignore the uncertainty and take a "best guess" as described in Error Two. In the second option, we use our measures of uncertainty to decide how best to proceed.
In the example given for an uncertainty-forward decision-making process, we are asking two questions: what is the minimal level of the effect that we are willing to consider, and with what minimal probability are we willing to accept that level of effect? This trade-off allows decision makers to make a go/no-go decision given what they believe to be an acceptable risk and communicate the uncertainty around that risk.
In a real-world setting, the decision rule will reflect the specific decision and the relevant stakeholders, including patients, clinicians, and payers. Simulation-based tools such as the one above can make these decisions much clearer.
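As an illustration, a decision rule of the kind described above, "go" only if the outcome meets a minimal acceptable level with at least a minimal acceptable probability, can be sketched in a few lines of Python (the function name and all numbers are hypothetical):

```python
import numpy as np

def go_no_go(values, minimal_level, minimal_probability):
    """Go if the probability that the outcome meets the minimal
    acceptable level is at least the minimal acceptable probability."""
    values = np.asarray(values, dtype=float)
    prob_acceptable = np.mean(values >= minimal_level)
    return "go" if prob_acceptable >= minimal_probability else "no-go"

# Hypothetical distribution of simulated net monetary benefit values.
rng = np.random.default_rng(1)
nmb = rng.normal(loc=2_000, scale=3_000, size=50_000)

# Require at least a 60% chance that the benefit is non-negative.
decision = go_no_go(nmb, minimal_level=0.0, minimal_probability=0.6)
print(decision)
```

The two thresholds make the trade-off explicit: stakeholders can negotiate the minimal level and the minimal probability separately, rather than arguing over a single summary number.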
Apply and account for the ranges of uncertainties in the model
Rather than simply taking the “best guess" (usually the mean value) of the model parameters, the next step is to decide on the distribution of ranges each of the parameters is likely to take. Depending on what kinds of uncertainty are of interest, it may be beneficial to include uncertainties and multiple options around the model design itself.
As with all modeling decisions, this comes with a degree of expert subjectivity.
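A minimal sketch of this step in Python, with entirely hypothetical parameters: instead of single values, each parameter is assigned a distribution matched to its nature, for example Beta for probabilities and Gamma for right-skewed, non-negative costs:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000  # number of parameter draws

# Each parameter gets a distribution rather than a point estimate
# (all parameter names and values here are hypothetical):
params = {
    # Probabilities are naturally bounded in (0, 1): Beta distribution.
    "p_response": rng.beta(a=40, b=60, size=n),
    # Costs are non-negative and right-skewed: Gamma distribution.
    "cost_per_patient": rng.gamma(shape=20, scale=50, size=n),
    # A bounded utility gain, here rescaled from a Beta draw.
    "qaly_gain": 0.3 * rng.beta(a=5, b=5, size=n),
}
```

The choice of distribution family and its parameters is itself a modeling judgment, which is one place where the expert subjectivity noted above enters.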
Estimate the range(s) of plausible outcome values based on the uncertainty-forward model
Whether this is performed via a Bayesian design, through probabilistic sensitivity analysis, or other options, the goal is to run the model for the full array of uncertainties. This will in turn generate a distribution of cost-effectiveness values (rather than just one).
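For example, a basic probabilistic sensitivity analysis can be sketched as a Monte Carlo pass over parameter draws (the toy model and all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000  # number of simulation runs

# Hypothetical parameter draws (in practice, from the previous step).
p_response = rng.beta(40, 60, size=n)
cost = rng.gamma(20, 50, size=n)
qaly_gain = 0.3 * rng.beta(5, 5, size=n)

# Run the (toy) cost-effectiveness model once per draw: net monetary
# benefit at an assumed willingness-to-pay of $50,000 per QALY.
wtp = 50_000
nmb = wtp * p_response * qaly_gain - cost

# The output is a whole distribution of outcomes, not a single number.
lo, hi = np.percentile(nmb, [2.5, 97.5])
print(f"95% interval for net monetary benefit: ({lo:,.0f}, {hi:,.0f})")
```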
Apply the decision rule or process
When the distribution of cost-effectiveness results is available, it's time to apply the decision rule made earlier. Often that takes the form of a go or no-go call, but it could also yield "how much should we fund this intervention" or other outputs.
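As a hypothetical sketch, a rule that grades the funding decision by the probability that the intervention's net monetary benefit is positive might look like this (thresholds and numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical distribution of net monetary benefit from the modeling step.
nmb = rng.normal(loc=1_500, scale=2_500, size=100_000)

# Decision rule agreed on earlier: fund fully if the intervention is very
# likely beneficial, pilot it if moderately likely, otherwise pass.
prob_beneficial = float(np.mean(nmb >= 0))
if prob_beneficial >= 0.80:
    decision = "fund fully"
elif prob_beneficial >= 0.50:
    decision = "fund a pilot"
else:
    decision = "no-go"

print(f"P(beneficial) = {prob_beneficial:.2f} -> {decision}")
```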
Embrace uncertainty to better serve communities
The decision to better serve historically under-resourced populations, particularly with new types and categories of services, presents new kinds of uncertainties for evidence-based decision-making. Avoiding uncertainty by clinging to traditional, often fallacious, metrics can lead to poor decisions because important information is left out. It's better to confront uncertainty head-on and incorporate it directly into our modeling and decision-making. We live in an increasingly uncertain world, and the best way to serve our communities is to embrace it.
How to improve cost-effectiveness modeling
Whether acting as a guide for understanding how to navigate and embrace uncertainty or taking the lead in developing uncertainty-forward models, RTI Health Advance can help decision-makers, providers, and producers deal with these increasingly important issues. Please reach out to our team to learn more about how we can help.