Decision-analytic models are a foundational tool in health economics, shaping how we evaluate the costs and benefits of interventions. As Jeremy Labrecque and Maurice Korf discussed at a recent CHDS seminar, these models rely on input parameters, such as estimates of treatment effects, costs, and other factors, that inform decision-making under uncertainty. Traditionally, decision science has focused on uncertainty driven by random error. This perspective, however, may obscure more consequential sources of bias.
Labrecque is Assistant Professor of Epidemiology and leader of the Causal Inference Group at Erasmus MC in the Netherlands, where Korf is a PhD student. They presented an approach that uses the potential outcomes framework to define and categorize three core types of bias in cost-effectiveness models: model bias, internal validity bias, and external validity bias.
Model bias occurs even when we have ideal randomized controlled trial (RCT) data that perfectly capture the target population and all relevant outcomes. In such cases, any discrepancy between the estimated and true incremental cost-effectiveness ratio (ICER) stems from the decision model itself: the model simplifies reality, and those simplifications can introduce error.

Internal validity bias enters the picture, the speakers explained, when we rely on observational data, even if our model is otherwise flawless. These data may suffer from confounding or measurement error, leading to incorrect parameter estimates and, consequently, biased ICERs.

External validity bias reflects the challenge of generalizing findings from one population to another. Even with perfect data and a sound model, bias can arise if the study population differs from the target population in meaningful ways, whether through effect modification, selection effects, or other sources of heterogeneity.
To formalize these ideas, Labrecque and Korf decompose the bias in counterfactual ICER estimates into model, internal validity, and external validity components (sketched below), providing a clearer roadmap for where uncertainty enters and how it might be reduced. Reframing the problem through this lens gives a more precise picture of where decision models can go wrong. More importantly, it invites targeted strategies to address specific sources of bias, enhancing the credibility and usefulness of economic evaluations in health policy.
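As a rough illustration of how such a decomposition can work (the notation here is ours, not necessarily the speakers'), the total bias can be written as a telescoping sum. Let theta* denote the true ICER in the target population, theta_model the ICER the decision model would return given perfect target-population parameters, theta_study the model's output given the true parameter values in the study population, and theta-hat the model's output given the parameters actually estimated from study data:

```latex
% A minimal sketch of the bias decomposition, under assumed notation:
%   \theta^{*}            true ICER in the target population
%   \theta_{\text{model}} model output with perfect target-population parameters
%   \theta_{\text{study}} model output with true study-population parameters
%   \hat{\theta}          model output with parameters estimated from study data
\hat{\theta} - \theta^{*}
  = \underbrace{\left(\theta_{\text{model}} - \theta^{*}\right)}_{\text{model bias}}
  + \underbrace{\left(\theta_{\text{study}} - \theta_{\text{model}}\right)}_{\text{external validity bias}}
  + \underbrace{\left(\hat{\theta} - \theta_{\text{study}}\right)}_{\text{internal validity bias}}
```

Each term vanishes under the corresponding ideal condition: a model that faithfully represents reality, a study population that matches the target population, and internally valid parameter estimates. That is what makes the decomposition useful, since it points bias-reduction effort at the component that actually dominates.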
Learn more: Read the article, "Directed Acyclic Graphs in Decision-Analytic Modeling: Bridging Causal Inference and Effective Model Design," in Medical Decision Making
Related news: CHDS Faculty Recommend Directed Acyclic Graphs
Related news: Symposium Honoring Dr. Myriam Hunink