COMPUTATIONAL HEALTH ECONOMICS
[This page is no longer being updated.]
I coined the term computational health economics several years ago when I found myself otherwise needing a full sentence to encapsulate the work I do in machine learning, statistics, policy, and health economics. The label has been useful, given that few scholars work at this intersection.
Computational health economics brings statistical advances in big data and data science to bear on critical questions in health economics.
My methodological and applied research currently focuses on algorithmic fairness in risk adjustment and the generalizability of computational health economics tools. Much of my ongoing work is funded by my NIH Director's New Innovator Award.
+ Algorithmic Fairness in Plan Payment Risk Adjustment
It is well known in health policy that financing changes can lead to improved health outcomes and tremendous gains in access to care. More than 50 million people in the U.S. are enrolled in an insurance product that risk adjusts payments, with enormous financial implications: hundreds of billions of dollars. The risk adjustment system aims to incentivize insurers to compete for enrollees by providing high-quality, efficient care rather than by pursuing the lowest-cost individuals. Current risk adjustment formulas rely on standard linear regression to predict health care spending using age, gender, and health diagnoses.
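The basic regression machinery described above can be sketched in a few lines. This is a toy illustration only, assuming simulated data and made-up coefficients; real formulas (such as the HHS-HCC model) use many more condition categories.

```python
import numpy as np

# Toy sketch of risk adjustment via ordinary least squares: predict
# annual spending from sex, age, and a single diagnosis indicator.
# All variables, coefficients, and noise levels are illustrative.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.integers(0, 2, n),           # sex indicator
    rng.integers(18, 65, n) / 65.0,  # scaled age
    rng.integers(0, 2, n),           # diagnosis indicator (e.g., diabetes)
])
beta_true = np.array([2000.0, 300.0, 4000.0, 6000.0])
y = X @ beta_true + rng.normal(0.0, 1000.0, n)   # observed spending

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
risk_scores = X @ beta_hat   # predicted spending, the basis for plan payment
```

Because payments track these predictions, any enrollee whose spending is systematically underpredicted by the formula becomes unprofitable to cover, which is what motivates the fairness work below.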
I've directed a series of projects proposing a computational overhaul of plan payment risk adjustment with a focus on fairness considerations: variable selection to prevent gaming (Rose 2016), a demonstration that insurers can identify unprofitable enrollees using the drug formulary (Rose et al. 2017), separate formulas for undercompensated groups (Shrestha et al. 2018), discrepancies in the impact of medical conditions (Rose 2018), interrupting feedback loops with transformations of biased data (Bergquist et al. 2019), a tutorial on why global fit is insufficient for fairness (Rose & McGuire 2019), and new fair regression methods to reduce group undercompensation (Zink & Rose 2019).
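One way to see how a fair regression can reduce group undercompensation is to add a penalty on the squared mean residual of the group. The sketch below is in the spirit of, but not identical to, the estimators in Zink & Rose (2019); the group indicator `g`, penalty weight `lam`, and the simulation are all illustrative assumptions.

```python
import numpy as np

# Least squares plus a penalty on the squared mean residual of group g:
#   minimize ||y - Xb||^2 + lam * (mean residual over g)^2
# Solved in closed form via the penalized normal equations.
def fair_ols(X, y, g, lam):
    a = g / g.sum()                       # averaging vector over the group
    v = X.T @ a
    A = X.T @ X + lam * np.outer(v, v)
    b = X.T @ y + lam * v * (a @ y)
    return np.linalg.solve(A, b)

# Simulated data: group members have extra spending not explained by X,
# so plain OLS undercompensates them (positive group mean residual).
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
g = rng.integers(0, 2, n).astype(float)   # group membership indicator
y = X @ np.array([1000.0, 500.0]) + 2000.0 * g + rng.normal(0.0, 300.0, n)

b0 = fair_ols(X, y, g, lam=0.0)           # ordinary least squares
b1 = fair_ols(X, y, g, lam=1e6)           # heavy fairness penalty
resid0 = (y - X @ b0)[g == 1].mean()      # group undercompensation under OLS
resid1 = (y - X @ b1)[g == 1].mean()      # shrinks toward zero with the penalty
```

The penalty trades a small loss in global fit for near-zero net undercompensation of the protected group, which is exactly the trade-off that global fit statistics fail to surface.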
+ Generalizability of Computational Health Economics Tools
Algorithm development with various types of electronic data has become increasingly common in many applied literatures. Data include surveys, electronic health records, registries, and newer sources, such as tweets and internet search queries. While each class of data has benefits and limitations, the growth in data collection and its availability to researchers has provided opportunities to study outcomes that were previously difficult to examine. Although 'generalizability' is typically discussed in the context of causal effects, the proliferation of applied machine learning tools for prediction and clustering, and the increasing use of linkages between disparate data types, demand a broader framing. Generalizable for which populations? For how many years? Even algorithms that demonstrate generalizability may quickly become outdated. My recent and ongoing work builds statistically grounded machinery for this broad view of generalizability.
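The "for how many years?" question can be made concrete with a simple temporal-generalizability check: fit a model on one year's data, then compare its error on held-out data from the same year against data from a later year where a relationship has drifted. The simulation and drift size below are hypothetical assumptions, not results from any study.

```python
import numpy as np

# Hypothetical temporal-generalizability check: a model trained on
# "year 1" is evaluated on same-year and next-year test sets, where
# one coefficient has drifted between years.
rng = np.random.default_rng(1)

def simulate(n, beta, noise=1.0):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    return X, X @ beta + rng.normal(0.0, noise, n)

beta_year1 = np.array([1.0, 2.0, -1.0])
beta_year2 = np.array([1.0, 0.5, -1.0])    # one relationship has drifted

X_tr, y_tr = simulate(2000, beta_year1)
X_te1, y_te1 = simulate(2000, beta_year1)  # held-out test set, same year
X_te2, y_te2 = simulate(2000, beta_year2)  # test set, next year

b, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
mse_same = np.mean((y_te1 - X_te1 @ b) ** 2)
mse_drift = np.mean((y_te2 - X_te2 @ b) ** 2)  # larger: the model has aged
```

A model that looks excellent on contemporaneous holdout data can degrade sharply under distributional drift, which is why generalizability needs to be assessed across populations and over time rather than certified once.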
SELECTED RELATED NEWS
Overview article: Intersections of machine learning and epidemiological methods for health services research
Editorial in STAT: Machine learning for clinical decision-making — pay attention to what you don’t see
Invited commentary series: Machine learning for causal inference in Biostatistics
ISPOR Announces Honorees for 2018 Award Program
NIH Makes $8.5M Investment in Promising Projects
Deep Dive: Machine Learning Tools Search Vast Oceans of Data for Insights on Health Economics