I coined the term computational health economics several years ago, when I otherwise needed a full sentence to describe the work I do in applied statistics, policy, and health economics. The label has been useful because few scholars work at this intersection.

Computational health economics brings statistical advances in big data and data science to bear on critical questions in health economics.

My methodological and applied research currently focuses on algorithmic fairness in risk adjustment and the generalizability of computational health economics tools. Much of my ongoing work is funded by my NIH Director's New Innovator Award.

+ Algorithmic Fairness in Plan Payment Risk Adjustment

It is well known in health policy that financing changes can lead to improved health outcomes and tremendous gains in access to care. More than 50 million people in the U.S. are enrolled in an insurance product that risk adjusts payments, with hundreds of billions of dollars in payments at stake. The risk adjustment system aims to incentivize insurers to compete for enrollees by providing high-quality, efficient care, rather than by seeking out the lowest-cost individuals. Current risk adjustment formulas rely on standard linear regression to predict health care spending from age, gender, and health diagnoses. I've directed a series of projects proposing a computational overhaul of plan payment risk adjustment that centers modern methods and fairness considerations.
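To make the setup concrete, here is a minimal sketch of the standard approach: an ordinary least squares regression of spending on age, sex, and diagnosis indicators, followed by a common fairness diagnostic, the group-level predictive ratio (mean predicted spending divided by mean actual spending). All data below are synthetic and the variable names are hypothetical; this illustrates the general technique, not any specific payment formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated enrollee features (hypothetical): age, sex, two diagnosis flags
age = rng.integers(18, 85, n)
female = rng.integers(0, 2, n)
dx_diabetes = rng.integers(0, 2, n)
dx_chf = (rng.random(n) < 0.1).astype(int)

# Simulated annual spending with additive effects plus noise
spend = (2000 + 40 * age + 300 * female
         + 4000 * dx_diabetes + 9000 * dx_chf
         + rng.normal(0, 3000, n))

# Standard linear-regression risk adjustment: fit OLS via least squares
X = np.column_stack([np.ones(n), age, female, dx_diabetes, dx_chf])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)
pred = X @ beta

# Fairness diagnostic: a predictive ratio near 1.0 means the group is,
# on average, neither underpaid nor overpaid by the formula
for name, mask in [("diabetes", dx_diabetes == 1),
                   ("no diabetes", dx_diabetes == 0)]:
    pr = pred[mask].mean() / spend[mask].mean()
    print(f"{name}: predictive ratio = {pr:.3f}")
```

Because the diabetes indicator is included in the regression, OLS forces its within-group residuals to sum to zero, so its predictive ratio is exactly 1; the interesting fairness questions arise for groups that are *not* explicit regressors.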

+ Generalizability of Computational Health Economics Tools

Algorithm development with various types of electronic data has become increasingly common across applied literatures. These data include surveys, electronic health records, registries, and newer sources such as tweets and internet search queries. While each class of data has benefits and limitations, the growth in data collection and its availability to researchers has provided opportunities to study outcomes that were previously difficult to examine. Although 'generalizability' is typically discussed in the context of causal effects, it must also be framed for the proliferation of applied machine learning tools for prediction and clustering, and for the increasing use of linkages between disparate data types. To which populations does an algorithm generalize? For how many years does it remain valid? Even algorithms that demonstrate generalizability may quickly become outdated. My recent and ongoing work builds statistically grounded machinery for this broad view of generalizability.
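The "for which populations, for how many years" question can be illustrated with a small experiment: fit a spending model on one cohort, then evaluate it on a contemporaneous held-out cohort and on a later cohort where a diagnosis has become more prevalent and more expensive. All cohorts and parameters below are synthetic assumptions chosen only to demonstrate the evaluation pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, dx_rate, cost_per_dx):
    """Hypothetical cohort: one diagnosis flag whose prevalence and cost vary."""
    age = rng.integers(18, 85, n)
    dx = (rng.random(n) < dx_rate).astype(int)
    spend = 1000 + 50 * age + cost_per_dx * dx + rng.normal(0, 2000, n)
    X = np.column_stack([np.ones(n), age, dx])
    return X, spend

# Train on an early cohort; evaluate on (a) a same-era cohort and
# (b) a later cohort where the diagnosis is more common and costlier
X_tr, y_tr = simulate(4000, dx_rate=0.10, cost_per_dx=5000)
X_old, y_old = simulate(4000, dx_rate=0.10, cost_per_dx=5000)
X_new, y_new = simulate(4000, dx_rate=0.25, cost_per_dx=9000)

beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Mean prediction error (actual minus predicted): near zero in-era,
# systematically positive (underprediction) once the population drifts
err_old = (y_old - X_old @ beta).mean()
err_new = (y_new - X_new @ beta).mean()
print(f"mean error, same-era cohort: {err_old:8.1f}")
print(f"mean error, later cohort:    {err_new:8.1f}")
```

The same fitted coefficients that are well calibrated in-era systematically underpredict spending a few "years" later, which is exactly why an algorithm's shelf life deserves its own statistical treatment.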