TY - JOUR
T1 - Evaluating Model Robustness and Stability to Dataset Shift
AU - Subbaswamy, Adarsh
AU - Adams, Roy
AU - Saria, Suchi
N1 - Funding Information:
This publication was supported by the Food and Drug Administration (FDA) of the U.S. Department of Health and Human Services (HHS) as part of a financial assistance award U01FD005942 totaling $97,144 with 100 percent funded by FDA/HHS. The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, FDA/HHS or the U.S. Government.
Publisher Copyright:
Copyright © 2021 by the author(s)
PY - 2021
Y1 - 2021
N2 - As the use of machine learning in high-impact domains becomes widespread, the importance of evaluating safety has increased. An important aspect of this is evaluating how robust a model is to changes in setting or population, which typically requires applying the model to multiple, independent datasets. Since the cost of collecting such datasets is often prohibitive, in this paper, we propose a framework for analyzing this type of stability using the available data. We use the original evaluation data to determine distributions under which the algorithm performs poorly, and estimate the algorithm's performance on the “worst-case” distribution. We consider shifts in user-defined conditional distributions, allowing some distributions to shift while keeping other portions of the data distribution fixed. For example, in a healthcare context, this allows us to consider shifts in clinical practice while keeping the patient population fixed. To address the challenges associated with estimation in complex, high-dimensional distributions, we derive a “debiased” estimator which maintains √N-consistency even when machine learning methods with slower convergence rates are used to estimate the nuisance parameters. In experiments on a real medical risk prediction task, we show this estimator can be used to analyze stability and accounts for realistic shifts that could not previously be expressed. The proposed framework allows practitioners to proactively evaluate the safety of their models without requiring additional data collection.
UR - http://www.scopus.com/inward/record.url?scp=85136518627&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85136518627&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85136518627
SN - 2640-3498
VL - 130
SP - 2611
EP - 2619
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021
Y2 - 13 April 2021 through 15 April 2021
ER -