TY - JOUR
T1 - A unifying causal framework for analyzing dataset shift-stable learning algorithms
AU - Subbaswamy, Adarsh
AU - Chen, Bryant
AU - Saria, Suchi
N1 - Funding Information:
The authors gratefully acknowledge support from the Sloan Foundation (FG-2018-10877).
Publisher Copyright:
© 2022 Adarsh Subbaswamy et al., published by De Gruyter.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - Recent interest in the external validity of prediction models (i.e., the problem of different train and test distributions, known as dataset shift) has produced many methods for finding predictive distributions that are invariant to dataset shifts and can be used for prediction in new, unseen environments. However, these methods consider different types of shifts and have been developed under disparate frameworks, making it difficult to theoretically analyze how solutions differ with respect to stability and accuracy. Taking a causal graphical view, we use a flexible graphical representation to express various types of dataset shifts. Given a known graph of the data generating process, we show that all invariant distributions correspond to a causal hierarchy of graphical operators, which disable the edges in the graph that are responsible for the shifts. The hierarchy provides a common theoretical underpinning for understanding when and how stability to shifts can be achieved, and in what ways stable distributions can differ. We use it to establish conditions for minimax optimal performance across environments, and derive new algorithms that find optimal stable distributions. Using this new perspective, we empirically demonstrate that there is a tradeoff between minimax and average performance.
KW - dataset shift
KW - invariance
KW - stability
KW - transportability
UR - http://www.scopus.com/inward/record.url?scp=85131002750&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131002750&partnerID=8YFLogxK
U2 - 10.1515/jci-2021-0042
DO - 10.1515/jci-2021-0042
M3 - Article
AN - SCOPUS:85131002750
SN - 2193-3677
VL - 10
SP - 64
EP - 89
JO - Journal of Causal Inference
JF - Journal of Causal Inference
IS - 1
ER -