CryptoDB
Indistinguishable Predictions and Multi-Group Fair Learning
Authors: Guy Rothblum
Conference: EUROCRYPT 2023
Honor: Invited paper
Abstract: Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities"---what is the probability that an applicant will repay a loan? Automated predictions increasingly form the basis for life-altering decisions, and this raises a host of concerns. Concerns about the fairness of the resulting predictions are particularly alarming: for example, the predictor might perform poorly on a protected minority group. We survey recent developments in formalizing and addressing such concerns. Inspired by the theory of computational indistinguishability, the recently proposed notion of Outcome Indistinguishability (OI) [Dwork et al., STOC 2021] requires that the predicted distribution of outcomes cannot be distinguished from the real-world distribution. Outcome Indistinguishability is a strong requirement for obtaining meaningful predictions. Happily, it can be obtained: techniques from the algorithmic fairness literature [Hebert-Johnson et al., ICML 2018] yield algorithms for learning OI predictors from real-world outcome data. Returning to the motivation of addressing fairness concerns, Outcome Indistinguishability can be used to provide robust and general guarantees for protected demographic groups [Rothblum and Yona, ICML 2021]. This gives algorithms that can learn a single predictor that "performs well" for every group in a given rich collection G of overlapping subgroups. Performance is measured using a loss function, which can be quite general and can itself incorporate fairness concerns.
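The multi-group guarantee described above is commonly achieved via an iterative correction loop in the style of the multicalibration algorithm of Hebert-Johnson et al.: repeatedly find a (group, prediction-level) slice on which the predictor is miscalibrated and shift its predictions toward the observed outcome rate. The sketch below is a simplified illustration of that loop, not the paper's algorithm; all function and parameter names (`multicalibrate`, `alpha`, the 10-bin discretization) are assumptions made for this example.

```python
import numpy as np

def multicalibrate(p, y, groups, alpha=0.05, max_iter=1000):
    """Iteratively nudge predictions p toward calibration on every
    group mask in `groups` -- a minimal sketch of a
    multicalibration-style boosting loop, not an exact reproduction
    of any published algorithm.

    p:      initial predictions in [0, 1], shape (n,)
    y:      observed binary outcomes, shape (n,)
    groups: list of boolean masks, one per (possibly overlapping) group
    alpha:  tolerated calibration gap on each slice
    """
    p = p.astype(float).copy()
    bins = np.linspace(0.0, 1.0, 11)  # discretize predictions into 10 bins
    for _ in range(max_iter):
        updated = False
        for g in groups:
            for lo, hi in zip(bins[:-1], bins[1:]):
                # slice: members of group g whose prediction falls in [lo, hi)
                mask = g & (p >= lo) & (p < hi)
                if mask.sum() == 0:
                    continue
                gap = y[mask].mean() - p[mask].mean()
                if abs(gap) > alpha:  # miscalibrated on this slice: correct it
                    p[mask] = np.clip(p[mask] + gap, 0.0, 1.0)
                    updated = True
        if not updated:  # no slice violates the tolerance; done
            break
    return p
```

On termination, every group-and-bin slice has its average prediction within `alpha` of its empirical outcome rate, which is the "performs well for every group in G" flavor of guarantee the abstract describes (here with calibration error as the loss).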
BibTeX
@inproceedings{eurocrypt-2023-33023,
  title={Indistinguishable Predictions and Multi-Group Fair Learning},
  author={Guy Rothblum},
  publisher={Springer-Verlag},
  year={2023},
  note={Invited paper}
}