FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling

AUTHORS

Wei-Yin Ko, Daniel D’souza, Karina Nguyen, Randall Balestriero, Sara Hooker

ABSTRACT

Ensembling independent deep neural networks (DNNs) is a simple and effective way to improve top-line metrics and to outperform larger single models. In this work, we go beyond top-line metrics and instead explore the impact of ensembling on subgroup performance. Surprisingly, even with a simple homogeneous ensemble, in which all the individual models share the same training set, architecture, and design choices, we find compelling and powerful gains in worst-k and minority-group performance; that is, fairness naturally emerges from ensembling. We show that, as more models are added, the gains from ensembling continue for far longer for the minority group than for the majority group. Our work establishes that simple DNN ensembles can be a powerful tool for alleviating disparate impact from DNN classifiers, thus curbing algorithmic harm. We also explore why this is the case, and find that even in homogeneous ensembles, varying the sources of stochasticity, namely parameter initialization, mini-batch sampling, and data-augmentation realizations, leads to different fairness outcomes.
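
To make the setup concrete, the following is a minimal, illustrative Python/PyTorch sketch, not the authors' code: a homogeneous deep ensemble in which every member shares the same architecture, training data, and hyperparameters and differs only in its random seed, which controls parameter initialization and mini-batch sampling (stochastic data augmentation, if used, would be seeded the same way). The synthetic dataset, the small MLP, and the use of class labels as subgroup labels are all assumptions for illustration; the ensemble prediction is the average of the members' softmax outputs, and worst-group accuracy (worst-k with k=1) is reported alongside overall accuracy as the ensemble grows.

```python
# Illustrative sketch of a homogeneous deep ensemble with per-subgroup evaluation.
# All names, data, and hyperparameters here are placeholders, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Identical architecture and design choices for every ensemble member.
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))

def train_member(seed, x, y, epochs=30):
    torch.manual_seed(seed)  # varies parameter initialization
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=64, shuffle=True,
        generator=torch.Generator().manual_seed(seed))  # varies mini-batch sampling
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()
    return model

def group_accuracies(probs, y, groups):
    # Accuracy computed separately for each subgroup label.
    preds = probs.argmax(dim=1)
    return {g: (preds[groups == g] == y[groups == g]).float().mean().item()
            for g in groups.unique().tolist()}

# Synthetic 4-class data; the class label doubles as the subgroup label here.
torch.manual_seed(0)
x = torch.randn(2000, 20)
y = torch.randint(0, 4, (2000,))
x[torch.arange(2000), y] += 2.0  # weak class-dependent signal
x_tr, y_tr, x_te, y_te = x[:1500], y[:1500], x[1500:], y[1500:]

members = [train_member(seed, x_tr, y_tr) for seed in range(5)]
with torch.no_grad():
    for k in (1, 3, 5):
        # Ensemble the first k members by averaging their softmax outputs.
        probs = torch.stack([F.softmax(m(x_te), dim=1) for m in members[:k]]).mean(0)
        overall = (probs.argmax(dim=1) == y_te).float().mean().item()
        worst = min(group_accuracies(probs, y_te, groups=y_te).values())
        print(f"k={k}  overall={overall:.3f}  worst-group={worst:.3f}")
```

In this sketch the only differences between members are the seeded sources of stochasticity, which mirrors the homogeneous-ensemble setting the abstract describes; tracking the worst-group accuracy as k increases is one simple way to observe whether minority-group gains persist longer than overall gains.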