ABSTRACT
Computer vision models trained on unprecedented amounts of data hold promise for making impartial, well-informed decisions in a variety of applications. However, historical societal biases are increasingly making their way into these seemingly innocuous systems. Visual recognition models have exhibited bias by inappropriately correlating age, gender, sexual orientation, and race with their predictions. The downstream effects of such bias range from perpetuating harmful stereotypes at massive scale to increasing the likelihood of being unfairly flagged as a suspect in a crime (when face recognition, which is notoriously less accurate on Black faces than on White faces, is used in surveillance cameras). In this talk, we’ll dive deeper into both the technical causes of and the potential solutions to algorithmic bias in computer vision. Among other things, we will discuss our most recent work (in submission) on training deep learning models that de-correlate a sensitive attribute (such as race or gender) from the target prediction.
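To give a flavor of what de-correlating a sensitive attribute from a prediction can look like in practice, below is a minimal, illustrative sketch of one generic approach: adding a penalty on the correlation between the model's target score and the sensitive attribute during training. This is not the in-submission method described above; the function names, the use of PyTorch, and the hyperparameter `lam` are all assumptions made for illustration.

```python
# Illustrative sketch only: a generic correlation-penalty regularizer,
# not the authors' (in-submission) method. Names and hyperparameters are assumed.
import torch
import torch.nn as nn


def correlation_penalty(scores: torch.Tensor, sensitive: torch.Tensor) -> torch.Tensor:
    """Squared Pearson correlation between predicted scores and a sensitive attribute.

    scores:    shape (batch,), e.g. the logit of the target class
    sensitive: shape (batch,), e.g. a 0/1 encoding of the sensitive attribute
    """
    s = scores - scores.mean()
    a = sensitive.float() - sensitive.float().mean()
    corr = (s * a).sum() / (s.norm() * a.norm() + 1e-8)
    return corr ** 2


def debiased_loss(logits, targets, sensitive, lam=1.0):
    """Task loss plus a penalty that discourages the score from tracking the attribute."""
    task_loss = nn.functional.cross_entropy(logits, targets)
    # Score of each example's target class, shape (batch,)
    scores = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    return task_loss + lam * correlation_penalty(scores, sensitive)
```

In this sketch, `lam` trades off task accuracy against how strongly the score is pushed to be uncorrelated with the sensitive attribute; related lines of work instead use adversarial heads or fairness-constrained objectives to achieve a similar effect.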