Oblivious Data for Fairness with Kernels

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Oblivious Data for Fairness with Kernels. / Grunewalder, Steffen; Khaleghi, Azadeh.
In: Journal of Machine Learning Research, Vol. 22, 31.08.2021.

Vancouver

Grunewalder S, Khaleghi A. Oblivious Data for Fairness with Kernels. Journal of Machine Learning Research. 2021 Aug 31;22.

Author

Grunewalder, Steffen ; Khaleghi, Azadeh. / Oblivious Data for Fairness with Kernels. In: Journal of Machine Learning Research. 2021 ; Vol. 22.

BibTeX

@article{52403fe85eb94b1e8faa5653d1ea688e,
title = "Oblivious Data for Fairness with Kernels",
abstract = "We investigate the problem of algorithmic fairness in the case where sensitive and non-sensitive features are available and one aims to generate new, `oblivious', features that closely approximate the non-sensitive features, and are only minimally dependent on the sensitive ones. We study this question in the context of kernel methods. We analyze a relaxed version of the Maximum Mean Discrepancy criterion which does not guarantee full independence but makes the optimization problem tractable. We derive a closed-form solution for this relaxed optimization problem and complement the result with a study of the dependencies between the newly generated features and the sensitive ones. Our key ingredient for generating such oblivious features is a Hilbert-space-valued conditional expectation, which needs to be estimated from data. We propose a plug-in approach and demonstrate how the estimation errors can be controlled. Our theoretical results are accompanied by experimental evaluations.",
author = "Steffen Grunewalder and Azadeh Khaleghi",
year = "2021",
month = aug,
day = "31",
language = "English",
volume = "22",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",
}

RIS

TY - JOUR

T1 - Oblivious Data for Fairness with Kernels

AU - Grunewalder, Steffen

AU - Khaleghi, Azadeh

PY - 2021/8/31

Y1 - 2021/8/31

N2 - We investigate the problem of algorithmic fairness in the case where sensitive and non-sensitive features are available and one aims to generate new, `oblivious', features that closely approximate the non-sensitive features, and are only minimally dependent on the sensitive ones. We study this question in the context of kernel methods. We analyze a relaxed version of the Maximum Mean Discrepancy criterion which does not guarantee full independence but makes the optimization problem tractable. We derive a closed-form solution for this relaxed optimization problem and complement the result with a study of the dependencies between the newly generated features and the sensitive ones. Our key ingredient for generating such oblivious features is a Hilbert-space-valued conditional expectation, which needs to be estimated from data. We propose a plug-in approach and demonstrate how the estimation errors can be controlled. Our theoretical results are accompanied by experimental evaluations.

AB - We investigate the problem of algorithmic fairness in the case where sensitive and non-sensitive features are available and one aims to generate new, `oblivious', features that closely approximate the non-sensitive features, and are only minimally dependent on the sensitive ones. We study this question in the context of kernel methods. We analyze a relaxed version of the Maximum Mean Discrepancy criterion which does not guarantee full independence but makes the optimization problem tractable. We derive a closed-form solution for this relaxed optimization problem and complement the result with a study of the dependencies between the newly generated features and the sensitive ones. Our key ingredient for generating such oblivious features is a Hilbert-space-valued conditional expectation, which needs to be estimated from data. We propose a plug-in approach and demonstrate how the estimation errors can be controlled. Our theoretical results are accompanied by experimental evaluations.

M3 - Journal article

VL - 22

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1532-4435

ER -
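
Illustrative sketch (not taken from the publication): the abstract refers to two standard kernel ingredients, an empirical Maximum Mean Discrepancy between samples and a plug-in estimate of a Hilbert-space-valued conditional expectation E[phi(X) | S]. The Python sketch below shows common forms of both; the function names (rbf_kernel, mmd2, conditional_mean_weights), the kernel choice, and the parameters gamma and lam are assumptions made for illustration, not the paper's closed-form solution or the authors' code.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF Gram matrix between the rows of A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of the squared Maximum Mean Discrepancy
    # between the samples X and Y under the RBF kernel.
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def conditional_mean_weights(S, lam=1e-3, gamma=1.0):
    # Plug-in (kernel ridge) estimator of the conditional mean embedding
    # E[phi(X) | S = s]: the embedding at a query point s is the weighted
    # combination sum_i w_i(s) phi(x_i) with
    #     w(s) = (K_S + n * lam * I)^{-1} k_S(s).
    # Returns a function mapping s to the weight vector w(s).
    n = S.shape[0]
    K_S = rbf_kernel(S, S, gamma)
    inv = np.linalg.inv(K_S + n * lam * np.eye(n))
    return lambda s: (inv @ rbf_kernel(S, np.atleast_2d(s), gamma)).ravel()

# Toy usage (hypothetical data, for illustration only):
# rng = np.random.default_rng(0)
# S = rng.normal(size=(200, 1)); X = S + rng.normal(size=(200, 1))
# w = conditional_mean_weights(S)(S[0])        # weights representing E[phi(X) | S = S[0]]
# gap = mmd2(X[S[:, 0] > 0], X[S[:, 0] <= 0])  # group-wise discrepancy in X

Given non-sensitive features X and sensitive features S, one could, for example, form residual representations by removing the estimated conditional mean of phi(X) given S in the RKHS and assess the remaining group-wise dependence with mmd2. This residual construction is a hedged reconstruction of the idea described in the abstract; the precise oblivious-feature construction and its analysis are given in the paper.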