Publication Type
Journal Article
Version
publishedVersion
Publication Date
9-2023
Abstract
Discrimination has been demonstrated in many machine learning applications, which calls for sufficient fairness testing before their deployment in ethically sensitive domains. One widely concerning type of discrimination, group discrimination, is mostly hidden and much less studied than individual discrimination. In this work, we propose TestSGD, an interpretable testing approach that systematically identifies and measures hidden (which we call ‘subtle’) group discrimination of a neural network, characterized by conditions over combinations of the sensitive attributes. Specifically, given a neural network, TestSGD first automatically generates an interpretable rule set that categorizes the input space into two groups. TestSGD also provides an estimated group discrimination score, computed by sampling the input space, which measures the degree of the identified subtle group discrimination and is guaranteed to be accurate up to an error bound. We evaluate TestSGD on multiple neural network models trained on popular datasets covering both structured data and text data. The experimental results show that TestSGD is effective and efficient at identifying and measuring subtle group discrimination that has never been revealed before. Furthermore, we show that the testing results of TestSGD can be used to mitigate such discrimination through retraining, with a negligible accuracy drop.
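To illustrate the sampling-based estimate described in the abstract, the sketch below shows how a group discrimination score with a probabilistic error bound can be computed. It is a minimal illustration, not the authors' implementation: `model`, `rule`, and `sample_input` are hypothetical placeholders, and the Hoeffding-style sample-size bound is one standard way to obtain the kind of accuracy guarantee the abstract mentions; the paper's exact construction may differ.

```python
import math

def sample_size(epsilon, delta):
    # Hoeffding bound: number of samples per group so that each group's
    # estimated favourable-outcome rate is within `epsilon` of the true
    # rate with probability at least 1 - delta.
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def group_discrimination_score(model, rule, sample_input,
                               epsilon=0.05, delta=0.05):
    # `rule(x)` -> True/False plays the role of the interpretable rule set
    # that splits the input space into two groups; `model(x)` -> 0/1 is the
    # network's prediction; `sample_input()` draws one input from the
    # input space. All three are hypothetical placeholders.
    n = sample_size(epsilon, delta)
    counts = {True: [0, 0], False: [0, 0]}   # group -> [favourable, total]
    while counts[True][1] < n or counts[False][1] < n:
        x = sample_input()
        g = rule(x)
        counts[g][0] += int(model(x) == 1)   # favourable prediction
        counts[g][1] += 1
    p_a = counts[True][0] / counts[True][1]
    p_b = counts[False][0] / counts[False][1]
    # By a union bound, the returned score is within 2 * epsilon of the
    # true difference in favourable-outcome rates between the two groups.
    return abs(p_a - p_b)
```

Under these assumptions, a score near zero indicates the rule-defined groups are treated similarly, while a larger score quantifies the degree of subtle group discrimination against the identified group.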
Keywords
Fairness Improvement, Fairness, Fairness Testing, Machine Learning
Discipline
Information Security | Numerical Analysis and Scientific Computing | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
ACM Transactions on Software Engineering and Methodology
Volume
32
Issue
6
First Page
1
Last Page
24
ISSN
1049-331X
Identifier
10.1145/3591869
Publisher
Association for Computing Machinery (ACM)
Citation
ZHANG, Mengdi; SUN, Jun; WANG, Jingyi; and SUN, Bing.
TESTSGD: Interpretable testing of neural networks against subtle group discrimination. (2023). ACM Transactions on Software Engineering and Methodology. 32, (6), 1-24.
Available at: https://ink.library.smu.edu.sg/sis_research/8144
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Additional URL
https://doi.org/10.1145/3591869