Publication Type
Journal Article
Version
publishedVersion
Publication Date
11-2023
Abstract
Deep Neural Networks (DNNs) have achieved tremendous success in many applications, yet it has been demonstrated that DNNs can exhibit undesirable behaviors with respect to robustness, privacy, and other trustworthiness concerns. Among these, fairness (i.e., non-discrimination) is an important property, especially when DNNs are applied in sensitive domains (e.g., finance and employment). However, DNNs easily learn spurious correlations between protected attributes (e.g., age, gender, race) and the classification task, and develop discriminatory behaviors if the training data is imbalanced. Such discriminatory decisions in sensitive applications would have severe social impact. To expose potential discrimination problems in DNNs before they are put into use, testing techniques have been proposed to identify discriminatory instances (i.e., instances that exhibit the defined discrimination). However, how to repair DNNs after detecting such discrimination remains challenging. Existing techniques mainly rely on retraining on a large number of discriminatory instances generated by testing methods, which incurs a huge time overhead and makes repairing inefficient.

In this work, we propose Faire, a method that effectively and efficiently repairs the fairness issues of DNNs without using additional data (e.g., discriminatory instances). Our basic idea is inspired by traditional program repair methods that synthesize proper condition checks: to repair a traditional program, a typical approach is to localize the defect and fix the program logic by adding a condition check. Similarly, for DNNs, we try to understand the unfair logic and reformulate it with well-designed condition checking. In this article, we synthesize conditions that reduce the effect of features relevant to protected attributes in the DNN. Specifically, we first perform a neuron-based analysis, checking the functionality of neurons to identify those whose outputs can be regarded as features relevant to the protected attributes or to the original task. Then a new condition layer is added after each hidden layer to penalize neurons accountable for protected features (i.e., intermediate features relevant to protected attributes) and promote neurons accountable for non-protected features (i.e., intermediate features relevant to the original task). In sum, the repair rate of Faire reaches more than 99%, outperforming other methods, and the whole repair process takes no more than 340 seconds. The evaluation results demonstrate that our approach can effectively and efficiently repair individual discriminatory instances of the target model.
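To make the condition-layer idea in the abstract concrete, below is a minimal PyTorch sketch; it is not the authors' implementation. It assumes the neuron-based analysis step has already produced index sets of protected-feature and task-feature neurons (protected_idx and task_idx are hypothetical names), and the scaling factors alpha and beta are illustrative placeholders for the synthesized condition.

import torch
import torch.nn as nn

class ConditionLayer(nn.Module):
    """Sketch of a condition layer inserted after a hidden layer.

    protected_idx / task_idx stand in for the output of the neuron-based
    analysis; alpha < 1 penalizes protected-feature neurons, beta > 1
    promotes task-feature neurons. Both factors are assumptions.
    """
    def __init__(self, hidden_dim, protected_idx, task_idx, alpha=0.1, beta=1.5):
        super().__init__()
        scale = torch.ones(hidden_dim)   # neutral weight for every neuron
        scale[protected_idx] = alpha     # suppress protected-feature neurons
        scale[task_idx] = beta           # strengthen task-feature neurons
        self.register_buffer("scale", scale)

    def forward(self, x):
        return x * self.scale            # apply the condition as element-wise reweighting

# Hypothetical usage: insert one condition layer after each hidden layer.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    ConditionLayer(64, protected_idx=[3, 17], task_idx=[5, 9]),
    nn.Linear(64, 2),
)

In the paper itself, which neurons are penalized or promoted and by how much is determined by the synthesized condition; the fixed alpha/beta above are simply stand-ins for that mechanism.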
Keywords
Computing methodologies, Machine learning, Machine learning approaches, Neural networks, Software and its engineering, Software creation and management, Software verification and validation, Software defect analysis, Software testing and debugging
Discipline
Databases and Information Systems
Research Areas
Data Science and Engineering; Cybersecurity; Information Systems and Management
Publication
ACM Transactions on Software Engineering and Methodology
Volume
33
Issue
1
First Page
1
Last Page
24
ISSN
1049-331X
Identifier
10.1145/3617168
Publisher
Association for Computing Machinery (ACM)
Citation
LI, Tianlin; XIE, Xiaofei; WANG, Jian; GUO, Qing; LIU, Aishan; MA, Lei; and LIU, Yang.
Faire: Repairing fairness of neural networks via neuron condition synthesis. (2023). ACM Transactions on Software Engineering and Methodology. 33, (1), 1-24.
Available at: https://ink.library.smu.edu.sg/sis_research/8475
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1145/3617168