Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
1-2021
Abstract
Visual privacy concerns associated with image sharing are a critical issue that needs to be addressed to enable safe and lawful use of online social platforms. Users of social media platforms often receive no guidance when sharing sensitive images in public, and often face social and legal consequences. Given the recent success of visual-attention-based deep learning methods in measuring abstract phenomena such as image memorability, we are motivated to investigate whether visual-attention-based methods could be useful in measuring psychophysical phenomena such as “privacy sensitivity”. In this paper we propose PrivAttNet, a visual-attention-based approach that can be trained end-to-end to estimate the privacy sensitivity of images without explicitly detecting the sensitive objects and attributes present in the image. We show that PrivAttNet outperforms various SOTA and baseline strategies, achieving a 1.6-fold reduction in L1 error over SOTA and a 7%–10% improvement in Spearman rank correlation between the predicted and ground-truth sensitivity scores. Additionally, the attention maps produced by PrivAttNet are useful for directing users to the image regions responsible for the privacy risk score.
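The abstract describes an attention-based model that pools image features by a learned attention map and regresses a scalar sensitivity score. The sketch below is a minimal, hypothetical illustration of that general mechanism (attention-weighted pooling followed by a linear head) in NumPy; the function and weight names are stand-ins for illustration only and do not reflect PrivAttNet's actual architecture or parameters.

```python
import numpy as np

def privacy_score_with_attention(feature_map, w_att, w_reg):
    """Toy sketch: attention-weighted pooling of a (H, W, C) feature map,
    then a linear head producing one scalar "sensitivity" score.
    `w_att` and `w_reg` are hypothetical learned weight vectors of length C."""
    H, W, C = feature_map.shape
    flat = feature_map.reshape(-1, C)                 # (H*W, C)
    # Per-location attention logits, softmax-normalised over all H*W cells.
    logits = flat @ w_att                             # (H*W,)
    att = np.exp(logits - logits.max())
    att = att / att.sum()                             # spatial attention map
    # Attention-weighted global pooling, then the regression head.
    pooled = (att[:, None] * flat).sum(axis=0)        # (C,)
    score = float(pooled @ w_reg)
    return score, att.reshape(H, W)
```

In an end-to-end trained model, the attention weights would be learned jointly with the regression head; the resulting attention map can then be visualised to highlight the regions contributing most to the predicted score, as the abstract describes.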
Discipline
Information Security | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
Proceedings of the 25th International Conference on Pattern Recognition, ICPR 2020, Virtual Conference, 2021 January 10-15
First Page
1
Last Page
8
City or Country
Milan, Italy
Citation
CHEN, Zhang; KANDAPPU, Thivya; and SUBBARAJU, Vigneshwaran.
PrivAttNet: Predicting privacy risks in images using visual attention. (2021). Proceedings of the 25th International Conference on Pattern Recognition, ICPR 2020, Virtual Conference, 2021 January 10-15. 1-8.
Available at: https://ink.library.smu.edu.sg/sis_research/5448
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.