Privacy Loss in Distributed Constraint Reasoning: A Quantitative Framework for Analysis and Its Applications
It is critical that agents deployed in real-world settings, such as businesses, offices, universities and research laboratories, protect their individual users' privacy when interacting with other entities. Indeed, privacy is recognized as a key motivating factor in the design of several multiagent algorithms, such as in distributed constraint reasoning (including algorithms for both distributed constraint optimization problems (DCOPs) and distributed constraint satisfaction problems (DisCSPs)), and researchers have begun to propose metrics for the analysis of privacy loss in such multiagent algorithms. Unfortunately, a general quantitative framework to compare these existing metrics for privacy loss, or to identify dimensions along which to construct new metrics, is currently lacking. This paper presents three key contributions to address this shortcoming. First, the paper presents VPS (Valuations of Possible States), a general quantitative framework to express, analyze and compare existing metrics of privacy loss. Based on a state-space model, VPS is shown to capture various existing measures of privacy created for specific DisCSP domains. The utility of VPS is further illustrated through analysis of privacy loss in DCOP algorithms when such algorithms are used by personal assistant agents to schedule meetings among users. In addition, VPS helps identify dimensions along which to classify and construct new privacy metrics, and it supports their quantitative comparison. Second, the paper presents key inference rules that may be used in the analysis of privacy loss in DCOP algorithms under different assumptions. Third, detailed experiments based on the VPS-driven analysis lead to the following key results: (i) decentralization by itself does not provide superior privacy protection in DisCSP/DCOP algorithms when compared with centralization; instead, privacy protection also requires the presence of uncertainty about agents' knowledge of the constraint graph; (ii) one needs to carefully examine the metrics chosen to measure privacy loss, since the qualitative properties of privacy loss, and hence the conclusions that can be drawn about an algorithm, can vary widely based on the metric chosen. This paper should thus serve as a call to arms for further privacy research, particularly within the DisCSP/DCOP arena.
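To make the abstract's core idea concrete, the following is a minimal sketch (not the authors' implementation) of a VPS-style computation: an agent values the set of states that an observer considers possible for it, and privacy loss is the drop in that valuation as interaction shrinks the possible-state set. The specific "proportional" valuation function and the meeting-slot example are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the VPS idea: privacy loss as the change in an agent's
# valuation of the states an observer still considers possible for it.
# The valuation function and the example domain below are assumptions.

def vps_privacy_loss(possible_before, possible_after, valuation):
    """Privacy loss = valuation of the possible-state set before inference
    minus its valuation after inference (sets shrink as information leaks)."""
    assert possible_after <= possible_before  # inference only removes states
    return valuation(possible_before) - valuation(possible_after)

def proportional_valuation(possible, total):
    """One illustrative valuation: 1.0 when all `total` states remain
    possible (full ambiguity), 0.0 when only one remains (fully revealed)."""
    return (len(possible) - 1) / (total - 1)

# Example: an agent's private value is one of 8 candidate meeting slots.
all_states = set(range(8))
before = set(all_states)   # observer initially considers every slot possible
after = {2, 3}             # after negotiation, only two slots remain possible

loss = vps_privacy_loss(before, after,
                        lambda s: proportional_valuation(s, len(all_states)))
# loss = (8-1)/7 - (2-1)/7 = 6/7, i.e. most of the agent's privacy is lost
```

Different valuation functions over the same state space yield different metrics, which is precisely the kind of variation the paper's second experimental result warns about.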
Artificial Intelligence and Robotics | Business | Operations Research, Systems Engineering and Industrial Engineering
Intelligent Systems and Decision Analytics
Journal of Autonomous Agents and Multi-Agent Systems, JAAMAS
Maheswaran, Rajiv; Pearce, Jonathan; Bowring, Emma; Varakantham, Pradeep Reddy; and Tambe, Milind.
Privacy Loss in Distributed Constraint Reasoning: A Quantitative Framework for Analysis and Its Applications. (2006). Journal of Autonomous Agents and Multi-Agent Systems, JAAMAS. 13, (1), 27-60. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/22