Conference Proceeding Article
As human-agent teams are increasingly deployed in the real world, agent designers need to account for the fact that humans and agents differ in their ability to specify preferences. In this paper, we focus on how human biases in specifying preferences for resources impact the performance of large, heterogeneous teams. In particular, we model the inclination of humans to simplify their preference functions and to exaggerate their utility for desired resources, and we show the effect of these biases on team performance. We demonstrate this on two different problems that are representative of many resource allocation problems addressed in the literature. In both problems, the agents and humans optimize their constraints in a distributed manner. This paper makes two key contributions: (a) it proves theoretical properties of the algorithm used for solving distributed constraint optimization problems, the Distributed Stochastic Algorithm (DSA), that ensure robustness against human biases; and (b) it empirically illustrates that the effect of human biases on team performance is not significant across different problem settings and team sizes. Both our theoretical and empirical studies support the conclusion that the solutions DSA provides for mid- to large-sized teams are very robust to the common types of human biases.
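For readers unfamiliar with DSA, the following is a minimal sketch of how a Distributed Stochastic Algorithm round proceeds, illustrated on graph coloring, a standard toy distributed-constraint problem. The graph, colors, activation probability, and function names below are illustrative assumptions, not taken from the paper; in a real deployment each agent would run concurrently with only local knowledge, whereas this sketch simulates synchronous rounds.

```python
import random

def dsa_coloring(neighbors, colors, p=0.7, rounds=100, seed=0):
    """Hedged sketch of a DSA-B-style loop for graph coloring.

    neighbors: dict mapping each node to its list of constraint neighbors.
    p: probability that an agent acts in a given round (stochastic activation).
    """
    rng = random.Random(seed)
    # Each agent starts with a random value for its variable.
    assignment = {v: rng.choice(colors) for v in neighbors}

    def conflicts(v, color):
        # Number of violated constraints if v takes this color.
        return sum(assignment[u] == color for u in neighbors[v])

    for _ in range(rounds):
        updates = {}
        for v in neighbors:          # each "agent" decides from local info only
            if rng.random() >= p:    # stays put this round with probability 1-p
                continue
            cur = conflicts(v, assignment[v])
            if cur == 0:             # already conflict-free: keep current value
                continue
            best = min(conflicts(v, c) for c in colors)
            # Adopt a random best-response value; allowing equal-cost moves
            # helps agents escape local minima.
            updates[v] = rng.choice([c for c in colors if conflicts(v, c) == best])
        assignment.update(updates)   # synchronous update, as in simulation
    return assignment

# Toy constraint graph: a 4-cycle, which is 2-colorable.
graph = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
result = dsa_coloring(graph, colors=[0, 1])
total_conflicts = sum(result[u] == result[v]
                      for u in graph for v in graph[u]) // 2
```

The activation probability `p` is the key knob in DSA: lower values slow convergence but reduce the chance that neighboring agents change values simultaneously and undo each other's improvements.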
Artificial Intelligence and Robotics | Business | Operations Research, Systems Engineering and Industrial Engineering
Intelligent Systems and Decision Analytics
IEEE International Conference on Intelligent Agent Technology (IAT)
PARUCHURI, Praveen; VARAKANTHAM, Pradeep Reddy; and SCERRI, Paul.
Effect of human biases on human-agent teams. (2010). IEEE International Conference on Intelligent Agent Technology (IAT). 327-334. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/618
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.