Publication Type

Conference Proceeding Article

Publication Date

As human-agent teams are increasingly deployed in the real world, agent designers need to account for the fact that humans and agents differ in their ability to specify preferences. In this paper, we focus on how human biases in specifying preferences for resources impact the performance of large, heterogeneous teams. In particular, we model the inclination of humans to simplify their preference functions and to exaggerate their utility for desired resources, and show the effect of these biases on team performance. We demonstrate this on two different problems that are representative of many resource allocation problems addressed in the literature. In both problems, the agents and humans optimize their constraints in a distributed manner. This paper makes two key contributions: (a) it proves theoretical properties of the algorithm used, DSA, for solving distributed constraint optimization problems, which ensure robustness against human biases; and (b) it empirically illustrates that the effect of human biases on team performance is not significant across different problem settings and team sizes. Both our theoretical and empirical studies support the conclusion that the solutions provided by DSA for mid- to large-sized teams are very robust to the common types of human biases.
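The abstract refers to DSA, a local-search scheme for distributed constraint optimization in which each agent repeatedly inspects its neighbors' current values and, with some activation probability, moves to the value that minimizes its own local cost. The sketch below is only an illustration of that general scheme, not the paper's implementation; the `dsa` function, its parameters, and the toy graph-coloring instance are all hypothetical.

```python
import random

def dsa(neighbors, domains, cost, rounds=200, p=0.7, seed=0):
    """Rough sketch of a DSA-style local search (illustrative only).

    neighbors: dict agent -> list of neighboring agents
    domains:   dict agent -> list of allowed values
    cost:      f(agent, value, assignment) -> agent's local cost
    Each round, every agent evaluates its options against the previous
    round's assignment and, on strict improvement, moves with probability p.
    """
    rng = random.Random(seed)
    assignment = {a: rng.choice(domains[a]) for a in neighbors}
    for _ in range(rounds):
        snapshot = dict(assignment)  # agents act on last round's values
        for a in neighbors:
            best = min(domains[a], key=lambda v: cost(a, v, snapshot))
            if cost(a, best, snapshot) < cost(a, snapshot[a], snapshot):
                if rng.random() < p:  # stochastic move helps avoid thrashing
                    assignment[a] = best
    return assignment

# Toy instance: 3-agent graph coloring, where each agent's local cost is
# the number of neighbors sharing its value.
nbrs = {"x": ["y"], "y": ["x", "z"], "z": ["y"]}
doms = {a: [0, 1] for a in nbrs}
conflict = lambda a, v, asg: sum(v == asg[n] for n in nbrs[a])
sol = dsa(nbrs, doms, conflict)
```

The activation probability `p` is the key knob in DSA variants: moving deterministically can make neighboring agents oscillate in lockstep, while a probabilistic move breaks that symmetry.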


Artificial Intelligence and Robotics | Business | Operations Research, Systems Engineering and Industrial Engineering

Research Areas

Intelligent Systems and Decision Analytics


IEEE International Conference on Intelligent Agent Technology (IAT)

First Page


Last Page


City or Country

Toronto, Canada

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Additional URL