Publication Type

Journal Article

Version

acceptedVersion

Publication Date

10-2022

Abstract

Collaborative robots (cobots) are regarded as highly safety-critical cyber-physical systems (CPSs) owing to their close physical interactions with humans. In settings such as smart factories, they are frequently augmented with AI. For example, in order to move materials, cobots utilize object detectors based on deep learning models. Deep learning, however, has been demonstrated to be vulnerable to adversarial attacks: a minor change (noise) to a benign input can fool the underlying neural network and lead to a different result. While existing works have explored such attacks in the context of image/object classification, less attention has been given to attacking neural networks used for identifying object locations, and to demonstrating that such an attack can actually lead to a physical attack in a real CPS. In this paper, we propose a method to generate adversarial patches for the object detectors of CPSs, in order to mislead them and cause potentially dangerous physical effects. In particular, we evaluate our method on an industrial robotic arm for card gripping, demonstrating that it can be misled into gripping the operator's hand instead of the card. To our knowledge, this is the first work that attacks object locations and leads to a safety incident involving human users on an actual system.
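The abstract describes gradient-based adversarial patch attacks on a deep-learning object detector's location predictions. The sketch below is only an illustration of that general idea under stated assumptions, not the authors' method: it optimizes a patch against a toy stand-in localization network (not the YOLO detector used in the paper), pastes the patch at a fixed location, and uses an assumed loss that pushes predicted box coordinates away from the clean predictions.

```python
# Illustrative sketch (assumed setup): optimize a patch so that a frozen
# location-regressing network mispredicts object positions. The "detector"
# here is a toy stand-in, NOT the paper's YOLO model or training pipeline.
import torch
import torch.nn as nn

class ToyLocator(nn.Module):
    """Stand-in detector that regresses a single (cx, cy, w, h) box per image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def apply_patch(images, patch, top=8, left=8):
    """Paste the patch at a fixed position (a real attack would randomize placement)."""
    patched = images.clone()
    ph, pw = patch.shape[-2:]
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched

detector = ToyLocator().eval()
for p in detector.parameters():
    p.requires_grad_(False)            # attack only the patch, not the model

images = torch.rand(4, 3, 64, 64)      # surrogate "benign" inputs
clean_boxes = detector(images).detach()  # locations predicted on clean inputs
patch = torch.rand(3, 16, 16, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    preds = detector(apply_patch(images, patch.clamp(0, 1)))
    # Maximize deviation of predicted locations from the clean predictions,
    # i.e. mislead the detector about where the object is.
    loss = -nn.functional.mse_loss(preds, clean_boxes)
    loss.backward()
    optimizer.step()

print("location shift induced by the patch:", (-loss).item())
```

In a physical deployment such as the card-gripping scenario, the optimized patch would additionally be printed and placed in the camera's view, which typically requires extra robustness terms (e.g., over placements and lighting) that are omitted from this sketch.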

Keywords

Cyber-physical systems, YOLO, object detection, adversarial patch attack, physical attacks

Discipline

Artificial Intelligence and Robotics | Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

IEEE Robotics and Automation Letters

Volume

7

Issue

4

First Page

9334

Last Page

9341

ISSN

2377-3766

Identifier

10.1109/LRA.2022.3189783

Publisher

Institute of Electrical and Electronics Engineers

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/LRA.2022.3189783
