Publication Type

Journal Article

Version

publishedVersion

Publication Date

4-2018

Abstract

Providing secure access to smart devices such as mobiles, wearables and various other IoT devices is becoming increasingly important, especially as these devices store a range of sensitive personal information. Breathing acoustics-based authentication offers a highly usable, and possibly secondary, authentication mechanism for such authorized access, especially as it can be readily applied to small form-factor devices. Executing sophisticated machine learning pipelines for such authentication on such devices remains an open problem, given their resource limitations in terms of storage, memory and computational power. To investigate this possibility, we compare the performance of an end-to-end system for both user identification and user verification tasks based on breathing acoustics on three types of smart devices: smartphone, smartwatch and Raspberry Pi, using both shallow classifiers (i.e., SVM, GMM, Logistic Regression) and deep learning-based classifiers (e.g., LSTM, MLP). Via detailed investigation, we conclude that LSTM models for acoustic classification are the smallest in size, have the lowest inference time and are more accurate than all other compared classifiers. An uncompressed LSTM model provides 80%–94% accuracy while requiring only 50–180 KB of storage (depending on the breathing gesture). The resulting inference can be done on smartphones and smartwatches within approximately 7–10 ms and 18–66 ms respectively, thereby making them suitable for resource-constrained devices. Further memory and computational savings can be achieved using model compression methods such as weight quantization and fully connected layer factorization: in particular, a combination of quantization and factorization achieves 25%–55% reduction in LSTM model size, with almost no loss of accuracy. We also compare the performance on GPUs and show that the use of GPU can reduce the inference time of LSTM models by a factor of 300%. These results provide a practical way to deploy breathing-based biometrics, and more broadly LSTM-based classifiers, in future ubiquitous computing applications.
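The paper's own compression code is not reproduced in this record; the snippet below is only a minimal NumPy sketch of the two techniques named in the abstract, uniform 8-bit weight quantization and low-rank factorization of a fully connected layer, using illustrative layer shapes that are assumptions rather than the paper's actual LSTM dimensions.

```python
import numpy as np

# Hypothetical fully connected weight matrix (shapes chosen for illustration,
# not taken from the paper's model).
W = np.random.randn(256, 128).astype(np.float32)

# --- Uniform 8-bit weight quantization (post-training) ---
# Map float32 weights onto 256 integer levels; storing uint8 values plus a
# scale and offset cuts weight storage roughly 4x.
w_min, w_max = W.min(), W.max()
scale = (w_max - w_min) / 255.0
W_q = np.round((W - w_min) / scale).astype(np.uint8)
W_dequant = W_q.astype(np.float32) * scale + w_min  # reconstructed at inference

# --- Low-rank factorization of the fully connected layer ---
# Approximate W (m x n) by U_r (m x r) @ V_r (r x n) via truncated SVD;
# parameter count drops from m*n to r*(m + n) when r is small.
r = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * S[:r]   # fold singular values into the left factor
V_r = Vt[:r, :]
W_lowrank = U_r @ V_r

rel_err = np.linalg.norm(W - W_lowrank) / np.linalg.norm(W)
reduction = 1 - (U_r.size + V_r.size) / W.size
print(f"relative approximation error: {rel_err:.3f}")
print(f"parameter reduction from factorization: {reduction:.1%}")
```

In practice the two steps compose, as the abstract reports: the factored matrices can themselves be quantized, which is how the combined 25%–55% model-size reduction arises.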

Keywords

Authentication, Breathing Gestures, GMM, IoT, LSTM, MLP, SVM, Security, Wearables

Discipline

Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Volume

2

Issue

4

First Page

158:1

Last Page

26

ISSN

2474-9567

Identifier

10.1145/3287036

Publisher

Association for Computing Machinery (ACM)

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1145/3287036
