Conference Proceeding Article
This work presents Sonicnect, an acoustic sensing system for smartphones that enables accurate hands-free gesture input. Sonicnect leverages the smartphone's embedded microphone to capture the subtle audio signals generated by fingers touching a table. Distinguishable features are extracted by exploiting the spatio-temporal and frequency properties of these subtle audio signals. Sonicnect supports nine commonly used gestures (click, flip, scroll, zoom, etc.) with above 92% recognition accuracy, and the minimum gesture movement can be as small as 2 cm. We conduct extensive real-environment experiments to evaluate its performance, and the results validate the effectiveness and robustness of Sonicnect.
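The time-frequency feature extraction described in the abstract can be sketched with a short-time Fourier transform over the microphone signal. The frame length, hop size, and Hann window below are illustrative assumptions for a sketch, not the paper's actual pipeline:

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Frame the signal, apply a Hann window, and return the magnitude
    spectrum per frame (a spectrogram usable for spatio-temporal and
    frequency features). Parameters are illustrative, not from the paper."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequency bins of the real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic stand-in for a short microphone recording of a finger gesture.
rng = np.random.default_rng(0)
audio = rng.standard_normal(4800)
spec = stft_magnitude(audio)
print(spec.shape)  # one row per time frame, one column per frequency bin
```

Each row of `spec` captures the frequency content of one short time slice, so gestures with different movement patterns leave different trajectories across the rows.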
Acoustic sensing, Audio signal, Gesture input, Hands-free, Recognition accuracy, Spatio-temporal
Computer Sciences | Software Engineering
Software and Cyber-Physical Systems
MobiSys '16: Companion Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services: Singapore, Singapore, June 25-30, 2016
CHANG, Maotian; LI, Ping; YANG, Panlong; XIONG, Jie; and TIAN, Chang.
Poster: Sonicnect: Accurate hands-free gesture input system with smart acoustic sensing. (2016). MobiSys '16: Companion Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services: Singapore, Singapore, June 25-30, 2016. 91-91. Research Collection School Of Information Systems.
Available at: http://ink.library.smu.edu.sg/sis_research/3393
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.