A Personalized Acoustic Interface for Wearable Human–Machine Interaction


Communication and interaction with machines are changing the way we live. However, developing an acoustic interface for human–machine interaction that is simultaneously waterproof, wearable, high-fidelity, and highly accurate remains a grand challenge. Herein, a waterproof acoustic sensor (WAS) is reported as a wearable translation interface for communicating with machines. Owing to the sound-response ability of its internal microparticles, the WAS exhibits a significantly broad frequency response range of 0.1–20 kHz, covering almost the entire human audible range. The WAS is stable against human perspiration, responds omnidirectionally, and offers an excellent frequency detection resolution of 0.0001 kHz (0.1 Hz). With this collection of compelling features, the WAS can serve as a wearable acoustic human–machine interface and as a high-fidelity auditory platform for music recording. Moreover, the WAS-based acoustic interface achieves a remarkable 98% accuracy for speech recognition with the assistance of an artificial intelligence algorithm. Finally, the WAS-based acoustic interface demonstrates speaker verification and identification for implementation in highly secure biometric authentication systems, as well as wireless control of an intelligent car via speech recognition. Such a WAS-based acoustic interface represents the advancement of high-fidelity translation platforms for human–machine interaction toward practical applications, including the Internet of Things, assistive technology, and intelligent recognition systems.
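As context for the reported 0.0001 kHz (0.1 Hz) resolution: in discrete Fourier analysis, the spectral bin spacing is Δf = 1/T, so distinguishing tones 0.1 Hz apart requires an analysis window of at least about 10 s. The sketch below is a generic illustration of this relationship (it is not the paper's analysis pipeline; the sample rate and window length are hypothetical choices), showing that two tones separated by 0.1 Hz resolve into distinct FFT peaks when the window is long enough.

```python
import numpy as np

# Hypothetical parameters, not from the paper:
fs = 40_000          # sample rate in Hz, comfortably above 2 x 20 kHz
T = 20.0             # window length in s -> bin spacing 1/T = 0.05 Hz
N = int(fs * T)
t = np.arange(N) / fs

# Two tones separated by exactly 0.1 Hz (the reported resolution)
signal = np.sin(2 * np.pi * 1000.0 * t) + np.sin(2 * np.pi * 1000.1 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(N, d=1 / fs)

# The two strongest bins should sit ~0.1 Hz apart
top_two = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(top_two)  # two distinct peaks near 1000.0 and 1000.1 Hz
```

With a shorter window (e.g. T = 1 s, bin spacing 1 Hz) the two tones merge into a single peak, which is why a sub-hertz resolution claim implicitly constrains the acquisition time.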

Advanced Functional Materials
Xiao Xiao (肖潇)
PhD researcher

My research focuses on bioelectronics, bioinspired materials, and nanotechnology for energy and healthcare applications.