Semantic Hearing: Programming Acoustic Scenes with Binaural Hearables

Bandhav Veluri*, Malek Itani*, Justin Chan, Takuya Yoshioka, Shyam Gollakota

Paul G. Allen School of Computer Science & Engineering, University of Washington, USA
Microsoft, One Microsoft Way, Redmond, WA, USA
* Equal contribution

UIST '23: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology


Imagine being able to listen to birds chirping in a park without hearing the chatter from other hikers, or being able to block out traffic noise on a busy street while still hearing emergency sirens and car honks. We introduce semantic hearing, a novel capability for hearable devices that enables them to focus on, or ignore, specific sounds from real-world environments in real time, while preserving spatial cues. To achieve this, we make two technical contributions: 1) we present the first neural network that can achieve binaural target sound extraction in the presence of interfering sounds and background noise, and 2) we design a training methodology that allows our system to generalize to real-world use. Results show that our system can operate with 20 sound classes and that our transformer-based network has a runtime of 6.56 ms on a connected smartphone. In-the-wild evaluation with participants in previously unseen indoor and outdoor scenarios shows that our proof-of-concept system can extract the target sounds and generalizes to preserve the spatial cues in its binaural output.
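To make the system's input/output contract concrete, the following is a minimal interface sketch, not the authors' implementation: a semantic-hearing system consumes a two-channel (binaural) mixture plus a multi-hot query vector over the 20 supported sound classes, and emits a two-channel signal of the same shape so that spatial cues can be preserved. All names here (`extract_target_sounds`, `NUM_CLASSES`, the toy model) are hypothetical.

```python
import numpy as np

NUM_CLASSES = 20  # the paper reports 20 sound classes; ordering here is assumed

def extract_target_sounds(mixture, query, model):
    """Hypothetical interface for binaural target sound extraction.

    mixture: float array of shape (2, n_samples), left/right channels.
    query:   multi-hot array of shape (NUM_CLASSES,) selecting target classes.
    model:   stands in for the paper's transformer-based extraction network,
             which the real system runs causally on a connected smartphone.
    Returns a binaural array with the same shape as `mixture`.
    """
    assert mixture.shape[0] == 2, "expects a binaural (left/right) mixture"
    assert query.shape == (NUM_CLASSES,)
    left, right = model(mixture, query)
    return np.stack([left, right])  # two channels out: spatial cues preserved

# Toy stand-in "model": passes audio through only if any class is selected.
def toy_model(mix, q):
    gate = 1.0 if q.any() else 0.0
    return mix[0] * gate, mix[1] * gate

mix = np.random.randn(2, 16000)        # 1 s of stereo audio at 16 kHz (assumed rate)
query = np.zeros(NUM_CLASSES)
query[3] = 1                           # select one target class, e.g. bird chirps
out = extract_target_sounds(mix, query, toy_model)
assert out.shape == mix.shape
```

The key design point the sketch illustrates is that the output remains two-channel rather than a single mono estimate, which is what allows the binaural output to retain interaural cues.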

[Paper] [Code]