Sensor Image-Based Environmental Listening Assistant

Period of Performance: 03/15/2016 - 09/30/2016


Phase 1 SBIR

Recipient Firm

Speech Technology/Applied Research Corp.
Bedford, MA 01730
Principal Investigator


DESCRIPTION (provided by applicant): Environmental Listening Assistance from Sensor Images

Recent years have witnessed a veritable explosion of innovative personal audio devices: wireless sound systems, active earpieces, and other personal digital audio appliances. These devices are typically much less expensive, and seemingly much more innovative, than traditional hearing aids. But hearing aids are subject to many constraints that their more innovative and uninhibited cousins ignore, including significant size and power limitations and stringent signal-processing latency requirements. These constraints greatly limit the potential of many innovations, even though they could be relaxed if an aid could be environmentally connected and aware. From a recent NIDCD workshop report: "The result [of these constraints] is a wide 'valley of death' that limits the ability to translate innovations from academic research into widespread commercial use."

We propose a system for environmental awareness based on separated sensor images: the signals that the microphones would produce from each single source if all other sources were quiet. Critically, these images allow determining whether a latency constraint even matters for a given source of interest. A modular system design will allow the system to make use of further innovations in both consumer audio and academic research as they arise.

The many audio-processing capabilities of present-day hearing aids fall into two categories: user-centered techniques, which compensate for hearing deficits (e.g., amplification, dynamic range compression, frequency transposition) and for the undesirable side effects of hearing aid operation (e.g., feedback suppression); and acoustic-space-centered signal enhancement techniques, which compensate for the complexities of a user's acoustic environment (e.g., speaker separation, noise reduction, speech enhancement, echo cancellation).
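The notion of a sensor image can be made concrete with a toy simulation: a microphone's recording is the superposition of the per-source images, i.e., the signal each source alone would produce at that microphone. The NumPy sketch below (all signal names, impulse responses, and parameters are illustrative, not part of the proposal) shows that subtracting one source's image from the mixture recovers the other's:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 16000, 1.0                      # sample rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs

# Two independent sources: a tone (a "talker") and noise (a "fan").
talker = np.sin(2 * np.pi * 220 * t)
fan = 0.3 * rng.standard_normal(t.size)

# Each source reaches the microphone through its own room response;
# the convolution of source and response is that source's sensor image.
h_talker = np.array([1.0, 0.0, 0.5])      # toy impulse responses
h_fan = np.array([0.8, 0.2])
img_talker = np.convolve(talker, h_talker)[: t.size]
img_fan = np.convolve(fan, h_fan)[: t.size]

# The microphone records the sum of the two sensor images, so removing
# one image leaves exactly the other: what separation aims to deliver.
mic = img_talker + img_fan
```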
Broadly speaking, present-day hearing aids do a much better job of compensating for user-centered characteristics than for space-centered characteristics. It is precisely this weakness that this project addresses. We propose to develop an assistive listening product designed to be located in an acoustic space rather than worn by a listener. This Environmental Listening Assistant, or ELA, would be part of the everyday environmental support that hearing aid users could expect to find in their home, car, or office, just as they expect to find appropriate lighting that allows them to see well. The ELA would communicate bidirectionally with any ELA-aware hearing aid and could support multiple hearing aid users simultaneously.

Just as a room's lighting infrastructure typically includes multiple light-emitting devices and a switch, a room's ELA infrastructure would include multiple microphones and a control box. The control box would continually process the microphone signals to create separate output audio channels, one for each active audio source. When a listener wearing an ELA-aware hearing aid entered the room, they would use a simple interface running on their smartphone, smartwatch, or similar device to choose one of the ELA source images to listen to. In this way the ELA would help hearing aid users better hear the sounds they want to hear in complex acoustic spaces.
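The control box's separation-then-selection flow can be sketched with the simplest possible model: an instantaneous (frequency-flat) two-microphone mixture, where separation reduces to inverting a mixing matrix. A real ELA would need convolutive blind source separation with an estimated (not known) mixing process, so this is purely a minimal illustration; the mixing matrix, source choices, and variable names are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16000
# Two sources: a tone and noise, stacked as rows (2 sources x n samples).
sources = np.vstack([np.sin(2 * np.pi * 220 * np.arange(n) / 16000),
                     rng.standard_normal(n)])

# Toy mixing: each microphone hears both sources with different gains.
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])
mics = A @ sources                        # two microphone channels

# Control box: with the mixing known (or estimated by a separation
# algorithm), unmixing recovers one output channel per active source.
images = np.linalg.inv(A) @ mics

# Listener interface: stream one separated channel to the hearing aid.
chosen = images[0]                        # e.g., the "talker" channel
```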