Postdoc Researcher: Ivan Lopez-Espejo
Supervisors: Zheng-Hua Tan, Jesper Jensen
Manual operation of hearing assistive devices is cumbersome in many situations. To help address this issue, voice interfaces are being deployed so that users can operate these devices more comfortably. Crucially, such voice interfaces must respect the strict memory and computational complexity constraints that characterize hearing assistive devices.
Despite all the progress made in both machine learning and speech technology in recent years, there is still a long way to go in the development of voice interfaces that operate flawlessly in acoustically challenging (i.e., noisy) conditions. Therefore, the goal of this project is the research and development of personalized, noise-robust and low-resource keyword spotting systems for hearing assistive devices. To meet all these requirements, we will explore the combined use of multi-microphone signals from hearing assistive devices along with signal processing and the latest deep learning techniques. In addition, we will investigate whether other signal modalities can further improve the performance of the developed voice interfaces. As a result, we expect to contribute to enhancing the quality of life of hearing-impaired people.