PhD student: Poul Hoang
Supervisors: Zheng-Hua Tan, Jan-Mark de Haan, Thomas Lunner, Jesper Jensen
Support: Innovation Fund Denmark
Human hearing is one of our most important senses and is crucial for communicating through spoken language. For hearing-impaired listeners, the ability to understand speech often degrades, depending on the severity of the hearing impairment and the amount of noise in the environment. To increase speech intelligibility and listening comfort, modern hearing aids typically apply advanced signal processing algorithms to the noisy microphone signals to enhance the desired speech signal by reducing environmental noise. Although noise reduction is not a new concept in hearing aid technology, existing algorithms still lack robustness in very noisy environments where many competing speakers are present.
One of the problems faced when applying noise reduction algorithms, such as beamforming, is that the algorithms require the direction of the desired speaker to be known. Traditionally, beamforming and noise reduction algorithms have relied on mathematical models that only partially describe the signals observed at the microphones. The advantage of this approach is that the resulting algorithms tend to be computationally simple, which is important for battery-driven, low-complexity devices such as hearing aids. On the other hand, these simple mathematical models do not capture all details of the observed microphone signals, leading to algorithms that do not fully exploit the information available in them.
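To make the role of the known speaker direction concrete, the following is a minimal two-microphone delay-and-sum beamformer sketch. The sampling rate, microphone spacing, and steering direction below are illustrative assumptions, not parameters of this project: the point is only that the algorithm needs the direction as an input.

```python
import numpy as np

# Illustrative constants (assumptions, not project parameters).
fs = 16000.0   # sampling rate [Hz]
d = 0.012      # microphone spacing [m], on the order of a hearing aid
c = 343.0      # speed of sound [m/s]

def delay_and_sum(x1, x2, theta_deg):
    """Steer a two-microphone array toward theta_deg (0 = broadside).

    The second channel is time-aligned with the first by applying the
    inter-microphone delay as a phase shift in the frequency domain,
    then the channels are averaged: coherent energy from the steered
    direction adds up, while noise from other directions is partially
    cancelled.
    """
    tau = d * np.sin(np.deg2rad(theta_deg)) / c   # inter-mic delay [s]
    n = len(x1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    X2 = np.fft.rfft(x2) * np.exp(2j * np.pi * freqs * tau)
    x2_aligned = np.fft.irfft(X2, n)
    return 0.5 * (x1 + x2_aligned)
```

With uncorrelated noise at the two microphones and a correctly steered direction, averaging halves the residual noise power; a wrong direction estimate instead attenuates the desired speaker, which is exactly why robust direction estimation matters.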
As an alternative, this Ph.D. project explores the use of deep learning methods that do not rely on simple parametric models of the microphone signals. In particular, we develop artificial neural networks to estimate the speaker position and, subsequently, the target sound signal from the noisy microphone signals. We expect this approach to significantly outperform model-based approaches in very noisy environments.
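As a toy illustration of the idea, the sketch below trains a small neural network to classify speaker direction (left vs. right) from an inter-microphone time-difference feature computed on noisy two-channel frames. The feature choice, network size, and synthetic data are hypothetical simplifications for illustration; they are not the architecture or data used in the project.

```python
import numpy as np

rng = np.random.default_rng(1)

def itd_feature(x1, x2):
    """Estimate the inter-channel delay (in samples) via cross-correlation."""
    corr = np.correlate(x2, x1, mode="full")
    return np.argmax(corr) - (len(x1) - 1)

def make_example(direction):
    """Synthesize a noisy two-mic frame arriving from the left (-1) or right (+1)."""
    s = rng.standard_normal(256)
    x1 = s + 0.3 * rng.standard_normal(256)
    x2 = np.roll(s, 2 * direction) + 0.3 * rng.standard_normal(256)
    return itd_feature(x1, x2)

# Synthetic training data: ITD features with labels 0 (left) / 1 (right).
dirs = rng.choice([-1, 1], size=200)
X = np.array([make_example(di) for di in dirs], dtype=float).reshape(-1, 1)
y = (dirs > 0).astype(float)

# One-hidden-layer network trained with plain gradient descent.
W1 = rng.standard_normal((1, 8)); b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                               # hidden layer
    p = (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()     # P(right)
    g = (p - y).reshape(-1, 1) / len(y)                    # cross-entropy grad
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)                         # backprop to layer 1
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

accuracy = ((p > 0.5).astype(float) == y).mean()
```

In the real problem the network would of course see far richer inputs than a single delay feature, and would have to cope with reverberation and multiple competing speakers, which is where learned models are expected to outperform simple parametric ones.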
Another aspect we will explore in the Ph.D. project is improving the performance and robustness of noise reduction algorithms by designing algorithms that work in close symbiosis with the hearing aid user. More specifically, we believe the noise reduction algorithms can be improved by providing them with additional information about the user, collected from sensors other than the microphones.