The lecture has the following format:
For further information, please contact Prof. Dr. Emanuël Habets.
We live in a noisy world! In all speech-related applications, from hands-free communication, teleconferencing, hearing aids, and cochlear implants to human-machine interfaces such as smart speakers, the speech signal of interest captured by one or more microphones is contaminated by noise and reverberation. Depending on the level of noise and reverberation, the quality and intelligibility of the captured speech can be greatly reduced. It is therefore highly desirable, and sometimes even indispensable, to "clean up" the noisy signals using signal processing techniques before storage, transmission, or reproduction.
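As a point of reference, a commonly used additive signal model for a single microphone observation is x(n) = s(n) * h(n) + v(n), where s is the clean speech, h a room impulse response (reverberation), and v additive noise. The following minimal NumPy sketch (not course material; all parameters are illustrative) simulates such an observation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                                   # assumed sampling rate in Hz

s = rng.standard_normal(fs)                  # placeholder for 1 s of clean speech
# Toy exponentially decaying room impulse response (illustrative, not measured)
h = np.exp(-np.arange(int(0.3 * fs)) / (0.05 * fs)) * rng.standard_normal(int(0.3 * fs))
h /= np.linalg.norm(h)
v = 0.1 * rng.standard_normal(fs)            # additive background/sensor noise

# Reverberant, noisy microphone signal: convolution with the RIR plus noise
x = np.convolve(s, h)[: len(s)] + v
```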
This course discusses both traditional and deep-learning-based methods for noise reduction and dereverberation using one or multiple microphones; a simple single-channel example is sketched below.
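As an illustration of a traditional single-channel approach, the hedged sketch below implements basic magnitude spectral subtraction in the STFT domain. The noise estimate (speech-free initial frames), frame length, hop size, and spectral floor are illustrative assumptions, not the course's reference implementation.

```python
import numpy as np


def spectral_subtraction(x, frame_len=512, hop=256, noise_frames=10, floor=0.05):
    """Toy magnitude spectral subtraction; assumes the first `noise_frames`
    STFT frames of `x` are speech-free and can serve as the noise estimate."""
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * win for i in range(n_frames)])

    spec = np.fft.rfft(frames, axis=1)                      # STFT of the noisy signal
    noise_psd = np.mean(np.abs(spec[:noise_frames]) ** 2, axis=0)

    mag = np.abs(spec)
    # Subtract the noise power, with a spectral floor to limit musical noise
    clean_mag = np.sqrt(np.maximum(mag ** 2 - noise_psd, (floor * mag) ** 2))
    clean_spec = clean_mag * np.exp(1j * np.angle(spec))    # reuse the noisy phase

    # Overlap-add synthesis (Hann analysis window at 50% overlap)
    y = np.zeros(len(x))
    for i, frame in enumerate(np.fft.irfft(clean_spec, n=frame_len, axis=1)):
        y[i * hop:i * hop + frame_len] += frame
    return y
```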
The goal of this course is to provide a strong foundation for researchers, engineers, and graduate students who are interested in the problem of signal and speech enhancement.
The lecture slides can be downloaded here.
Jupyter notebooks that accompany the exercises have been created. To access them, you need to
Further audio-related courses offered by the AudioLabs can be found at: