Using AI To Make Hearing Aids Better
How can we improve hearing aids by using AI?
Hearing loss can be debilitating and can significantly hinder an individual's daily life. One of the key challenges is distinguishing voices in noisy environments.
A Danish team believes they may have come up with a solution, deploying AI to both recognize and separate voices even in unfamiliar sound environments. The work, documented in a recently published paper, aims to improve the ability of hearing aids to process sounds without prior knowledge of the listening environment.
"When the scenario is known in advance, as in certain clinical test setups, existing algorithms can already beat human performance when it comes to recognizing and distinguishing speakers. However, in normal listening situations without any prior knowledge, the human auditory brain remains the best machine," the team explain.
Helping to Hear
The project was conducted in two phases, the first of which tackled the challenge of holding a one-to-one conversation in a noisy environment, such as on a train. Here, the team developed an algorithm that amplifies the individual speaker's voice while dampening outside noise, even without any prior knowledge of the specific situation.
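The article doesn't reproduce the team's actual algorithm, but the amplify-the-voice, dampen-the-noise idea is commonly realized with time-frequency masking. The sketch below is a hypothetical illustration of that general technique: a Wiener-style mask boosts spectrogram bins dominated by speech and attenuates those dominated by noise (all function names and values here are invented for illustration).

```python
import numpy as np

def enhance(mixture_mag, speech_mag_est, noise_mag_est, floor=0.1):
    """Apply a Wiener-style mask to a magnitude spectrogram.

    mixture_mag:    |STFT| of the noisy mixture, shape (freq, time)
    speech_mag_est: a model's estimate of the clean-speech magnitude
    noise_mag_est:  a model's estimate of the noise magnitude
    floor:          minimum gain, so noise is dampened rather than zeroed
    """
    mask = speech_mag_est**2 / (speech_mag_est**2 + noise_mag_est**2 + 1e-8)
    mask = np.clip(mask, floor, 1.0)  # keep speech bins, suppress noise bins
    return mixture_mag * mask

# Toy example: 2 frequency bins x 3 frames.
mix = np.ones((2, 3))
speech = np.array([[0.9, 0.9, 0.9],   # bin 0 is speech-dominated
                   [0.1, 0.1, 0.1]])  # bin 1 is noise-dominated
noise = np.array([[0.1, 0.1, 0.1],
                  [0.9, 0.9, 0.9]])
out = enhance(mix, speech, noise)
```

In practice the speech and noise estimates would come from a trained network; here they are hand-set so the masking behavior is easy to see.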
"Current hearing aids are pre-programmed for a number of different situations, but in real life, the environment is constantly changing and requires a hearing aid that is able to read the specific situation instantly," the team explains.
The second part of the research then focused on speech separation, which is vital when there are multiple people speaking, as in a group scenario like a family meal. In this scenario, the team developed an algorithm that could accurately distinguish each of the separate voices, while still dampening outside noise.
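Again as a hypothetical sketch rather than the paper's actual method, speech separation is often framed as predicting one time-frequency mask per speaker and splitting the mixture between them. The network producing the per-speaker scores is imaginary here; only the mask-and-split step is shown.

```python
import numpy as np

def separate(mixture_mag, logits):
    """Split a mixture magnitude spectrogram between speakers.

    mixture_mag: |STFT| of the mixture, shape (freq, time)
    logits:      per-speaker scores from a (here imaginary) network,
                 shape (speakers, freq, time)
    """
    # Softmax over the speaker axis: masks are non-negative and sum to 1,
    # so every time-frequency bin is divided among the speakers.
    exp = np.exp(logits - logits.max(axis=0, keepdims=True))
    masks = exp / exp.sum(axis=0, keepdims=True)
    return masks * mixture_mag  # one spectrogram per speaker

mix = np.ones((2, 4))  # 2 frequency bins x 4 frames
logits = np.stack([np.array([[ 3.0] * 4, [-3.0] * 4]),   # speaker A "owns" bin 0
                   np.array([[-3.0] * 4, [ 3.0] * 4])])  # speaker B "owns" bin 1
sources = separate(mix, logits)
```

Because the masks sum to one at every bin, the separated spectrograms always add back up to the mixture, which keeps the split energy-consistent.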
The team believes that their deep learning-based approach provides unique benefits in being able to distinguish noise to be dampened from voices to be amplified. What's more, it can do this even in unfamiliar environments.
"The power of deep learning comes from its hierarchical structure that is capable of transforming noisy or mixed voice signals into clean or separated voices through layer-by-layer processing. The widespread use of deep learning today is due to three major factors: ever-increasing computation power, increasing amount of big data for training algorithms, and novel methods for training deep neural networks, " they explain.
Taking It to Market
Of course, it's one thing to get the technology working in a lab, but quite another to make it fit for market. At the moment, the system is far too large to be worn by a user, so the next challenge is to make the algorithm efficient enough to run in a device worn behind the ear, like existing hearing aids. The researchers are confident that these challenges are all surmountable, however.
Settings with many people, such as a party, are the biggest challenge. People with normal hearing can usually focus on a specific person of interest and shut out the background noise. This so-called cocktail party phenomenon has generated a great deal of interest from the research community in understanding how the brain achieves it, and the researchers believe their work takes us another step toward that goal.
"You sometimes hear that the cocktail party problem has been solved. This is not yet the case. If the environment and voices are completely unknown, which is often the case in the real world, current technology simply cannot match the human brain which works extremely well in unknown environments. But Morten's algorithm is a major step toward getting machines to function and help people with normal hearing and those with hearing loss in such environments, " the researchers explain.
It's a fascinating project, and you can see the technology in action via the video below.
Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.