Machine learning is a branch of artificial intelligence and computer science that focuses on the use of data and algorithms to imitate the way that humans learn. It can be used to help people with hearing loss by improving hearing aid algorithms to provide better speech recognition in noisy environments.
Hearing Loss Makes It Harder to Communicate
One of the biggest ways hearing loss can impact you is that it makes it harder to communicate. It can do this in several ways:
- Make speech sound muffled so that you often have to ask people to repeat themselves
- Cause you to mishear what people are saying, making conversations difficult to follow
- Make it especially hard to differentiate speech in places with background noise or where multiple people are talking at once
Because it’s difficult to hear, your brain must work harder to pick up on sounds. This can leave you feeling frustrated, depressed and mentally fatigued. Eventually, you may isolate yourself from others because socialization becomes too exhausting. Unfortunately, this can worsen your hearing loss, as well as negatively affect your physical, mental and cognitive health.
Hearing Aids Help, But Can Still Struggle with Background Noise
Current hearing aids make it easier to understand speech in most environments. This is thanks in large part to the continued improvement of hearing aid algorithms, which work by digitizing and processing incoming sound before delivering an amplified version into the ear.
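At its simplest, that digitize-then-amplify pipeline can be sketched in a few lines of code. This is only an illustration: the function names are made up, and a single flat gain stands in for the multi-band compression and noise reduction a real hearing aid performs.

```python
# Minimal sketch of the digitize -> process -> amplify pipeline a hearing
# aid algorithm performs. A single fixed gain stands in for the processing
# stage; real devices use multi-band compression and adaptive noise
# reduction. All names here are illustrative, not a real device API.

def digitize(samples, bits=16):
    """Quantize analog-style float samples (-1.0..1.0) to integer codes."""
    max_code = 2 ** (bits - 1) - 1
    return [round(max(-1.0, min(1.0, s)) * max_code) for s in samples]

def amplify(codes, gain_db=20.0, bits=16):
    """Apply a flat gain in decibels, clipping to the digital range."""
    gain = 10 ** (gain_db / 20.0)  # convert dB to a linear factor
    max_code = 2 ** (bits - 1) - 1
    return [max(-max_code, min(max_code, round(c * gain))) for c in codes]

# A quiet input becomes a louder digital output (20 dB = 10x amplitude).
quiet = [0.01, -0.01, 0.005]
louder = amplify(digitize(quiet), gain_db=20.0)
```

The interesting work in a real device happens inside the processing stage, where the algorithm decides how much gain to apply at each frequency and moment in time.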
However, despite the progress these algorithms represent, they still have some limitations:
- Current ways of testing their accuracy are costly and time-consuming
- They cannot easily factor in changes in acoustic environments or adjust for different degrees of hearing loss
This is part of the reason why hearing aids can still have trouble distinguishing between speech and background noise in places like Miramar Bistro.
Machine Learning May Improve Hearing Aid Algorithms
Researchers in Germany have created a human speech recognition model based on deep machine learning that may help solve both these issues.
To train their model, they used recordings of basic sentences spoken by men and women, then masked them with background noise or another voice talking over them. They also degraded the sound quality to mimic hearing loss.
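The masking step described above could be sketched as follows. This is a hedged illustration, not the study's actual pipeline: the function names, the choice of mixing at a target signal-to-noise ratio, and the crude one-pole low-pass "degradation" are all assumptions.

```python
import math

# Illustrative sketch of preparing masked training audio: mix clean
# "speech" with noise at a chosen signal-to-noise ratio (SNR), then
# degrade it with a simple low-pass smoother to mimic reduced audibility.
# Function names and the filter choice are assumptions, not the
# researchers' actual method.

def rms(signal):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise ratio equals snr_db, then mix."""
    scale = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return [s + scale * n for s, n in zip(speech, noise)]

def degrade(signal, alpha=0.5):
    """One-pole low-pass filter: dulls high frequencies, a rough stand-in
    for the reduced high-frequency audibility typical of hearing loss."""
    out, prev = [], 0.0
    for s in signal:
        prev = alpha * s + (1 - alpha) * prev
        out.append(prev)
    return out

# Toy signals: a low-frequency "speech" tone masked by a higher tone.
speech = [math.sin(2 * math.pi * 0.05 * t) for t in range(100)]
noise = [math.sin(2 * math.pi * 0.31 * t) for t in range(100)]
masked = degrade(mix_at_snr(speech, noise, snr_db=5.0))
```

Varying the SNR and the degradation lets a researcher generate many listening conditions from a small set of clean recordings, which is what makes this kind of training data practical to produce.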
After that, they played the masked recordings to participants with normal hearing as well as to those with different degrees of hearing loss.
They found that their model could accurately predict the conditions under which people with different degrees of hearing loss would mishear more than 50% of words, across a variety of listening environments. The hope is that, with more time and research, this model could offer insight into creating a better, more personalized listening experience.
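The "more than 50% of words misheard" threshold comes down to a simple word-level comparison between what was played and what a listener reports. The sketch below shows one naive way to score it; a real study would use stricter, position-aware scoring rules, and this helper is purely hypothetical.

```python
# Hedged sketch of a word-level intelligibility score: the fraction of
# words a listener misheard, comparing the sentence played against the
# sentence reported. A naive word-by-word comparison; real speech tests
# use more careful scoring.

def fraction_misheard(spoken, heard):
    """Fraction of spoken words not matched word-for-word in the response."""
    spoken_words = spoken.lower().split()
    heard_words = heard.lower().split()
    matches = sum(a == b for a, b in zip(spoken_words, heard_words))
    return 1 - matches / max(len(spoken_words), 1)

# A listening condition "fails" when more than half the words are misheard.
score = fraction_misheard("the cat sat on the mat", "the bat sat on a map")
```

Plotting scores like this against noise level is how researchers find the point where intelligibility drops below 50%, the threshold the model was trained to predict.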
If you have additional questions about hearing aid technology or wish to schedule an appointment, contact North Shore Audio-Vestibular Lab today.