
Using Wearables and Machine Learning to Help With Speech Disorders

We take a look at how IoT wearables combined with machine learning are helping to diagnose and treat people with speech disorders, and at the progress being made so far.

By Adi Gaskell · Oct. 22, 16 · News

Speech is a fundamental aspect of human behavior, yet it remains something many of us struggle with. It's believed that around 1 in 14 adults in the United States has some kind of voice disorder, and our limited understanding of such disorders makes them difficult to both diagnose and treat.

A team from MIT and Massachusetts General Hospital believes that machine learning can play a part in better understanding speech disorders.

In a recent paper, they describe using a wearable device to collect accelerometer data that detects differences between people with muscle tension dysphonia (MTD) and a control group. After the individuals with MTD received therapy for the condition, their vocal behaviors appeared to converge with those of the control group.

“We believe this approach could help detect disorders that are exacerbated by vocal misuse, and help to empirically measure the impact of voice therapy,” the authors say. “Our long-term goal is for such a system to be used to alert patients when they are using their voices in ways that could lead to problems.”

Machine Learning and IoT

The team used unsupervised learning to try to gain a better understanding of just when vocal misuse was occurring, and of the correlation between misuse and the accelerometer data.

“People with vocal disorders aren’t always misusing their voices, and people without disorders also occasionally misuse their voices,” the authors explain. “The difficult task here was to build a learning algorithm that can determine what sort of vocal cord movements are prominent in subjects with a disorder.”

So, the team divided participants into two groups depending on whether or not they had a voice disorder. The two groups then went about their lives as normal while wearing an accelerometer device that captured the motion of their vocal folds.

The data was then analyzed, with over 110 million glottal pulses captured during the test period. There was a noticeable difference in the clustering of these pulses between the two groups, but this difference vanished once the voice disorder group received therapy for their condition.
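The article doesn't reproduce the researchers' actual pipeline, but the core idea — cluster per-pulse features without labels, then compare how each group's pulses distribute across the clusters — can be sketched on synthetic data. The feature choice, cluster count, and blob positions below are all invented for illustration, and a minimal k-means stands in for whatever clustering method the team used.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    # Spread the starting centers out so each blob gets one.
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pulse to its nearest center, then update centers.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Synthetic stand-in for per-pulse features (e.g. amplitude and duration
# of each glottal pulse derived from the accelerometer signal).
rng = np.random.default_rng(1)
control = rng.normal([0.0, 0.0], 0.5, size=(500, 2))
patients = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(350, 2)),  # normal phonation
    rng.normal([3.0, 3.0], 0.5, size=(150, 2)),  # misuse-like pulses
])

centers, labels = kmeans(np.vstack([control, patients]), k=2)

# Compare how each group's pulses distribute across the clusters: a
# cluster occupied almost exclusively by patients marks behavior
# specific to the disorder.
control_share = np.bincount(labels[:500], minlength=2) / 500
patient_share = np.bincount(labels[500:], minlength=2) / 500
print("control cluster shares:", control_share)
print("patient cluster shares:", patient_share)
```

On this toy data, the control group's pulses land almost entirely in one cluster, while the patient group splits across both — the kind of distributional gap the study reports, and the gap that closed after therapy.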

The study is important because it is the first of its kind to use machine learning to provide clear evidence of the impact voice therapy has on patients.

“When a patient comes in for therapy, you might only be able to analyze their voice for 20 or 30 minutes to see what they’re doing incorrectly and have them practice better techniques,” the team explains. “As soon as they leave, we don’t really know how well they’re doing, and so it’s exciting to think that we could eventually give patients wearable devices that use round-the-clock data to provide more immediate feedback.”

The team hopes to further develop the approach so that it can help to diagnose specific disorders and potentially even provide insight into how those disorders work.

This could potentially be done via a smartphone app that provides biofeedback to help patients better manage their conditions and adopt healthier vocal behaviors.
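The article stops at the idea of app-based biofeedback, but the feedback loop it imagines can be sketched as a rolling-window monitor: each incoming pulse arrives pre-classified as misuse or not (here simply a boolean), and an alert fires when the recent misuse rate crosses a threshold. The window size and threshold below are invented for illustration.

```python
from collections import deque

def make_misuse_monitor(window=100, alert_ratio=0.25):
    """Return a function that consumes one pulse classification at a
    time (True = misuse) and reports whether the share of misuse
    pulses in the recent window warrants an alert."""
    recent = deque(maxlen=window)

    def observe(is_misuse):
        recent.append(bool(is_misuse))
        ratio = sum(recent) / len(recent)
        return ratio >= alert_ratio
    return observe

monitor = make_misuse_monitor(window=10, alert_ratio=0.3)
stream = [False] * 8 + [True] * 4   # healthy pulses, then a misuse burst
alerts = [monitor(p) for p in stream]
print(alerts)  # the alert fires once the burst fills enough of the window
```

A windowed ratio rather than a per-pulse flag matters here because, as the authors note, healthy speakers also occasionally misuse their voices; only a sustained pattern should trigger feedback.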


Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
