How AI Is Making the Web More Accessible

In this article, let's see how AI is making the web more accessible.

According to a household survey by the Bureau of Labor Statistics, the rapidly growing US economy has benefited different demographics.

Disabled man sitting in wheelchair with a laptop on his lap.

This includes disabled individuals, a group whose unemployment rate fell to 6.3 percent in April 2019, the lowest since 2008. Still, unemployment among people with disabilities remains almost twice as high as that of the rest of the population.

Thankfully, times have changed, and so have the perceptions regarding disability. Global businesses are slowly changing the way they work to fit the current global narratives surrounding disability and development. Thus, disability inclusion has become a focal point, and the internet is right at the center.

Where Does AI Fit Into the Picture?

As more of everyday life moves online, expectations around accessibility keep rising. The stark digital divide facing people with disabilities, however, has spurred numerous ADA lawsuits over web access since 2015.

AI-powered technologies can help:

  • Overcome general communication, transportation, and accessibility problems
  • Enable disabled individuals to live independently
  • Reduce technology barriers that block people with disabilities from taking part in digital activities such as shopping online

As more AI technology gets integrated into product offerings, businesses are realizing how important it is to empower a greater number of individuals with different abilities, not just from a market standpoint but also to satisfy legal imperatives.

No wonder Microsoft invested $25 million in its AI for Accessibility program, which aims to help disabled individuals with daily life, work, and communication.

Image Recognition

Google launched the Lookout app in 2019 to help blind people learn about their environment through a combination of image processing and machine learning. That's a step in the right direction, and it should point other businesses toward the right thing to do, from both a moral and a business perspective.

An example of Google's Lookout app at work, showing a screenshot of a smartphone taking a picture of a dog, and the software recognizing the object correctly as a dog.

After all, 3.2 billion images are shared on the internet every day. Without the help of AI, blind and visually impaired users would have no way of knowing what these pictures contain.

Facebook was the first social media giant to do something about the issue by launching a revolutionary automatic alt text feature, capable of dynamically describing images to visually impaired and blind individuals. Using neural networks and machine learning, Facebook can recognize the different components in the image and describe each one with amazing accuracy.

    Examples of Facebook's automatic alt-text feature, where the AI recognizes objects in the images such as pizza, tree, sky, people, smiling, outdoor and more.

In another five to seven years, image recognition software will make alt text obsolete. Already, image identification has been implemented in various fields with remarkable success. Large databases and visual sites use it for automated image organization while marketers rely on the technology for creating interactive brand campaigns.
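
Neither Facebook's nor Google's captioning models are public, but the underlying idea can be sketched with an open-source image-captioning model. The snippet below is a minimal illustration only; it assumes the Hugging Face transformers library and the publicly available nlpconnect/vit-gpt2-image-captioning checkpoint, and the file name is a placeholder:

    # Minimal sketch: generate alt text for an image with an open-source
    # captioning model (not Facebook's or Google's actual system).
    from transformers import pipeline

    # Assumption: a publicly available vision-to-text checkpoint.
    captioner = pipeline("image-to-text",
                         model="nlpconnect/vit-gpt2-image-captioning")

    result = captioner("dog.jpg")  # placeholder path to any local image
    alt_text = result[0]["generated_text"]
    print(alt_text)  # e.g. "a dog sitting in the grass" (output will vary)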

AI Technology for Website Accessibility

Fueled by the requirements of people with disabilities, AI building blocks are now being assembled to create complex, creative services that can improve their lives and fulfill tasks on their behalf. Find out how these building blocks overcome disability challenges and contribute to making the online experience more accessible:

accessiBe is a pioneering, AI-powered web accessibility tool that simplifies how site owners and companies make their content accessible to users with disabilities.

It enables automatic, bulk creation of accurate alt text descriptions for all of a website's images, making them accessible to visually impaired users.

On top of that, accessiBe's background application makes sure the site's infrastructure complies with WCAG 2.1, the standard most web accessibility legislation refers to. It can:

  • Enable a single-click option to disable animations, blinking, and flashing content for users with epilepsy
  • Optimize websites for keyboard navigation for people with motor impairments
  • Offer a built-in dictionary explaining expressions, slang, and phrases for people with cognitive disorders
  • Make granular adjustments to colors, fonts, and typography so that content is accessible to people with visual disabilities

A screenshot of accessiBe's AI-driven website accessibility tool in action, showcasing the solution's UI, which lets the user make many accessibility adjustments to a website.
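
accessiBe's own engine is proprietary, so as a rough illustration of just one step such a tool automates, here is a minimal sketch that scans a page for images missing alt text. It assumes the third-party requests and BeautifulSoup libraries, and the URL is a placeholder:

    # Minimal sketch (not accessiBe's implementation): find images on a
    # page that lack alt text and would need a generated description.
    import requests
    from bs4 import BeautifulSoup

    def images_missing_alt(url: str) -> list[str]:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        missing = []
        for img in soup.find_all("img"):
            if not (img.get("alt") or "").strip():
                missing.append(img.get("src", "<no src>"))
        return missing

    # Placeholder URL, used for illustration only.
    for src in images_missing_alt("https://example.com"):
        print("Needs alt text:", src)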

Facial-Recognition Based CAPTCHA Entry

Facial recognition has been a boon of sorts for people with disabilities, though it is not free of privacy and security issues. By analyzing data, typically numerous photos of a person's face from different angles, artificial intelligence can infer who is in front of the camera.

This can prove useful in overcoming challenges associated with online authentication in various contexts.

    An illustration of a man checking in at an airport.

Apple began using facial recognition to unlock iPhones back in 2017, while Microsoft has its proprietary Windows Hello software. Both technologies let users log in using only their face; no passwords are required.

Examples of use cases where users are asked to set up sign-in with their smile or fingerprint.

In spite of existing security flaws and limitations, facial recognition is all set to overthrow traditional CAPTCHA tests, especially as the internet becomes more accessible to people with disabilities. Once the system recognizes the person interacting with it via the camera lens, it can weed out bots effectively while leveling the playing field.
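
Production systems such as Face ID or Windows Hello rely on dedicated hardware and far more robust models, but the core check a face-based CAPTCHA replacement would perform can be sketched with the open-source face_recognition library. The image file names below are placeholders:

    # Minimal sketch (illustrative only): compare a freshly captured frame
    # against a stored reference photo to confirm a real, enrolled user.
    import face_recognition

    enrolled = face_recognition.load_image_file("enrolled.jpg")       # placeholder
    candidate = face_recognition.load_image_file("webcam_frame.jpg")  # placeholder

    enrolled_encodings = face_recognition.face_encodings(enrolled)
    candidate_encodings = face_recognition.face_encodings(candidate)

    if enrolled_encodings and candidate_encodings:
        match = face_recognition.compare_faces(
            [enrolled_encodings[0]], candidate_encodings[0], tolerance=0.6
        )[0]
        print("Human verified" if match else "Face does not match")
    else:
        print("No face detected in one of the images")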

Lip Reading

In 2018, researchers introduced Deep Video Portraits, a technique that uses AI to edit actors' facial expressions so they accurately match dubbed voices, saving time and cutting costs for the film industry. The same software can also correct head poses and gazes in video conferencing and enables new visual and post-production effects.

It is AI-powered technology like this that will soon make dodgy subtitles a thing of the past and allow people with hearing disabilities to enjoy video content online. Integrating the program into a phone would permit hearing-impaired individuals to interpret what others are saying.

Google DeepMind researchers collected more than 100,000 natural sentences from various BBC videos covering a wide range of accents, head positions, lighting conditions, speech rates, and languages, then ran these videos through DeepMind's neural networks, with surprising results.

While the best professional lip readers interpreted only 12.4 percent of the content correctly, the artificial intelligence reached a remarkable 46.8 percent accuracy.

    A screenshot of DeepMind's website, which reads, "What if solving one problem could unlock solutions to thousands more?" In the background, there is a team of three people.

Consider the implications such technology can have on the lives of people with hearing problems. Not only will they be able to consume online videos but the quality and relevancy of automated captions will improve as well.

Automated Text Summaries

Think about it: 2.5 quintillion bytes of data are generated daily, and that figure is only likely to increase. Condensing that much content into something readable is a serious challenge, and the situation is a lot more difficult for people with low literacy skills.

Summaries of long news articles, conversations, and documents allow for faster and more efficient consumption. And that is now possible with the help of AI. Automatic text summarization can have a wide variety of real-world applications including media monitoring, marketing, research, and analysis.

Salesforce, for example, developed a summarization algorithm in 2017. Using machine learning to generate shorter text abstracts, this feature can help people with memory issues, attention deficit disorders, and learning disabilities like dyslexia.

    An example of a summarization algorithm at work, taking specific parts of a text document and summarizing them.

The company has since moved from an extractive model to an abstractive one, which summarizes the text in its own words, introducing related terms and synonyms rather than simply copying sentences.
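
Salesforce's algorithm is not packaged for public use, but the general technique of abstractive summarization can be sketched with an open-source model. The snippet assumes the Hugging Face transformers library and the publicly available facebook/bart-large-cnn checkpoint:

    # Minimal sketch: abstractive summarization with an open-source model
    # (an illustration of the technique, not Salesforce's algorithm).
    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    article = (
        "Long news article text goes here. The summarizer condenses several "
        "paragraphs into a few sentences, which are easier to consume for "
        "readers with attention or memory difficulties."
    )

    summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
    print(summary[0]["summary_text"])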

Real-Time Translation

The sheer number of cultures and languages worldwide can create all sorts of communication problems online. That's why extensive research has gone into building systems that let people communicate without language barriers.

Think Google Translate. Sure, the earlier translations were full of inconsistencies and errors. But all that changed in November 2016, when Google launched its Neural Machine Translation (GNMT) system, cutting error rates by as much as 85 percent.

GNMT also moved translation away from a phrase-by-phrase approach toward one that operates sentence by sentence, translating whole ideas rather than isolated words.

    An illustration of how Google Neural Machine Translation works, in which three languages (English, Japanese, and Korean) are fed into a blue machine and come out as translated, post-GNMT versions.

As artificial intelligence gains greater exposure to a particular language, it learns more and generates accurate translations.
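
Google's production GNMT system is not open source, but sentence-level neural translation of the same flavor can be sketched with a public model. The snippet assumes the Hugging Face transformers library and the Helsinki-NLP/opus-mt-en-de English-to-German checkpoint:

    # Minimal sketch: sentence-level neural machine translation with an
    # open-source model (illustrative only; not Google's GNMT system).
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

    sentences = [
        "The web should be accessible to everyone.",
        "Neural models translate whole sentences, not isolated words.",
    ]

    for sentence in sentences:
        result = translator(sentence)
        print(result[0]["translation_text"])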

    Image of Google's Pixel Buds.

All of that culminated in the release of Google's Pixel Buds in 2017. The earbuds work together with the company's Pixel phones to auto-translate what the user is hearing. The technology works in real time and supports almost 40 different languages, which goes a long way toward bringing down communication barriers for people with disabilities.

Concluding Remarks

The use of AI technology offers new opportunities for people with disabilities. Not only does it make the internet a more accessible space, but it allows for greater workplace support as well. Businesses must remain aware of AI developments that make the workplace accessible for employees with disabilities if they wish to avoid lawsuits.

AI advancements will help businesses meet legal obligations and support a diverse user base. However, ease of use and trust in AI must be fostered over time. Until then, the internet community needs to do its part to create an inclusive and accessible environment for all users, with or without disabilities.
