
Using Artificial Intelligence for Visual Impairments


Years of research have gone into the AI that enables these devices to recognize objects, read text aloud, and describe emotions.


I recently attended Tech Open Air, an annual tech festival here in Berlin. I am always interested to see what people are working on, especially when it comes to hardware. My interest was piqued when I noticed that three hardware companies were exhibiting assistive technology for the visually impaired. It is estimated that as many as ten million Americans are blind or visually impaired. Legal blindness refers to central visual acuity of 20/200 or less in the better eye with the best possible correction, or a visual field of 20 degrees or less. Further, each year an estimated 75,000 more people in the United States become blind or visually impaired.

What exists at present is a range of apps and wearable tech that use AI to recognize and verbally narrate the world around the visually impaired. Let's take a look!

Seeing AI


Microsoft has released an iPhone app for people who are blind or have significant visual impairment. When the user points the smartphone's camera at something, the app narrates what it sees. Using facial recognition algorithms, it recognizes saved friends and describes the emotions of those around you. It can read text aloud, including signs, and can also scan documents and read them in full. It recognizes bank notes and is equipped with a barcode scanner to identify items in the supermarket or pantry.

An experimental "Scenes" feature allows the app to describe what's going on in a photograph, though it still needs some refinement. Seeing AI can be downloaded from the App Store for free.
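Microsoft hasn't published Seeing AI's internals, but the scene-narration building block is available as a public service. Here's a minimal Python sketch of turning a camera frame into a caption using Azure's Computer Vision REST API; the endpoint, key, and file name are placeholders you'd supply yourself, not anything taken from Seeing AI.

```python
# A minimal sketch of "describe what the camera sees," assuming Azure's
# public Computer Vision API as the captioning backend. ENDPOINT, KEY,
# and snapshot.jpg are placeholders.
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
KEY = "<your-subscription-key>"

def describe_image(image_bytes: bytes) -> str:
    """Return a one-sentence natural-language caption for an image."""
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Description"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    # The service returns several candidate captions with confidences.
    captions = resp.json()["description"]["captions"]
    return max(captions, key=lambda c: c["confidence"])["text"]

with open("snapshot.jpg", "rb") as f:
    print(describe_image(f.read()))  # e.g. "a person sitting at a table"
```

A real app would feed this caption to the platform's text-to-speech engine, which is essentially the narration loop these apps are built around.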

NavCog

I love me an open-source project, and the Carnegie Mellon University Cognitive Assistance Laboratory and IBM have been working on a navigation app called NavCog that uses BLE beacons and various smartphone sensors together with a new localization algorithm that works both indoors and outdoors. The app analyzes signals from Bluetooth beacons located along walkways and from smartphone sensors to enable users to move without human assistance, whether inside campus buildings or outdoors.

The first set of cognitive assistance tools for developers is now available via the cloud through IBM Bluemix. The open toolkit consists of an app for navigation, a map editing tool, and localization algorithms that help the blind identify in near real time where they are and which direction they are facing, and that provide additional information about the surrounding environment. The computer vision navigation tool turns smartphone images of the surroundings into a 3D space model to improve localization and navigation for the visually impaired.
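NavCog's actual localization (the code is published on GitHub under the HULOP project) is far more sophisticated than anything that fits in a blog post, but the core idea of estimating position from beacon signal strength can be sketched in a few lines of Python. The beacon coordinates and radio constants below are invented for illustration.

```python
# A toy version of beacon-based localization: convert BLE RSSI readings
# to rough distances with a log-distance path-loss model, then take a
# distance-weighted centroid of the beacons heard. All coordinates and
# radio constants are hypothetical.

# (x, y) positions of beacons along a corridor, in meters (made up).
BEACONS = {
    "beacon-1": (0.0, 0.0),
    "beacon-2": (8.0, 0.0),
    "beacon-3": (4.0, 6.0),
}

TX_POWER = -59     # assumed RSSI in dBm at 1 m from a beacon
PATH_LOSS_N = 2.5  # path-loss exponent; ~2 in free space, higher indoors

def rssi_to_distance(rssi: float) -> float:
    """Estimate distance in meters from an RSSI reading."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def estimate_position(readings: dict) -> tuple:
    """Weighted centroid: nearer beacons (stronger signals) count more."""
    wx = wy = total = 0.0
    for beacon_id, rssi in readings.items():
        x, y = BEACONS[beacon_id]
        weight = 1.0 / max(rssi_to_distance(rssi), 0.1) ** 2
        wx, wy, total = wx + weight * x, wy + weight * y, total + weight
    return (wx / total, wy / total)

# Strong signal from beacon-1, weaker from the others: we're near (0, 0).
print(estimate_position({"beacon-1": -62, "beacon-2": -78, "beacon-3": -82}))
```

Raw RSSI is noisy indoors, which is exactly why NavCog fuses it with other smartphone sensors rather than trusting the beacons alone.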

Horus


Horus is essentially a headset with a forward-facing 3D camera plus a pocket unit. The headset runs along the back of the head and holds the cameras and the bone-conduction transducers, while the pocket unit contains the battery and the processor. The two units are connected by a thin wire. The processor constantly runs computer vision algorithms that recognize the objects and places in front of the user.

Once an object is identified, the system reads aloud what it's seeing. Thanks to its 3D vision, it can spot obstacles and warn wearers using different tones. It can also read text and tell faces apart.
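Horus's implementation isn't public, but the obstacle-warning idea is straightforward to sketch: find the nearest thing in the stereo depth map and map its distance to a tone, so closer obstacles sound more urgent. This toy Python version assumes a depth map in meters from the stereo camera; all thresholds are invented.

```python
# A toy sketch of depth-based obstacle warnings: scan the central region
# of a stereo depth map for the nearest obstacle, then map distance to
# pitch so closer obstacles produce higher, more urgent tones. The depth
# format and all thresholds are assumptions for illustration.
import numpy as np

def nearest_obstacle_m(depth_map: np.ndarray) -> float:
    """Nearest valid depth (meters) in the central region ahead of the user."""
    h, w = depth_map.shape
    center = depth_map[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    valid = center[center > 0]  # zeros mean no stereo match at that pixel
    return float(valid.min()) if valid.size else float("inf")

def warning_tone_hz(distance_m: float, max_range_m: float = 4.0):
    """Map distance to pitch: ~1200 Hz when touching, 300 Hz at max range."""
    if distance_m >= max_range_m:
        return None  # nothing close enough to warn about
    closeness = 1.0 - distance_m / max_range_m  # 0.0 = far, 1.0 = touching
    return 300.0 + 900.0 * closeness

# Fake 480x640 depth map with an obstacle about 1.2 m ahead.
depth = np.full((480, 640), 5.0)
depth[200:280, 300:380] = 1.2
print(warning_tone_hz(nearest_obstacle_m(depth)))  # 930.0 Hz
```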

Horus is expected to cost around $2,000. The company is currently testing Horus in communities in Italy and will launch wider beta programs in January next year before a full launch.

MyEye


MyEye is the creation of OrCam, a company started by the co-founders of the collision-avoidance technology company Mobileye. It offers similar functionality to Horus and Seeing AI but can be activated with a simple pointing gesture. It stores up to 150 of your favorite products, such as supermarket items, drugstore necessities, and credit cards, and instantly identifies them using your own recorded voice tag, enabling an independent shopping experience. The device is available in multiple languages, including English, Hebrew, German, French, Spanish, and Italian.
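OrCam hasn't published how product memory works internally; as a rough mental model, you can think of it as matching what the camera sees against a small library of saved items and playing back the owner's recorded label. A hypothetical sketch, assuming some visual-embedding function already exists:

```python
# A hypothetical sketch of "150 saved products with voice tags": store one
# visual embedding plus a user-recorded audio label per product, and look
# items up by cosine similarity. The embedding model, threshold, and class
# design are all assumptions, not OrCam's implementation.
import numpy as np

class ProductMemory:
    MAX_PRODUCTS = 150  # the limit OrCam quotes for MyEye

    def __init__(self):
        self._items = []  # list of (unit-normalized embedding, voice tag path)

    def save(self, embedding: np.ndarray, voice_tag_wav: str) -> None:
        if len(self._items) >= self.MAX_PRODUCTS:
            raise RuntimeError("product memory is full")
        self._items.append((embedding / np.linalg.norm(embedding), voice_tag_wav))

    def lookup(self, embedding: np.ndarray, threshold: float = 0.85):
        """Return the voice tag of the closest match, or None if too far off."""
        probe = embedding / np.linalg.norm(embedding)
        best_score, best_tag = -1.0, None
        for stored, tag in self._items:
            score = float(stored @ probe)
            if score > best_score:
                best_score, best_tag = score, tag
        return best_tag if best_score >= threshold else None
```

On a match, the device would simply play the stored recording back to the wearer, which is why the labels are "your own voice."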

It's currently priced at a rather steep $3,500, and you'd hope insurance companies would cover the cost.

Aira


Aira develops transformative remote assistive technology that connects the blind with a network of certified agents via wearable smart glasses and an augmented reality dashboard that allows agents to see what the blind person sees in real time. Agents, serving as visual interpreters for the blind, help users accomplish a wide range of daily tasks and activities — from navigating busy streets to recognizing faces and literally traveling the world. 

Aira's platform works with AR glasses such as Google Glass and Vuzix. Via a partnership with AT&T, the glasses stream what a blind or visually impaired person would see to a remote agent, who can then help them with things like navigation, reading signs, or shopping. The agent is supported by a learning AI dashboard.

The company recently announced Chloe, a context-based AI agent that connects through the smart glasses. Chloe will be powered by cloud AI services including Amazon Lex (the same technology behind Alexa), Microsoft's Project Oxford, and Google Cloud Vision. The first rollout of this technology will feature Chloe's reading capabilities, eliminating the need for an interactive session with a human Aira agent just to have something read aloud. You can expect to pay $1,400 for the hardware plus a monthly subscription fee of up to $300.
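Aira hasn't described how Chloe is wired up internally, but since Amazon Lex is a public service, here's a minimal sketch of what sending an utterance to a Lex bot looks like from Python with boto3. The bot name and alias are hypothetical.

```python
# A minimal sketch of talking to an Amazon Lex (V1) bot with boto3.
# "ChloeDemo" and "prod" are hypothetical names, not Aira's actual bot.
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

def ask_bot(user_id: str, utterance: str) -> str:
    """Send one user utterance to the bot and return its text reply."""
    resp = lex.post_text(
        botName="ChloeDemo",  # hypothetical
        botAlias="prod",      # hypothetical
        userId=user_id,
        inputText=utterance,
    )
    return resp.get("message", "")

print(ask_bot("user-123", "Read the sign in front of me"))
```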

AiServe 


The only German competitor is Berlin startup AiServe, which is developing smart glasses that offer a service similar to Aira's and OrCam's, with a human agent involved on some occasions. In the future, the agents will be removed thanks to AI and computer vision. The service is comparatively streamlined: rather than juggling a multi-device system, the user simply dons a pair of the company's glasses.

Their biggest competitive advantage is price. The glasses are co-designed and manufactured with a partner in Shenzhen and will retail at around $400, with a monthly subscription package substantially cheaper than the competition's; eventually, the glasses themselves will be free with a subscription. AiServe plans to launch in German and English in 2018, with over 1,000 paying customers already signed up.

Comparing the current AI-powered products designed with the visually impaired in mind, it's worth emphasizing the years of research that have gone into enabling these devices to recognize objects, read text, and describe emotions. The hardware market is a tough one, and products often battle it out in a survival-of-the-fittest scenario, or over who strikes the best deals with insurers, assuming insurers are willing to pay for the millions of aging people who will experience sight deterioration. Market predictions suggest the future customer base is big enough for multiple players; we just have to watch and see who succeeds and who does not.


Topics:
machine learning, AI, facial recognition, visual impairments

Opinions expressed by DZone contributors are their own.
