Making Machines Sound Human
Computer speech: just string some syllables together, right? We've been working on this for more than half a century, and it turns out one of our most natural, organic, and human behaviors isn't so easy to simulate.
Recently I wrote about a study that tested the potential for automated speech writing. It uses a rule-based approach developed after researching thousands of successful and unsuccessful speeches down the years.
Meanwhile, there are also advances being made in speech analytics, with a Google Glass-like invention giving speakers live feedback on everything from their pitch to their cadence.
Whilst this may sound as though we're rapidly hurtling towards a time when robots will be able to deliver speeches, the actual voice of the machine is still something researchers are struggling to crack.
Making Machines Sound Human
It's a problem that IBM attempted to tackle when developing a voice for Watson. Early attempts didn't sound particularly human, but they weren't quite as foreboding as HAL either.
Since those pioneering days, voices have been added to a range of computerized platforms, from GPS devices to the Siri-like personal assistant on your mobile phone.
They're also increasingly deployed in robotic assistants for the home, the factory, and various medical environments.
Developments in this area revolve around what are known as 'conversational agents': programs that can both understand natural language and respond in kind.
Alas, the field is still a long way from producing a voice that is indistinguishable from a human's. We're a long way from a machine passing an audible Turing test, for instance.
There is also the issue of the 'uncanny valley', which describes our repulsion at things that are broadly similar to us but still noticeably not us. The more human robots become, the more turned off we are.
Mixing and Matching
At the moment, most synthesized speech is generated from a huge database of recorded words and other subsets of speech that can be stitched together into something sensible sounding.
This database consists of humans recording those words, but even then there are distinct challenges in the inflection used to convey emotion and context in particular circumstances. Simply having one recording of each word is therefore not enough, and that is before accents, dialects, and slang are taken into account.
This remains a challenge the industry has not managed to overcome, so whilst synthesized speech is largely functional, it is still some way from really reflecting our own speech.
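To make the mix-and-match idea concrete, here is a minimal sketch of concatenative synthesis: look each word up in a database of pre-recorded clips and join them with short silent gaps. Everything here is illustrative, not from a real TTS system: the "recordings" are placeholder sine bursts, and the `database` and `synthesize` names are my own.

```python
import math

SAMPLE_RATE = 16_000  # assumed sample rate for the illustrative clips

def fake_recording(word: str, duration_s: float = 0.3) -> list[float]:
    """Stand-in for a recorded waveform: a sine burst whose pitch
    varies by word. Purely illustrative, not actual speech audio."""
    n = int(SAMPLE_RATE * duration_s)
    freq = 200 + 10 * (sum(map(ord, word)) % 50)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

# Hypothetical unit database: one clip per word. Real systems store many
# variants per unit (phones, diphones) to cover inflection and context,
# which is exactly the gap the article describes.
database = {w: fake_recording(w) for w in ("hello", "world")}

def synthesize(sentence: str, gap_s: float = 0.05) -> list[float]:
    """Concatenate stored word clips, separated by short silences."""
    gap = [0.0] * int(SAMPLE_RATE * gap_s)
    out: list[float] = []
    for i, word in enumerate(sentence.lower().split()):
        if i:
            out.extend(gap)
        out.extend(database[word])  # KeyError if the word was never recorded
    return out

audio = synthesize("hello world")
# two 0.3 s clips plus one 0.05 s gap at 16 kHz -> 10,400 samples
```

The limitation the article raises falls out of this design: with one clip per word, every utterance of "hello" sounds identical, regardless of emotion or sentence position.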
Whether we really want synthesized speech to become lifelike, however, is another matter. I mentioned at the start of this post a project designed to automate speech writing, and it doesn't seem a stretch to think that once both the content and the delivery are automated, there could be some severe implications.
For instance, the Israeli tech company Imperson is believed to be considering a foray into politics, with politicians deploying an Imperson-developed avatar to represent them online.
So you could have a digital Donald Trump cut loose on Twitter, talking the same way the Donald does in real life.
There is already evidence that organizations are using bots online to engage with stakeholders, but giving those bots the ability to talk fluently and convincingly gives them a whole new level of power.
Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.