

Google recently demoed their new AI assistant, Duplex, at Google I/O 2018.

It’s an amazing demo to watch from an engineering perspective: basically a combination of natural language processing and text-to-speech that can emulate human speaking patterns. It’s not that much of a breakthrough (more a matter of putting several existing pieces together), but it’s impressive and a good indicator of where we are with regard to truly conversant AI. On the other hand, it’s a staged demo, and it remains to be seen how robust the technology will turn out to be in real life. And I doubt it passes the Turing test. Still, the ovation from the crowd was understandable.

Outside the technical aspect, there is another concern that many in the tech community (outside of that crowd) immediately raised: this is deception. In the demo, the assistant calls up restaurants and basically pretends to be a real human, adding “umms” and other human-like pauses to sound convincing. Not only is that deception, it’s creepy deception. And it’s totally unnecessary for the proposed use cases. If it were obviously a bot calling a restaurant to make a reservation, it could still work; the staff receiving the call would just need to get used to it. People already know how to talk to voicemail, answering machines, and dial-in menus, so it’s not too far of a leap.

It’s problematic from an ethics point of view, so there was some blowback. Google got the hint and soon after announced that they were staying ahead of the ethical concerns and that the AI would declare itself to the other party. This doesn’t erase the concerns about what happens when this kind of technology becomes widespread and available outside of Google, though. (That may not be a concern for a long time to come, since the computational power involved here is almost certainly not yet available to the average consumer.)

It is an understandable fear. While we have reaped the benefits of technological advances over the past couple of decades, there are almost always bad actors who use new technology for nefarious purposes, mostly scams or attempts to sell you things. See: emails from Nigerian princes, spam SMS messages trying to get you to transfer prepaid load, social media bots pushing their agenda, and so on. A future scenario where some bad guy robocalls your non-tech-savvy parents and tries to scam them is not too far-fetched. One might assume regulation would keep them in line, but as you know, bad guys don’t always follow the rules.

Science fiction, of course, is replete with examples of AI that can mimic human communication: Iron Man’s Jarvis, the droids from Star Wars, Data from Star Trek. I believe none of those examples have human “umms”, probably because that seems obviously creepy for some reason. (They do use human expressions, of course, like C-3PO’s “Oh my goodness!”) I’m also reminded of Tad Williams’ Otherland series of books, of which I don’t remember much, but I do recall that AIs were required to answer truthfully if asked whether they were AIs. (It was considered very insulting to ask a human if they were an AI.)

Given how readily we can imagine it, such tech is probably inevitable in the trajectory of human advancement (assuming we don’t bomb ourselves to extinction). I’m sure humanity will eventually adapt (soon inbound phone calls will require some kind of captcha), but there’s bound to be a transition period where it’s problematic.
