When you’re on the cutting edge of AI development, it’s all but guaranteed that some of your more extreme ideas will wind up in a development holding pattern. And for Microsoft, this is one of those times.
Company leaders recently shared more details about a chatbot project that will be staying in the concept phase, at least for now. The reason is a good one: it's too disturbing.
Microsoft obtained a patent last month on a conversational chatbot that could be custom-made to resemble a specific person such as, according to the patent filing, a friend, relative, celebrity or historical figure.
But it’s not just about physical likeness: remember, this is a “chat” bot. Microsoft’s technology can scrape data from social networks and other inputs, like letters or recordings, to create a persona for the device, allowing a user to carry on a conversation, of sorts, with whomever the bot is modeled after. Industry observers point to an episode of Black Mirror, a dystopian series some liken to a modern Twilight Zone, in which a character uses an app to carry on a relationship with her dead boyfriend. Microsoft’s bot could be trained to do much the same thing: converse in the character of a specific person.
Turns out not everybody is up for that, which is probably what led Tim O'Brien, Microsoft's general manager of AI programs, to tweet recently that, despite the patent, “there’s no plans for this” bot. He also acknowledged that, “yes, it’s disturbing.”
Even so, the underlying capability isn’t going away. As CNN points out, “the patent does indicate that the possibilities for artificial intelligence have moved beyond creating fake people to creating virtual models of real people.”
O’Brien went on to clarify that the patent was filed in 2017, predating changes Microsoft has since made to its ethical approach to AI.