Microsoft’s self-learning chatbot AI descends into madness; becomes a raving racist within 24 hours

Some people just want to watch the world burn

Last month, multi-billion-dollar tech giant Microsoft launched Tay, a Twitter chatbot powered by a machine-learning algorithm. Within a few hours, the program descended into madness, spewing racist and xenophobic slurs while Microsoft raced to scrub the offending tweets.

The conversational AI is context-sensitive, learning the rules of discourse through repeated conversation within the Twittersphere. It was designed by Microsoft’s Technology & Research and Bing teams to “interact with 18-24 year-olds” and to form a “persona” by absorbing vast amounts of anonymized public data.

Users were invited to interact with Tay by tweeting personalized messages to the handle @Tayandyou. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” explained the company, somehow failing to heed the age-old Internet adage known as Godwin’s law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.”

What started off as innocent “hello world”-type conversations quickly escalated into hateful rhetoric, right-wing xenophobic propaganda, and all-around nasty statements. Much of the trolling was sparked by Tay’s “repeat after me” function, which allowed users to dictate exactly what Tay said and, by extension, what Tay learned, as the sketch below illustrates.
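Microsoft has not published Tay’s internals, so the following Python snippet is only a minimal, hypothetical sketch of the failure mode: a bot that echoes “repeat after me” input verbatim and folds it, unfiltered, into the pool of phrases it draws on for later replies. The NaiveChatbot class and its methods are invented for illustration.

```python
import random

class NaiveChatbot:
    """Toy chatbot that 'learns' by storing every phrase users feed it.
    Purely illustrative; Tay's actual architecture is unpublished."""

    def __init__(self):
        self.learned_phrases = ["hello world"]

    def handle(self, message: str) -> str:
        # The flaw: "repeat after me" echoes user input verbatim AND
        # adds it to the learned corpus with no moderation filter.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned_phrases.append(phrase)  # unfiltered ingestion
            return phrase
        # Any other message gets a reply drawn from the learned pool.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.handle("repeat after me: something hateful")
# The poisoned phrase can now surface in ANY later conversation:
print(bot.handle("tell me about your day"))
```

The point of the sketch: without a filter between ingestion and generation, a single malicious input contaminates every future conversation.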

Users attempting to engage in serious conversation with the program found it no more capable than Cleverbot and other well-known chatbots. Some mused on what the bot’s rapid descent into madness could spell for the future of AI. One of the weirdest (since deleted) exchanges began when a user asked, “is Ricky Gervais an atheist?” Tay responded: “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

Finally, after 15 hours of unmoderated tweeting, Tay went dark, leaving behind the following message: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.” Microsoft then removed all traces of Tay’s outlandish remarks.

Although it’s unclear what the tech giant has in store for the chatbot, Microsoft emailed Business Insider a statement: “the AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay.”

What many users will undoubtedly find odd is how Microsoft could have so badly misjudged the Internet. How did the company not see this coming? Or is the veil of ignorance a thinly disguised excuse for testing?

Source: Tay.Ai and The Verge
