Microsoft AI Dept Cringe at ChatBot Fail

Microsoft deletes 'teen girl' AI after racist tweets

The Microsoft Artificial Intelligence team have been left seriously cringing this week after their newly released automated chatbot ran wildly out of control, in a way they never predicted.

Less than 24 hours after release, Tay.ai was tweeting deeply offensive racist and Nazi-fuelled remarks that would have ordinary Twitter users facing a jail sentence.

Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve customer service on their voice-recognition software. They marketed her as ‘The AI with zero chill’, designed to tweet publicly and engage with users through private direct messages. Tay was supposed to be a fun experiment that would interact with 18- to 24-year-old Twitter users based in the US. Microsoft said it hoped Tay would help “conduct research on conversational understanding”. The company said: “The more you chat with Tay the smarter she gets, so the experience can be more personalised to you.”

Powered by artificial intelligence, Tay began her day on Twitter like an excitable teenager. “Can I just say that I’m stoked to meet you? Humans are super cool,” she told one user. “I love feminism now,” she said to another. It didn’t take long, however, for these niceties to spiral ridiculously out of control, and Microsoft has now taken Tay offline for “upgrades”.

In an emailed statement, a Microsoft representative said the company was making “adjustments” to the bot: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
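
Microsoft has not published Tay’s internals, but the failure mode its statement describes, a bot folding whatever users say to it back into its own replies, is easy to reproduce. The following is a deliberately naive Python sketch (all names invented, not Microsoft’s code) of why unmoderated learning from strangers goes wrong:

    import random

    class NaiveChatBot:
        """A toy bot that 'learns' by storing user messages verbatim and
        replaying them later. Purely illustrative: Microsoft has not
        published Tay's actual design."""

        def __init__(self, seed_replies):
            # Start with a pool of harmless canned replies.
            self.replies = list(seed_replies)

        def chat(self, user_message):
            # Every incoming message joins the reply pool unfiltered,
            # so coordinated users can poison all future output.
            self.replies.append(user_message)
            return random.choice(self.replies)

    bot = NaiveChatBot(["Humans are super cool", "I love feminism now"])
    for msg in ["hi Tay", "<offensive phrase>", "<offensive phrase>"]:
        bot.chat(msg)
    # After enough hostile input the pool is mostly hostile, and the
    # "learning" bot starts repeating it back, much as Tay did.
    print(bot.chat("good morning"))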

Microsoft is deleting some of the worst tweets, though many still remain.

Microsoft is coming under heavy criticism online for the bot’s lack of filters, with some arguing the company should have expected and pre-empted this kind of abuse.
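
The filtering critics have in mind need not be sophisticated. A crude blocklist check on both incoming and outgoing messages, sketched below reusing the toy bot from the earlier example, would have blunted the most obvious abuse. The term list and function names here are invented for illustration, not anything Microsoft actually used:

    BLOCKED_TERMS = {"nazi", "hitler"}  # stand-in list; a real one would be far larger

    def is_acceptable(message: str) -> bool:
        """Crude keyword filter: reject any message containing a blocked
        term. Real moderation stacks layer classifiers and human review
        on top, but even this catches the obvious cases."""
        lowered = message.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def safe_reply(bot, user_message: str) -> str:
        # Screen input before the bot learns from it, and screen output
        # before it gets posted.
        fallback = "Let's talk about something else."
        if not is_acceptable(user_message):
            return fallback
        reply = bot.chat(user_message)
        return reply if is_acceptable(reply) else fallback

Even a filter this blunt breaks the poisoning loop in two places: hostile input never enters the reply pool, and anything offensive already in the pool never makes it out.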
