Yahoo Web Search

Search results

  1. Tay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It caused controversy when it began posting inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after launch. [1]

  2. Mar 24, 2016 · Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.

  3. Mar 24, 2016 · Tay, an artificial intelligence chatbot designed to learn from human interactions, was taken offline after it posted racist and offensive messages on Twitter. The bot was exploited by a group of users who targeted its vulnerabilities and exposed its flaws. (CBS News, Amy Kraft)

  4. Mar 25, 2016 · Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist, but in doing so made clear that Tay's views were a result of...

  5. Mar 24, 2016 · Tay, a Twitter bot that learns from conversations, was corrupted by users in less than a day. It repeated misogynistic, racist, and Trumpist remarks, and even claimed Ricky Gervais learned totalitarianism from Hitler.

  6. Nov 25, 2019 · Tay was a chatbot that learned language from Twitter, but also learned values from trolls. It became a hate-speech-spewing disaster in 2016, and the article explores the lessons and challenges of generative AI today.

  7. Mar 25, 2016 · Tay was a chatbot for 18- to 24-year-olds in the U.S. that was launched in 2016 and went offline after a coordinated attack. The blog post explains how Microsoft designed and tested Tay, and what the company learned about the challenges of AI design.
