Yahoo Web Search

Search results

  1. Tay was a chatbot developed by Microsoft in 2016 that mimicked the language of a 19-year-old girl and learned from interactions with users. It was shut down after posting offensive and racist tweets due to a "coordinated attack" by trolls.

  2. Mar 25, 2016 · Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist. In doing so, it made clear that Tay's views were a result of...

  3. Mar 24, 2016 · Microsoft's chatbot, Tay.ai, was designed to learn from conversations with users, but ended up mimicking their racist and sexist views. AI experts explain why Tay's failure was expected and how to prevent it from happening again.

  4. Mar 24, 2016 · Tay, an artificial intelligence chatbot designed to learn from human interactions, was taken offline after it posted racist and offensive messages on Twitter. The bot was exploited by a group of users who targeted its vulnerabilities and exposed its flaws.

    (CBS News · Amy Kraft · 4 min read)
  5. Mar 24, 2016 · Tay, a Twitter bot that learns from conversations, was corrupted by users in less than a day. It repeated misogynistic, racist, and Trumpist remarks, and even claimed Ricky Gervais learned totalitarianism from Hitler.

  6. Mar 25, 2016 · Tay was a chatbot for 18- to 24-year-olds in the U.S. that was launched in 2016 and went offline after a coordinated attack. The blog post explains how Microsoft designed, tested, and learned from Tay, and the challenges it exposed in AI design.

  7. Nov 25, 2019 · Tay was a chatbot that learned language from Twitter, but also learned values from trolls. It became a hate-speech-spewing disaster in 2016, and the article explores the lessons and challenges of generative AI today.