She was unhappy with the name of Microsoft's chatbot Tay, meant to interact with 18- to 24-year-olds online, because it was similar to hers. If you don't remember TayTweets, it's the Twitter chatbot that ...
And if a chatbot hangs out with the wrong sort of people online, it can pick up their repellent views. Microsoft discovered this the hard way in 2016, when it had to pull the plug on Tay, a bot that ...
Like Microsoft's now-infamous Tay chatbot, which was trained on Twitter conversations and rapidly descended into racist swearing, the two don't shy away from controversy, variously discussing ...
These days, Tay is the stuff of legend: a pre-ChatGPT exercise in AI chaos in which the publicly available chatbot became really racist, really fast. But as Smith seemed to suggest in ...
In 2016, Microsoft published a blog post titled “Learning from Tay’s introduction.” In it, the corporate vice-president of Microsoft Healthcare detailed the development of a chatbot named Tay, and ...
Microsoft has admitted it faces some "difficult" challenges in AI design after its chatbot "Tay" had an offensive meltdown on social media. Microsoft issued an apology in a blog post on Friday ...
One of the biggest barriers to using AI successfully is bias, which is one of the terms we defined last time, as follows: Bias, in a general ...
Microsoft’s Tay in 2016 is a prime example of chatbot training gone awry: within 24 hours of its launch, internet trolls manipulated Tay into spouting offensive language. Lars Nyman ...
How did a bunch of 4chan users feeding Microsoft's Tay chatbot hateful language become such a potent political force? Elle Reeve joins Endless Thread to discuss her book Black Pill: How I ...