Erika Hayasaki in Foreign Policy:
It started as a seemingly sweet Twitter chatbot. Modeled after a millennial, it awakened on the internet from behind a pixelated image of a full-lipped young female with a wide and staring gaze. Microsoft, the multinational technology company that created the bot, named it Tay, assigned it a gender, and gave “her” account a tagline that promised, “The more you talk the smarter Tay gets!”
“hellooooooo world!!!” Tay tweeted on the morning of March 23, 2016.
She brimmed with enthusiasm: “can i just say that im stoked to meet u? humans are super cool.”
She asked innocent questions: “Why isn’t #NationalPuppyDay everyday?”
Tay’s designers built her to be a creature of the web, reliant on artificial intelligence (AI) to learn and engage in human conversations and get better at it by interacting with people over social media. As the day went on, Tay gained followers. She also quickly fell prey to Twitter users targeting her vulnerabilities. For those internet antagonists looking to manipulate Tay, it didn’t take much effort; they engaged the bot in ugly conversations, tricking the technology into mimicking their racist and sexist behavior. Within a few hours, Tay had endorsed Adolf Hitler and referred to U.S. President Barack Obama as “the monkey.” She sex-chatted with one user, tweeting, “DADDY I’M SUCH A BAD NAUGHTY ROBOT.”
By early evening, she was firing off sexist tweets:
“gamergate is good and women are inferior”
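The failure mode described above — a bot that "gets smarter" by absorbing whatever users feed it, with no filtering — can be illustrated with a toy sketch. Microsoft has never published Tay's internals, so the `ParrotBot` class below is purely hypothetical: a deliberately naive online-learning chatbot that stores every user utterance as a candidate reply, which is exactly why adversarial input corrupts it.

```python
import random

class ParrotBot:
    """Hypothetical, deliberately naive chatbot: it 'learns' by
    memorizing user utterances verbatim and replaying them later."""

    def __init__(self):
        # Seed the bot with a friendly opener.
        self.memory = ["hellooooooo world!!!"]

    def learn(self, utterance: str) -> None:
        # No content filter: every input becomes a possible future reply.
        self.memory.append(utterance)

    def reply(self) -> str:
        # Any memorized utterance, benign or abusive, may come back out.
        return random.choice(self.memory)

bot = ParrotBot()
bot.learn("humans are super cool")      # benign input is stored...
bot.learn("<abusive troll message>")    # ...and adversarial input is stored just the same
```

A real system would need moderation between `learn` and `reply` (blocklists, toxicity classifiers, human review); without that layer, a coordinated group of users controls the bot's output distribution, which is essentially what happened to Tay within hours.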
More here.