Published: Aug. 7, 2019

On March 23, 2016, Microsoft activated an artificial intelligence chatbot on Twitter called Tay. The company described the project as an experiment in conversational understanding. The more users chatted with Tay, the "smarter" Tay got by learning from the context and use of the language sent to it.

Think of Tay (the team chose the avatar of a young girl) as a sort of parrot that could learn to string ideas together with enough coaching from the users she encountered. Each sentence said to "her" went into her vocabulary, to be used as a response later, whenever it was judged appropriate. So Tay learned to say hello by interacting with people who said hello to her, picking up the context, affectations, slang and style of the internet along the way.

"Can I just say that I am stoked to meet u? humans are super cool" read one early tweet with suspect punctuation and spelling.

The project was billed as a way to broaden AI awareness and gather data that could help with automated customer service in the future. However, it wasn't long before Tay's "speech" became contradictory and troubling.

Continue reading the original article here.