A Microsoft executive said Friday that the company was deeply sorry for the unintended offensive and hurtful tweets sent by the company's Tay chatbot.

In March 2016 Microsoft launched an AI bot called Tay, which was meant to learn conversational ability through interactions with real people. What some people may not have considered when they perused Tay's offensive tweets was that the chatbot didn't actually understand what humans were saying to it or what it was responding with. It was simply making a connection between two text strings, Mortensen said.

"All you really see is two crazy people connected on the internet, under the moniker of Microsoft," Mortensen said. "If you look at Reddit or 4chan, any two idiots can be connected on the internet every 10 minutes. There's nothing happening here that doesn't happen all the time."

Rogue chatbots will be as common as PCs crashing - we'll become "immune" to them

"Just think about what happened with Google Photos," Mortensen said. "There will be plenty of examples like Microsoft coming out over the next year that are even more dramatic."

Tay was built to train itself using public text datasets as both the input and the output, so it was inevitable people would attempt to game the system. The experiment with Tay highlighted the poor judgement of developers at Microsoft as much as the limitations of chatbots. Mortensen said it was "naive" of Microsoft not to foresee a potential issue. It seems that in this case both humans and software might not learn the lesson.