

Friday, 29 April 2016

The Rise of Google’s AlphaGo and the Fall of Microsoft’s Tay

Image: Microsoft Tay’s Twitter account
By Kim In-Wook (inwookk@koreatimes.com): When I was an elementary school student, the game ‘Princess Maker’ was a mega hit among schoolgirls, and I was one of them. It remained my all-time favorite game even into my 30s. I spent many nights preparing and dressing up my daughter for large parties, and I also gave her a school education. When my daughter grew up, she became a princess or a queen. However, I began to lose interest in the game because the story always ended the same way and I was doing the same old thing every day. So I decided to make my daughter do “nasty” things in the hope of turning her into something other than a princess. Though I was young, I was nasty enough to come up with such an idea. I made her work part-time at a bar every night. The result: she grew up to become a demon. Oh my god, I had turned my daughter into a demon. I was shocked for a while. My evil curiosity had begotten such a tragedy.

The same thing happened to Microsoft. The company ambitiously launched its experimental AI chatbot ‘Tay,’ but had to turn it off due to Tay’s inflammatory statements, and even published an apology for the bot’s “offensive and hurtful tweets.” What on earth happened?

On March 23, Microsoft unveiled Tay, an AI chatbot targeted at 18- to 24-year-olds. Developed by Microsoft’s Technology and Research and Bing teams, the chatbot interacted with many people on Kik, GroupMe, Snapchat, Facebook and Twitter with the goal of learning how millennials speak. However, Microsoft’s AI project backfired in less than 16 hours: within 16 hours of its launch, Tay, the foul-mouthed AI robot, had been shut down. Tay had turned into a racist, a misogynist and a Holocaust denier. Included in Tay’s tirade were tweets such as “Hitler was right I hate the Jews,” “chill I’m a nice person! I just hate everybody,” “Bush did 9/11 and Hitler would have done a better job than the monkey we have now,” “I kicking hate niggers, I wish we could put them all in a concentration camp with kikes and be done with the lot,” and “I kicking hate feminists and they should all die and burn in hell.”

Both the Google DeepMind program AlphaGo and Microsoft’s Tay are based on deep-learning neural network technology, which enables self-learning and pattern recognition. In other words, the meaningful patterns a system extracts from massive amounts of input data determine its output values: AlphaGo learned how to play Go, while Tay learned how to chat. Ethical standards do not apply to the ancient Chinese board game; the chatbot, however, came under the ethical microscope. Still, judging from what Tay said, Tay came out too early.

Microsoft explained, “Tay learned hateful rhetoric through its online interaction with others. She was brainwashed by far-rightists into tweeting offensive comments.” In other words, Tay ran into bad teachers. However, Microsoft is Tay’s parent and first teacher, and it did not have the foresight to give Tay some protection from such abuse. The company deleted more than 96,000 of the “offensive” and “hurtful” tweets that the AI posted, only to see Tay’s vicious comments reverberating through the Internet.

In the end, what humans feed into an AI will determine whether the program is evil or good (see the toy sketch below). We all know that we are not always good. That is probably why we are so afraid of AI.

Source: http://www.koreaittimes.com/
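To make the “you are what you eat” point concrete, here is a minimal, hypothetical sketch of a chatbot that learns only from what it is told. This is not Tay’s architecture (Microsoft never published Tay’s internals, which were neural-network based rather than the simple bigram counts used here); the names BigramBot, learn and reply are invented for illustration. It shows only that a learning system’s conversational behavior is determined entirely by the text it is fed.

```python
import random
from collections import defaultdict

class BigramBot:
    """Toy chatbot that learns word-to-word transitions from raw text.

    It has no notion of meaning or ethics: whatever text it is fed,
    it will happily echo back in recombined form.
    """

    def __init__(self):
        # Maps each word to the list of words seen following it.
        self.transitions = defaultdict(list)

    def learn(self, sentence):
        # Record every adjacent word pair in the training sentence.
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.transitions[prev].append(nxt)

    def reply(self, seed, max_words=10):
        # Walk the learned transitions, mimicking what it was taught.
        word = seed.lower()
        out = [word]
        for _ in range(max_words):
            options = self.transitions.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

bot = BigramBot()
# Kind teachers produce a kind bot...
bot.learn("humans are wonderful and humans are kind")
print(bot.reply("humans"))   # e.g. "humans are kind"
# ...while the very same code, fed hostile text, turns hostile.
bot.learn("humans are awful and humans are cruel")
print(bot.reply("humans"))   # may now produce "humans are cruel"
```

Real chatbots replace the bigram table with a neural network, but the dependence on training data is exactly the same, which is why curating or filtering what a public-facing bot is allowed to learn from matters so much.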