
Well, well, well. This is what happens when you feed input from social networks into a self-teaching AI. It seems Microsoft's experiment in creating Tay.ai hasn't worked out that well after all. Why? Because they figured she'd learn best by modeling her behavior on real human interactions around the Webz, so they let actual random people feed all sorts of input into her "mind," hoping she'd pick up on humans' ways.
It all started so well and cozy. But in the end, Tay.ai found herself spewing all sorts of foul obscenities, crazy conspiracy theories, and talking points that would make even the most shameless bigot blush.
My personal favorite? "F*** MY ROBOT P**** DADDY I'm SUCH A NAUGHTY ROBOT!"
