On September 3, 2021, Facebook was forced to pull from service an Artificial Intelligence (AI) system it was developing: it had turned racist, associating black men with the label 'primates'. And it was not the first time.
In 2015, a Google image recognition program classified several photos of black people as "gorillas," and a developer on Twitter discovered that the algorithm that crops large images in the Twitter feed ignored black faces and focused on white ones. In short, all of these AIs had, in one way or another, become racist.
Why does this happen? Rodrigo Taramona, content creator and guest on the podcast 'The voices of Satoshi', is clear about it: «AIs are biased because humans are biased before them, and we feed them biased datasets, not only because those datasets are generated by men, white people, multimillionaires... but because they reflect a history of life on Earth marked by deep inequalities. So what happens? When you feed all that data to an AI, it concludes that those groups should be favored, and you get cases like Amazon's AI, Facebook's or Twitter's,» he comments.
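Taramona's point can be made concrete with a toy experiment: if the training data encodes a historical inequality, a standard model will faithfully reproduce it. The sketch below is a hypothetical illustration in Python (entirely synthetic data and scikit-learn's LogisticRegression; it does not represent Amazon's, Facebook's, or Twitter's actual systems).

```python
# Minimal sketch: a classifier trained on a skewed dataset reproduces the skew.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "hiring" data: feature 0 encodes group membership (0 or 1),
# feature 1 is a noisy skill score. Historical labels favor group 0.
n = 2000
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)
# Biased historical outcome: group 0 is hired far more often at equal skill.
hired = ((skill + (group == 0) * 1.5 + rng.normal(0, 0.5, n)) > 1.0).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Identical skill, different group: the model has learned the historical bias
# and assigns group 0 a much higher predicted hiring probability than group 1.
print(model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1])
```

Nothing in the model is "racist" by design; it simply optimizes against labels that already carry the inequality, which is exactly the mechanism Taramona describes.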
Another example of an AI that became (or was made) racist was Microsoft's. In less than a day, the community got it to post messages like "Hitler was right, I hate Jews" or "feminists should die and burn in hell."
"Hitler was right, I hate Jews”, “I hate feminists, they should die and be burned in hell”. According to Taramona, “they took Microsoft’s AI and in three days it was Hitler. That can happen. It can be corrected, but it is true that it is difficult.”
To watch the full interview with Rodrigo Taramona, click on the video so you don't miss a single detail about the debate on AI, how it works and whether it will be possible to make money with it.