Google fixes its ‘racist’ algorithm by deleting the gorillas

In June 2015, a Google Photos user discovered that the program had labeled his black friends as gorillas. Google's artificial intelligence was unable to distinguish the dark skin of a human being from that of apes such as gorillas and chimpanzees. That racist bias of the machine forced Google to apologize and to commit to finding a solution to the error.

Two years later, the solution is clear: to keep the program from confusing humans with gorillas, Google has removed gorillas from the search engine. Chimpanzees and monkeys, too.

Wired put this to the test, feeding the program thousands of photos, including images of these great apes as well as other species of monkeys.

The program, designed to classify users' photos automatically using artificial intelligence, responds perfectly when asked to find orangutans, gibbons, baboons, or marmosets, locating them without trouble. But it draws a blank when asked for "monkeys", "gorillas", or "chimpanzees", even when it has a few of them in its library.

Google's patch is to make these animals disappear from the machine's lexicon. The program also refuses to search for "black man" or "black woman".

The way to solve the problem is to erase it: self-censor those labels. "Image-labeling technology is still young and unfortunately not perfect," a Google spokesperson replied, acknowledging the patch. Flickr ran into a similar problem, labeling black people as apes. Facebook's algorithm made it possible to discriminate against users by race. These unexpected byproducts are everywhere.
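
To illustrate the kind of workaround this implies, the sketch below shows a blocklist applied to a classifier's output labels, which makes certain terms unsearchable while leaving the underlying model untouched. This is a hypothetical illustration of the general technique, not Google's actual code; the label set, function name, and scores are invented for the example.

```python
# Hypothetical sketch: suppress a blocklist of sensitive labels before
# they reach search or tagging, instead of fixing the underlying model.
# Label names, function names, and scores are illustrative only.

BLOCKED_LABELS = {"gorilla", "chimpanzee", "chimp", "monkey"}

def filter_labels(predictions):
    """Drop blocklisted labels from a classifier's (label, score) output."""
    return [
        (label, score)
        for label, score in predictions
        if label.lower() not in BLOCKED_LABELS
    ]

# The model may still recognize the concept internally,
# but the blocked label never surfaces to the user.
raw = [("gorilla", 0.91), ("primate", 0.72), ("outdoor", 0.40)]
print(filter_labels(raw))  # [('primate', 0.72), ('outdoor', 0.40)]
```

The design choice is exactly the one the article describes: the error is not corrected, it is simply hidden from view.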

This episode is a perfect example of some of the problems being discovered in the field of artificial intelligence. For instance, algorithms inherit the biases and prejudices present in their databases (which are fed by humans) and in the hands of the programmers who develop them. The innovators and inventors tend to be well-off white men, and that somehow ends up showing in the fruits of their work.

It also shows that the technology behind machine visual recognition is much harder to refine than we sometimes think. Self-driving cars may soon be circulating in our streets, making difficult decisions such as whom to run over. In case of doubt, what happens if the machines struggle to tell an animal from a person? Will they choose to save the lighter-skinned person and run over the darker-skinned one because it might be a monkey?

In addition, this incident illustrates a major problem that artificial intelligence experts are warning about: machines end up being black boxes, opaque and full of secrets even to their own developers. Programmers know what data they have fed the algorithm and what results it produces, but they do not know in detail the processes that take place inside the silicon brain.

When something fails, as in this case, they do not know exactly why; they cannot go straight to fixing the problem because they do not know where it lies. "We can build these models, but we do not know how they work," admitted one specialist who uses artificial intelligence to diagnose diseases.

"Automated decision-making can pose significant risks to people's rights and freedoms, and requires appropriate safeguards," warned a report from AI Now, an institute dedicated to researching the problems arising from the use of artificial intelligence. That report criticized the opacity with which these black boxes operate.

Opaque, in some cases, to their creators, and usually opaque to the society affected by their decisions: finance, healthcare, insurance, the labor market, judicial rulings... all are already shaped by algorithmic decisions.

A recent study on the social perception of robots offered an interesting conclusion in this context: "Those in historically marginalized groups (women, non-whites, and the less educated) prove to be the most fearful of the technology." Surely no coincidence.