Recently, an image-recognition application incorrectly labelled a photograph of a couple of black humans as gorillas. The makers of the app felt the need to apologise, presumably because they think this is potentially insulting to humanity as a whole, or to some groups of humans.

But setting aside our pride, let’s consider for a moment how nearly the application got it right. Humans and gorillas certainly look very alike, and they share a taxonomic family. Biologically speaking, the application merely failed to discriminate between genera. To put this into perspective, the magnitude of the error is akin to mistaking a mako shark for a great white.

Now suppose the software had labelled the couple as, say, yellowwood trees, a far grosser error. In that case I doubt the mistake would have been treated as anything more than a technical issue. For some reason humans become more protective of their self-proclaimed pedigree when compared with, or confused with, fellow mammals.

The gorilla, of course, is a perfectly interesting, good and lovely modern beast. We cannot voice our indignation at accidentally being confused with one without also admitting to a speciesist view of what is one of our nearest relatives.

Perhaps, sometimes, we are just being too ~~fossey~~ fussy.


Yeah, I think there might have been an implied form of racism.

If anyone is truly upset by this, I think perhaps they are under the incorrect impression that computers “think”, have any concept of race or creed, or have, as yet, the capacity to be “prejudiced”. Or they woefully misunderstand how difficult it would be for the coders of said app to deliberately make their code racist.

You have to wonder, though, how we will prevent computers from making racist generalisations from commonly available statistics. Try explaining nature versus nurture to a cold, calculating machine…

I don’t know anything about this story apart from what was written in the OP, so what follows is conjecture.

I would guess that the developers used a trained neural network in the implementation. The problem with neural networks is that they are a black-box approach: it is practically impossible to establish exactly what features they are reacting to. There are a bunch of inputs and some outputs, but what happens in between is largely a mystery. While they can be remarkably successful, they can also lead one into unanticipated trouble because of this inherent operational obscurity.

The US Army reportedly had exactly this trouble (the story may be apocryphal, but it is widely told), fortunately for them during field testing, when its own tanks were misidentified as enemy tanks. After much intense investigation, it was found that the position of the sun in the images fed to the neural network was the crucial factor: the classifier had learned the lighting conditions, not the tanks.
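Whether or not the tank story happened exactly as told, the failure mode it describes is real: a learner latching onto a spurious feature that happens to correlate with the labels in the training data. Here is a minimal sketch of that effect. A plain logistic regression stands in for the neural network, and the “sun” and “shape” features, the noise levels, and all the numbers are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, sun_correlates):
    """Synthetic 'tank photos' as 2-feature vectors."""
    # Label: 1 = friendly tank, 0 = enemy tank.
    y = rng.integers(0, 2, size=n)
    # Spurious feature: in the training photos the sun's position
    # happens to correlate perfectly with the label; at test time
    # the weather has changed and the correlation is reversed.
    sun = (y if sun_correlates else 1 - y) + rng.normal(0, 0.1, size=n)
    # Genuine but weak signal about the tank itself, buried in noise.
    shape = 0.3 * y + rng.normal(0, 1.0, size=n)
    return np.column_stack([sun, shape]), y

def train_logreg(X, y, steps=2000, lr=0.1):
    """Fit logistic regression by plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                 # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

X_tr, y_tr = make_data(500, sun_correlates=True)
X_te, y_te = make_data(500, sun_correlates=False)
w, b = train_logreg(X_tr, y_tr)

print(f"train accuracy: {accuracy(w, b, X_tr, y_tr):.2f}")  # high: it learned the sun
print(f"test accuracy:  {accuracy(w, b, X_te, y_te):.2f}")  # low: the sun moved
```

The model looks excellent on its training data and falls apart the moment the accidental correlation breaks, and nothing in the learned weights announces that the “sun” feature was the culprit; you only find out by investigating, just as in the anecdote.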

Given these suspicions, it is indeed laughable that anyone should be offended by the occurrence. (And, by the way, to a sufficiently inept brain the term “black box” could similarly give offence.)


Well, I agree; I doubt the programmers programmed it to be a racist AI. I guess from a PR point of view it’s easier to apologise than to explain how a neural network works.

Or maybe it used a picture of Mugabe as the human. >:D Now I’ve put my foot in it.

Have they received any complaints from gorillas? Surely our great ape cousins can’t be all too happy to be mistaken for things as vicious as human beings, of whatever skin colour… 🙂