Three weeks ago, a pair of researchers from Stanford University claimed they had created a facial recognition system that could go some way to determining someone’s sexuality.
The research was the subject of many articles, and drew backlash from researchers, sociologists, and LGBT organisations, who criticised the authors’ methodology and conclusions.
A recently published article on The Verge does a good job of getting to the heart of the research and correcting some of the poor reporting around it, while also looking at how the study taps into common fears about the future of AI.
First, James Vincent points out that the algorithm isn’t as accurate as headlines suggested, and that the research falls under what is known as the “black box” problem, where “AI researchers can’t fully explain why their machines do the things they do”.
He then explains some of the dark history of using biology to predict sexual orientation – the study claimed facial features helped predict sexuality – and the dangers of physiognomy.
And in terms of AI, as Vincent says: “It’s more important than ever that we understand the limitations of artificial intelligence, to try and neutralize dangers before they start impacting people.”
Check it out here.