AI can now identify people as gay or straight from their photo

Algorithm Achieves Higher Accuracy Rates than Humans

A study from Stanford University suggests that a deep neural network (DNN) can distinguish between gay and straight people with 81 per cent accuracy for men and 71 per cent for women. The research was based on a sample of 35,326 facial images of white men and women that were posted publicly on a US dating website. The DNN, a machine learning system, was presented with pairs of images in which one individual was gay and the other straight.

The DNN’s algorithm displayed even higher accuracy rates when presented with five facial images per person: 91 per cent for men and 83 per cent for women. Human judges, when presented with one image, achieved a much lower accuracy rate: 61 per cent for men and 54 per cent for women.
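The pairwise setup described above can be illustrated with a short sketch: the classifier assigns each image a score, and for every pair made up of one gay and one straight individual it counts as correct when it ranks the gay individual higher. The Python below is a hypothetical illustration with invented scores and group sizes; it does not reproduce the study's model or data, only the evaluation idea.

```python
import numpy as np

# Hypothetical illustration of the pairwise evaluation described above.
# The classifier's per-image scores are invented for demonstration only.
rng = np.random.default_rng(0)
scores_gay = rng.normal(loc=0.6, scale=0.2, size=500)       # invented scores
scores_straight = rng.normal(loc=0.4, scale=0.2, size=500)  # invented scores

def pairwise_accuracy(pos_scores, neg_scores):
    """Fraction of (pos, neg) pairs where the positive score is higher
    (ties count as half) -- equivalent to the classifier's AUC."""
    correct = 0.0
    for p in pos_scores:
        correct += np.sum(p > neg_scores) + 0.5 * np.sum(p == neg_scores)
    return correct / (len(pos_scores) * len(neg_scores))

print(f"pairwise accuracy: {pairwise_accuracy(scores_gay, scores_straight):.2f}")
```

With these made-up numbers the sketch prints an accuracy around 0.75; the figures reported in the study came from its own model and dataset, not from anything like this toy calculation.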

According to the research, there were certain trends in facial features that distinguished between gay and straight people. Narrower jaws, larger foreheads, and longer noses were common among gay men, while gay women were more likely to have smaller foreheads and wider jaws.

The authors of the report, Yilun Wang and Michal Kosinski, concluded that homosexual men and women have more androgynous or gender-atypical features, expressions, and grooming styles.

Many are concerned that this kind of software may violate people’s privacy and may be abused for anti-LGBTQ+ purposes. Megan Boler, a professor at U of T’s Department of Social Justice Education, whose areas of expertise include sexual orientation, believes that this kind of psychometric data profiling is dangerous and of little beneficial use for public or private interests.

“One can easily envision how using AI to label facial photos ‘lesbian’ or ‘gay’ can be used in nefarious ways — to hurt individuals, violate privacy, make public information that should be private,” she wrote in an email to The Varsity. “Many employers or governments can discriminate against individuals if they believe they are lesbian or gay.” Boler stated that it is difficult to imagine how the use of this information could benefit individuals or society.

Mariana Valverde, a professor at U of T’s Centre for Criminology and Sociolegal Studies and one of the founders of the Sexual Diversity Studies program, believes that such technologies promote stereotypical thinking about sexual orientation and gender. “I would say that whether using AI or using one’s own eyes, the idea that you can tell who’s gay and who’s straight by merely looking at a picture of their face is ridiculous,” she said.

Graeme Hirst, a professor in the Department of Computer Science, stated that this technology could be an invasion of privacy if it were institutionalized, in the same way that other kinds of profiling could be.

Hirst added that while building a facial ‘gay or straight identifier’ is not very useful, it also could not easily be abused. Given its rate of error, he believes that if individuals or governments wanted to inflict harm on people based on their sexuality, they would find better ways to identify them.

Hirst, however, thinks that technology cannot be blamed for any harm done, nor can its inventors if their intentions are good. The real issue, in his opinion, is rooted in people’s prejudice and intolerance, which lead them to misuse technology. “The real problem here is not that this technology could be used for bad things, the real problem is those bad things exist,” Hirst stated.

The authors of the Stanford study also pointed out in their report that artificial intelligence could eventually be capable of finding links between facial features and political views, personality traits, or psychological disorders.
