AI Can Tell Your Political Affiliation Just by Looking at Your Face (gizmodo.com)
23 points by rntn on April 25, 2024 | 50 comments


From the actual study:

"The algorithm studied here, with a prediction accuracy of r = .22, does not allow conclusively determining one’s political views, in the same way as job interviews, with a predictive accuracy of r = .20, cannot conclusively determine future job performance. Nevertheless, even moderately accurate algorithms can have a tremendous impact when applied to large populations in high-stakes contexts. For example, even crude estimates of people’s character traits can significantly improve the efficiency of online mass persuasion campaigns (Kosinski et al., 2013; Matz et al., 2017). Scholars, the public, and policymakers should take notice and consider tightening policies regulating the recording and processing of facial images."

https://awspntest.apa.org/fulltext/2024-65164-001.html

Compare that to the article:

"A study recently published in the peer-reviewed American Psychologist journal claims that a combination of facial recognition and artificial intelligence technology can accurately assess a person’s political orientation by simply looking at that person’s blank, expressionless face."

The article is misleading. You can't claim the model can accurately assess an individual's political orientation just because it performs better than random chance. Sure, it demonstrates there's a statistically significant link between facial structure and political orientation, but claiming you can use it to "accurately assess a person’s political orientation" is absurd and not what the study found.


Anytime you read an article about a study, it is worth taking the article's claims with a large grain of salt.

This article at least provides a link to the full text of the study paper in the opening line.


It sounds like this is just a side effect of the correlation between obesity and political affiliation in the U.S., e.g. [0] [1]

[0]: https://www.reddit.com/r/dataisbeautiful/comments/h78j5k/adu... [1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4692249/


My guess, before seeing your post, was that it's an age correlation.


The article explicitly corrected for age, and several other factors to try to expressly focus on facial features.


Did either of you read the article?


The article notes

> The algorithm could generally tell what a person’s political orientation was with a high degree of accuracy, even when that person’s identity was “decorrelated with age, gender, and ethnicity,” researchers write.

But the whole thing is looking at photos - how do you decorrelate a photo from age, gender, ethnicity, or weight?


You could read the study and find out.

The simple version is that there are statistical methods that allow you to control for known factors when doing statistical analysis.
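A toy sketch of what "controlling for" a factor means in practice (hypothetical numbers, not the study's data): regress the covariate out of both variables and correlate the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: age drives both the face score and the outcome,
# creating a spurious raw correlation between them.
age = rng.normal(45, 12, n)
face_score = 0.5 * age + rng.normal(0, 10, n)
outcome = 0.3 * age + rng.normal(0, 10, n)

def residualize(y, covariates):
    """Remove the linear effect of the covariates via least squares."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw_r = np.corrcoef(face_score, outcome)[0, 1]
partial_r = np.corrcoef(residualize(face_score, age),
                        residualize(outcome, age))[0, 1]
print(raw_r, partial_r)  # raw_r is sizeable; partial_r is near zero
```

Whether the paper's decorrelation fully removes the covariate's influence is a separate question, but that's the basic mechanism.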


> even when that person’s identity was “decorrelated with age, gender, and ethnicity,”

Doesn't include weight.

Also, https://news.ycombinator.com/newsguidelines.html

> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".


They expressly look at the predictive power of BMI in Study 4

> How does the predictive power of the lower face size and BMI compare with the predictive power of the facial recognition algorithm estimated in Study 1? Would the VGGFace2-based model trained in Study 1 perform better if it was supplemented with explicit measures of lower face size and BMI? To answer these questions, we trained a series of regression models predicting political orientation (while controlling for age and gender) and used leave-one-out cross-validation to estimate prediction performance.

> The predictive power of the lower face size equaled r(434) = .11; p = .02; 95% CI [.01, .20]. BMI’s predictive power was insignificant r(272) = .06; p = .36; 95% CI [−.06, .18]. Combining the VGGFace2-based predictions (estimated in Study 1) with BMI, lower face size, and with both these variables did not improve prediction performance. The highest performance was afforded by combining VGGFace2 predictions with lower face size. Yet, this model’s performance, r(434) = .21; p < .001; 95% CI [.12, .30], was no higher than the performance of the VGGFace2 predictions alone, r(434) = .22; see Study 1.
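For anyone unfamiliar with the method named in that quote, here's a minimal leave-one-out cross-validation sketch on made-up data (not the paper's): refit the regression n times, each time predicting the single held-out observation, then correlate held-out predictions with the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true = 200, 0.3

# Hypothetical predictor/outcome pair with a known true correlation.
x = rng.normal(0, 1, n)
y = r_true * x + np.sqrt(1 - r_true**2) * rng.normal(0, 1, n)

# Leave-one-out cross-validation.
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    X = np.column_stack([np.ones(n - 1), x[mask]])
    beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    preds[i] = beta[0] + beta[1] * x[i]

r_cv = np.corrcoef(preds, y)[0, 1]
print(r_cv)  # close to the true correlation of 0.3
```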


Oh I see, that's in the actual paper, not the linked article. Having now re-read that section a few times, I think the important quote is

> The predictive power of the lower face size equaled r(434) = .11; p = .02; 95% CI [.01, .20]. BMI’s predictive power was insignificant r(272) = .06; p = .36; 95% CI [−.06, .18]. Combining the VGGFace2-based predictions (estimated in Study 1) with BMI, lower face size, and with both these variables did not improve prediction performance. The highest performance was afforded by combining VGGFace2 predictions with lower face size. Yet, this model’s performance, r(434) = .21; p < .001; 95% CI [.12, .30], was no higher than the performance of the VGGFace2 predictions alone, r(434) = .22; see Study 1.


"Additionally, body mass index (BMI) was computed for 274 participants who self-reported their weight and height."

So self-reported data. And only provided by 274 participants out of 591, so less than half. It seems very probable the more overweight you are, the less likely you are to self-report your weight. I still think the most likely explanation is that they are picking up on obesity, which would surely be strongly correlated with "lower face size."


Sure, there are other issues with this study too.

However, simply asserting that something explains the results, while not addressing the ways the study actually looked at that factor, is not a valuable contribution to the conversation.

You are correct that self-reporting adds error to the BMI measure, and that the optional self-reporting may have narrowed the range of BMI values that can be analyzed. Both of these would reduce the measured predictive effectiveness. It does seem like an area where a study better designed to control for BMI would increase our understanding. However, the relative effect size vs. lower face shape indicates to me that it is pretty unlikely that BMI fully explains the model's predictive ability.


[Removed demographic questions, original study already addresses it.]

I wonder whether it's able to determine heritage and make a statistical guess based on that. For instance, if you're a white American with English or Scotch-Irish ancestry, then you're more likely to be in a Southern state and thus vote Republican [1]. If you have Italian, Irish or French ancestry then you're going to be more likely to be in the Northeast and vote Democrat.

What would be most surprising would be if you could show it photos of, say, first cousins who have different political affiliations, and have it guess right. That might suggest some kind of hormonal effect.

The article also suggests other ways it could work, including self-fulfilling hypotheses (people with a more "conservative-looking" face might get treated like they're a conservative, and therefore become more conservative) as well as how other factors like wealth might affect both face and politics.

1. https://vividmaps.com/ethnicity-of-white-americans/


Since they controlled for both ethnicity and age, all you have to do is read the study to answer your questions.


This paper is bad and Gizmodo should feel bad.

Besides making ridiculous and unsupported causal claims (as other commenters have noted), the results are really poor (a correlation of r = .2). They show that the model achieves human-level performance, which is also really poor. Then they run it on natural images of politicians and the results are significantly worse. What the paper actually shows is that facial appearance cannot consistently predict political alignment, but the authors want to try really hard to do it anyway.


No, it can't. It can tell you the statistical correlation between your facial features and voters for different parties. That's not the same thing as is suggested here.


I dunno. This seems like a reasonable summary for a headline. Plus the research is actually looking at political orientation rather than how you vote. Having said that, I'm skeptical of these reinventions of phrenology.


AI Can Guess Your Political Affiliation Just By Looking At Your Face (with much more than the default 50% chance of guessing right)

There.
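For a sense of how much more than 50%: a quick simulation (assuming a bivariate normal score/trait relationship and a two-party median split, which the paper may not exactly match) puts r = .22 at roughly 57% correct.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
r = 0.22  # the paper's reported correlation

# Draw (score, trait) pairs from a bivariate normal with correlation r,
# then binarize both at zero to mimic a two-party guess.
cov = [[1, r], [r, 1]]
score, trait = rng.multivariate_normal([0, 0], cov, n).T
guess_right = (score > 0) == (trait > 0)

accuracy = guess_right.mean()
print(accuracy)  # roughly 0.57 -- only modestly above a coin flip
```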


I can tell by what color you dye your hair.


Idealism: People are unique and should be viewed independently of their appearance.

Reality: You can make correlations based on patterns.

I find it interesting this is acceptable/necessary in marketing. Outside of marketing this is immoral.


Phrenology 2.0!


It's probably just looking for the MAGA hat.


The actual paper shows (r = .22) and (r = .31) so the headline would appear to overstate the case, no?


I'd like to see if this would still work if I took a picture of myself wearing a MAGA hat


I find it amusing how something that’s stereotyping like this is considered news, while other predictions are considered dystopian.

Totally ridiculous.


They explicitly discuss the dystopian aspects.

So, yes, something is ridiculous.


So can I, there are all sorts of clues. Old = Republican. Tattoos = Democrat. Weird hair color = Democrat. Black = Democrat. Etc.

It’s not perfect, but a handful of easy heuristics like this and you’ll do WAY better than chance.


The study cropped hair styles, shaved facial hair, and controlled for age and ethnicity. They even looked at whether BMI could explain the predictive accuracy.

The conclusion is that facial shape alone is as strong a predictor of political affiliation as interview performance is a predictor of job performance.


It figured out I was a libertarian techbro:

>zip up

>0% tan

>wfh background


I bet humans are pretty good at this too.


That is exactly what study 2 of this paper looks at, and the answer is that if you control for age, gender and ethnicity, individual raters can't do much better than guessing (r = .02), but if you aggregate the ratings you start to gain accuracy (which matches results for aggregate guessing in other areas).
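A toy simulation of that aggregation effect (made-up noise levels, not the study's data): each rater sees the true trait through heavy noise, so one rater is near useless but the average of many is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_raters = 500, 100

# Hypothetical setup: rating = true trait + large independent noise.
trait = rng.normal(0, 1, n_people)
ratings = trait[:, None] + rng.normal(0, 10, (n_people, n_raters))

r_single = np.corrcoef(ratings[:, 0], trait)[0, 1]
r_aggregate = np.corrcoef(ratings.mean(axis=1), trait)[0, 1]
print(r_single, r_aggregate)  # single rater near chance; average far better
```

Averaging cancels the independent noise while the shared signal survives, which is the usual wisdom-of-crowds story.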


If you look at the study itself, the accuracy was pretty poor. They hype it up, but it was only marginally better than random (a correlation of r=.13).

So, pretty much bullshit.


Statistics Ph.D. here. This phenomenon is the scourge of our existence in the big-data age. It's why we have endless stories about miracle medical diagnostic methods that can tell everything about you from a retina image or a recording of your voice or gait or whatever. When the method fizzles out in practice, there's not an equivalent news story. And don't get me started on those annoying personality tests for employers claiming to predict job performance.

Everyone can now do a train-test split and use a canned algorithm to get a classifier with good apparent accuracy on the hold-out data set. Like, yeah, so what. That accuracy score is only a fair estimate of how accurate the method will be out in the general population under the most restrictive circumstances (keywords: the generalization problem, the representative training set problem, stationarity, data drift/model drift). And when the subject of study is humans rather than atoms - humans that can change their behaviors in response to being studied and classified (keywords: Hawthorne effect, Goodhart's law) - all bets on future predictive ability are off.
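A toy illustration of the drift problem (hypothetical numbers): the same decision rule that looks fine on a hold-out sample from the study population can fall toward chance on a population where the effect is weaker.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift):
    """Binary labels plus a feature whose class separation is `shift`."""
    y = rng.integers(0, 2, n)
    x = shift * y + rng.normal(0, 1, n)
    return x, y

# "Study" population: a strong (possibly confounded) class separation.
x_train, y_train = sample(5000, 1.0)
threshold = x_train.mean()  # a trivially learned decision rule

x_holdout, y_holdout = sample(5000, 1.0)   # same population
x_deploy, y_deploy = sample(5000, 0.2)     # drifted population

acc_holdout = ((x_holdout > threshold) == y_holdout).mean()
acc_deploy = ((x_deploy > threshold) == y_deploy).mean()
print(acc_holdout, acc_deploy)  # holdout looks fine; deployment near chance
```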

The paper talks about decorrelating age/gender/ethnicity. Statistical corrections (keywords: causal methods, confounding, instrumental variables) aren't magic; they should be thought of as best-efforts attempts to cope with an ultimately intractable problem, not as something that, if done right, solves it. So, it's perfectly possible that the vision model is picking up on subtler demographic clues.

Finding apparent patterns that are correlated with any trait of interest in the population has always been easy, and is now essentially automated. That's superstition / stereotyping / phrenology / pseudoscience, all the stuff humanity worked so hard to get away from. Designing a clean experiment to isolate a consistent effect that's akin to a law of nature? That's science, and it's hard, and the scientific method and statistical methodology can only tell you the pitfalls, not give a recipe for making discoveries.

For most things - we find patterns. They work for prediction for a while, and then they don't. Finding genuine causal phenomena from the data - that's a rare and precious thing.

(no blame on this study's authors; they're highlighting a curious phenomenon, not suggesting that it be used for decision making or claiming to have found a law of society or biology)


Feed Hitler, Mao, Stalin etc. through this and see what it says.


Um… a friend of mine actually made an artwork on this. When life imitates art: https://alexanderpeterhaensel.com/smiletovote


And our efforts to make Charlie Kirk more liberal don't work. smh...


Astonishing. They have some hypotheses on why this could work, for example:

> Exposure to such face-altering factors, in turn, is associated with psychological traits. Liberals, for example, tend to smile more intensely and genuinely (Wojcik et al., 2015), which, over time, leaves traces in wrinkle patterns (Piérard et al., 2003). Conservatives tend to be more self-disciplined and are thus healthier, consume less alcohol and tobacco, and have a better diet (Chan, 2019; Subramanian & Perkins, 2010), altering their facial fat distribution and skin health (Richmond et al., 2018).

https://awspntest.apa.org/fulltext/2024-65164-001.html


> Conservatives tend to be more self-disciplined and are thus healthier

This study suggests the opposite.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4692249/


> Conservatives tend to be more self-disciplined and are thus healthier, consume less alcohol and tobacco, and have a better diet (Chan, 2019; Subramanian & Perkins, 2010)

The first one is unfortunately paywalled.

The second[1] is a letter to the editor and mostly consists of the finding that people who report themselves as Republicans are more likely to answer positively to the question "Would you say your own health, in general, is excellent, good, fair, or poor?".

It only speculates one of the reasons could be conservative values encouraging self discipline (it doesn't seem to entertain the possibility of any correlation between political leanings and biased self reporting).

[1] https://academic.oup.com/ije/article/39/3/930/627191


> Conservatives tend to be more self-disciplined and are thus healthier, consume less alcohol and tobacco, and have a better diet (Chan, 2019; Subramanian & Perkins, 2010)

What?


Having lived in the midwest my entire life, that also does not track with me - at all.

https://scholar.google.com/scholar_lookup?title=Political%20...

https://scholar.google.com/scholar_lookup?title=Are%20republ...


Yeah I don't buy it either. I grew up in a red state and nobody I knew was "healthy". All ate highly processed foods, drank too much on a daily basis, and smoked cigarettes. No exercise. Never went to the doctor because they didn't trust them.


Your bias is showing.


Definitely. The Liberalism of today is rooted in secular Hedonist ideas, while Conservatives overwhelmingly support Judeo-Christian principles. This is evident, for example, in the fact that Liberals largely support recreational drug use while Conservatives largely oppose it.



You can oppose recreational drugs but still be unhealthy. Conservatives in my home state tend to oppose drugs but are gluttonous about junk food, alcohol, and tobacco.


The conservatives I know are definitely not “healthy”


I know conservatives who are healthy and conservatives who are not. I know liberals who are healthy, and liberals who are not. I think this issue is more complicated than some notion of health.


Yeah, that can’t possibly be true.



