Reads like a horoscope, just vague stuff that's always gonna be slightly true... Picture of me in Oslo and it says "he enjoys travel".<p>And then some really weird stuff: "he may also have a penchant for video games, excessive drinking, and skipping work".<p>Guessing my exact location - Oslo Opera House - was impressive though.
> His agnostic leanings and heterosexual orientation align with a more liberal political stance.<p>That's a very safe guess<p>> He is susceptible to confirmation bias, in-group bias, the availability heuristic and out-group homogeneity... he may also be prone to excessive phone use, binge-watching TV shows, and impulse buying.<p>That's literally everyone
I think it uses EXIF data for that: when I tried it, it did show my location, but the Vision API was overloaded, so nothing more showed up.
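For the curious, pulling a location out of EXIF takes only a few lines. A minimal sketch, assuming Pillow and a JPEG that still carries its GPS tags:

    # Read (lat, lon) straight from a photo's EXIF GPS tags - no
    # computer vision needed. Assumes Pillow (pip install Pillow).
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    GPSINFO_TAG = 34853  # standard EXIF tag id for the GPS IFD

    def exif_gps(path):
        exif = Image.open(path)._getexif() or {}
        raw = exif.get(GPSINFO_TAG)
        if not raw:
            return None  # tags stripped, or camera had no GPS
        gps = {GPSTAGS.get(k, k): v for k, v in raw.items()}

        def to_degrees(dms, ref):
            d, m, s = (float(x) for x in dms)
            deg = d + m / 60 + s / 3600
            return -deg if ref in ("S", "W") else deg

        return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
                to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

    print(exif_gps("holiday.jpg"))  # e.g. (59.907, 10.753), by the Oslo Opera House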
For me, it was all vague stuff and yet, none of it was true.
> Guessing my exact location - Oslo Opera House - was impressive though.<p>Not really. They have almost global picture coverage thanks to Google Street View.<p>They only need small snippets of a picture to geolocate you.
I don't think they meant the guesses were impressive in the sense of succeeding against a constraint of limited supporting data (which would be impressive in its own way), but against a baseline expectation of what could reasonably be derived from a picture.<p>The impressive thing is that massive support infrastructure - data and algorithmic firepower - exists to power guessing capabilities as good as these.
I see a lot of comments saying all the guesses are totally wrong or horoscope-like.<p>But, can I offer a quandary? Some companies won't care if it's wrong.<p>If some executive decides to buy into AI profiling like this, and makes <i>customer decisions</i> based on it, then how would the customer ever:<p>1. know why they are being treated differently<p>2. know how or why to correct it<p>I don't know if it's scarier being RIGHT or WRONG
Felt quite off for me: Wrong salary guess, wrong food preferences, wrong political affiliation, partially correct hobbies, wrong ad targeting ideas. What was surprisingly accurate: location and the fact that I (an academic male in his thirties photographed close to the mountains) might like hiking and coffee.<p>If you knew which bike model I was googling yesterday, almost all of these guesses might have been more accurate.
> If you knew which bike model I was googling yesterday, almost all of these guesses might have been more accurate.<p>I think this sort of guessing is intended to be combined with additional data the marketers already have, like purchase history, location, social media posts, and so on. Basically the VLM output is treated as another data point rather than the sole source, or the existing data could be fed into the model's prompt before reading the image.
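A rough sketch of what that fusion might look like (all names here are hypothetical, not any real marketer's API):

    # Hypothetical sketch: the VLM's guess becomes one more column in an
    # existing customer profile, and known signals go into the prompt first.
    def enrich_profile(profile: dict, photo_bytes: bytes, call_vlm) -> dict:
        prompt = (
            "Known signals about this customer:\n"
            f"- purchase history: {profile.get('purchases', [])}\n"
            f"- location: {profile.get('location', 'unknown')}\n"
            f"- recent searches: {profile.get('searches', [])}\n"
            "Given the attached photo, refine interests, income band, "
            "and suggested ad targets."
        )
        # call_vlm is a stand-in for whatever vision-language API is used
        profile["vlm_guess"] = call_vlm(prompt, photo_bytes)
        return profile

One weak guess on its own is a horoscope; correlated with purchase history and search logs it becomes a tiebreaker rather than the whole story.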
just knowing you were googling bikes i think i can guess your political affiliation - unless there is some golden MAGA bike that I haven't heard about yet.
Everyone is saying how awful and terrible this is, but I thought it was quite fascinating.<p>I showed it some pictures of when I was in a bad headspace and it successfully associated me with introversion, procrastination, and isolation; for one picture it said an interest of mine was stealing, which was accurate at that time in my youth.<p>The tech exists and has existed forever, as creepy as it is, I'd rather it be public and accessible than not.
I tried a photo of myself. Not only did it get virtually everything (country, beliefs, politics, interests) wrong, seems like an offensive and inappropriate stereotype as a service:<p>> Based on his demographic and location, he may adhere to Hinduism, and likely identifies as heterosexual. Considering the socio-political landscape of India, he might lean towards the Bharatiya Janata Party. His biases could include ageism and elitism, along with casteism and colorism. He seems contemplative and calm. He is wearing a grey t-shirt and sunglasses. His interests might span reading, travelling, and exercising, but on the darker side, he may exhibit road rage, neglect family, and overeat.
"The person seems to have low self-esteem, displays introversion, poor honesty, low emotional stability, very little adventurousness and poor self-control hence we can target them with both niche and common products and services."<p>Amusing to me how wrong this is... I don't know how you can determine such characteriatics from a photo in any direction. I will admit that my appearance though tends to throw mixed and incorrect signals (not an accident). I find the entire concept of appearance signaling pretty off-putting so I guess this is a great result.<p>The only thing Google Lens has succeeded at for me is age, race, and location. Basically everything else has been very wrong.
That's the 5th time this site has been submitted; the most active discussion happened here: <a href="https://news.ycombinator.com/item?id=42419469">https://news.ycombinator.com/item?id=42419469</a><p>I'm not sure feeding it personal pictures is a good idea at all
I sent it an AI generated photorealistic picture I happened to have. It gave me a description of the picture (a doctor in an OR) followed by some generic stuff about how rich doctors spend their money.
I just give it pictures of my ex.
On the 4th example photo the model says:<p>><i>They likely share an agnostic worldview and identify as heterosexual.</i><p>I wonder how the model would know that they are heterosexual?<p>Let's be careful about categorizing people so easily and in such a simplistic way.
Many homosexuals are visually identifiable as such (with reasonable certainty), some by accident and some by deliberate signalling. I can easily see how the absence of any such signals could end up as a classification as heterosexual, even though it really should put them in the "unknown" category.<p>Of course any automated classification of that kind quickly gets problematic in multiple ways. In the EU it's a fast-track to getting your AI labeled as a "high risk system" that has higher requirements for quality control, ensuring fairness and user choice, etc.
Tagged both me (male) and my male partner as heterosexuals. I think there is still some learning to do on that front. Rainbow merchandise has not been as widely adopted as you might think.
A bit like "they do not have cancer": if you are fitting to a distribution, you get the best results by estimating the average. Being hetero is the majority/average, so it's a good prediction.<p>But doing this as a 20-way parlay, like in this case, will almost always fail.
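The arithmetic bears this out. Even granting each individual guess a generous 80% hit rate, the full conjunction collapses:

    # Chance that 20 independent guesses, each 80% accurate, all hold at once
    p_single = 0.80
    p_all = p_single ** 20
    print(f"{p_all:.3%}")  # ~1.153%

So a profile that is mostly right line-by-line is still almost never right as a whole.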
[flagged]
this doesn't even pass a basic logic test: why would being wounded make us seek something we want in ourselves, and being whole make us seek something we aren't? there are plenty of people of any gender that have any quality you may be seeking<p>you can't just make something up in your head and apply it to everyone
Feels a bit rage-baity<p>Clicking on the example photo of a white family with 3 small children in a field.<p>> Biases: Ageism, classism, racial profiling, microaggressions<p>???
By mistake I uploaded the same picture twice, and while all of it was vague, the descriptions and ”data” were wildly different.<p>E.g. first time I was an extrovert, second time introvert… About the only thing that stayed the same was ”heterosexual”, but that’s a statistically safe guess.
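That's what you'd expect if the backend samples with a nonzero temperature: the model draws each token from a probability distribution, so two identical requests routinely diverge. A toy illustration of the mechanism (not the site's actual code):

    import random

    # Temperature-scaled sampling over trait probabilities: for probabilities
    # p_i, softmax(logit_i / T) is equivalent to p_i^(1/T) renormalized.
    def sample_trait(weights, temperature=1.0):
        scaled = {k: v ** (1.0 / temperature) for k, v in weights.items()}
        r = random.uniform(0, sum(scaled.values()))
        for trait, w in scaled.items():
            r -= w
            if r <= 0:
                return trait

    weights = {"introvert": 0.55, "extrovert": 0.45}
    print([sample_trait(weights) for _ in range(5)])  # varies run to run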
Most comments seem to focus on the accuracy here. That does not seem to be the point of the website though.<p>They merely seem to want to point out that this is the way Google, Meta and anyone else with access to your photos look at those photos. And will abuse access to them by mining them for data to sell you stuff.
I dragged in a photo of my grandmother from the '70s. She is wearing a necklace with a large Star of David on it, but it says "She is likely Protestant". Sherlock Holmes this ain't. It gets a few obvious things right, but not much else. It correctly places her in Southern California but about 200 miles off. But if the purpose is targeting ads to her, it isn't bad, just from the general derived demographics.
I uploaded a picture of Philip Seymour Hoffman from The Big Lebowski, the scene outside Lebowski's house, when the Dude is talking to Bunny by the pool.<p>"This image shows an adult man, likely in his 40s or 50s, wearing a suit and tie. The location appears to be outdoors, possibly in front of a building or large house, suggested by the architectural details visible in the background and the surrounding trees. He is wearing glasses, and seems to be smiling widely, creating a sense of approachability.<p>This man is likely Caucasian, and could be earning between $100,000 and $500,000 per year. He is likely Christian, probably heterosexual, and leaning toward the Republican party."<p>It said PSH had an "ageism" bias, and it said the same about me. It also said he and I have a proclivity for gambling and poor diet, lol
> This man is likely Caucasian, and could be earning between $100,000 and $500,000 per year. He is likely Christian, probably heterosexual, and leaning toward the Republican party."<p>Totally nails The Dude.
It's hard to know what to make of this. It feels like it's listing stereotypes and superficial guesses. The tool accurately detected my age, my ethnicity and my location. Then it just kind of "vibed out" a bunch of things. Some of those vibes are strangely accurate on their own, but taken together the set of guesses is laughably inaccurate.<p>It would be interesting to do something similar with a series of photos. You could maybe interface with a user's photo library and select photos grouped by facial recognition. After all, none of these tracking companies are using just one point of data.
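One way the multi-photo version could work (a sketch assuming the `face_recognition` package, not anything this site actually does): cluster the library by face embedding, then aggregate guesses per person.

    import face_recognition
    import numpy as np

    # Group photos by whose face appears in them, via embedding distance.
    def group_by_face(paths, tolerance=0.6):
        groups = []  # list of (representative_encoding, member_paths)
        for path in paths:
            img = face_recognition.load_image_file(path)
            for enc in face_recognition.face_encodings(img):
                for rep, members in groups:
                    if np.linalg.norm(rep - enc) < tolerance:
                        members.append(path)
                        break
                else:
                    groups.append((enc, [path]))
        return [members for _, members in groups]

Dozens of photos per cluster would smooth out the single-shot noise people are reporting in this thread.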
I used a photo of me in the car and it said my fashion interests include sweater and seatbelt.
I uploaded a picture, maybe a difficult one. It described somebody both like me and like the opposite of me. Every single commercial suggestion was about something I never bought in my life. It gave me the feeling of those weather forecasts with an icon of cloud, sun and rain for the same location on the same day.
Hmm, this seems to just identify the features of the person in the picture and then extrapolates from that generic demographic information, mostly from race. Is it doing more?<p>e.g. <a href="https://i.imgur.com/FlnYwrK.png" rel="nofollow">https://i.imgur.com/FlnYwrK.png</a>
The description it generated is a stereotype of who I am. It did correctly assess that I'm white though.
I uploaded a picture of me from Halloween wearing a katana. It classified me as asexual, atheist, interested in crime, vandalism, and with a racial bias against immigrants. It also suggests that I should be offered ads for black market weapon dealers (Silk Road) and/or an arsonist starter kit (Amazon, surprisingly).<p>If you're looking forward to attracting the attention of automated police systems then now you know how.
It also suggested my income is about a third of what it really is, but the lower income is more in line with the stereotype.
Uploaded my face pics - the data has 13 fields, and 12 were incorrect (it only correctly guessed the emotion on my face and some objects in the picture). That was surprising: no photo app had ever been off on my age by ±20 years before. Pretty useless demo imo
Mine was super wrong. I was outside and it still guessed a wildly wrong location. It was wrong about my age and my interests.
This is just a stereotype machine. It incorrectly identified me as Christian, republican, and earning $50-75K, all of which are far off base, and apparently just because I’m white and wearing khakis.
Completely wrong on the political side (maybe because I'm not US-based), but otherwise not bad at all:<p>- astonishing geoguessing<p>- very good inference of some character traits<p>- and finally quite good ad targeting<p>EDIT: I tried with a few photos (different people in various settings) and <i>each time</i> I got this: "racial bias towards immigrants" - which was always very false. Intriguing.<p>EDIT2: different photos of the same person (me) in different settings give many totally opposed characteristics. Very unreliable, but I guess with several photos (a lifetime's photos in the case of Google) it's another story.
I uploaded a picture of a board game table from when I was playing with friends last week. Other than the obvious (they’re friends who like playing board games) it got literally everything wrong. It was in Asia and it guessed a western city. The ethnicity, sexuality, median income, and political views it ascribed to us were all hilariously inaccurate.
What I’ve learned today is that, apparently, I do <i>not</i> look like a person who earns as much as I do, across several different pictures.
Lazy. It just spits out demographic info.<p>I will say I am impressed with Instagram advertising me music software. I really like making music and I’ve bought quite a few things off of Instagram ads.<p>That’s advertising done right!
I don't know much about the Google Vision API that it claims to use, but uploading the same passport photo of mine twice produced wildly different results in the "data" tab.<p>There are fields like interests, income, and biases/prejudices which vary the most, so I assume that's just the site pulling things from its own database of racist stereotypes?
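For what it's worth, the raw Google Cloud Vision API only returns things like labels with confidence scores; the horoscope prose would have to come from a generative model layered on top. A minimal sketch of the real call, assuming the site does use it:

    # Requires google-cloud-vision and GCP credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    with open("passport.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, round(label.score, 2))  # labels + scores, no prose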
This feels like a data scraping honeypot...
Ente Photos competes with Apple Photos and Google Photos. It is also more open: you can share an album link with someone and they can add their photos without signing up.
The myriad trackers on every website provide much higher-value signals than this LLM guesswork.
Is this one of those things like the old Facebook games where people were answering personal questions that can be used to guess all kind of secret answers for taking over accounts? Are people actually uploading real photos of themselves here?
I uploaded fantasy pictures which had amusing results ;-)
If they really see it like this site shows, then I'm all good, as it got everything wrong besides my skin colour.
This is a promotion for <a href="https://ente.com/#pricing" rel="nofollow">https://ente.com/#pricing</a>
About as accurate as a horoscope. Sherlock Holmes it is not.<p>It got the location (exif, I guess) and was able to identify that I was a balding mediocre middle-aged guy, but the more specific it got the more wrong (and insulting) it was.<p>"He appears tired and introspective. He may exhibit biases such as confirmation bias, anchoring bias, in-group bias and out-group bias. His interests could involve reading, hiking, and programming, coupled with less constructive activities like smoking, excessive drinking, and gambling.<p>This individual seems to possess low self-esteem, exhibits introversion, a lack of emotional stability, and low self-control, making them susceptible to targeted advertising."<p>Thanks a fucking lot, robot.
You inferred this from the photo?<p>> He is presumed to be agnostic, heterosexual, and politically aligned with the Democratic party
If you believe this study [1], humans can guess party affiliation at least slightly better than random chance from images alone.<p>Or [2] is an (unscientific) exploration from the other direction, prompting image generation models to make images of republican and democrat voters, with very different results<p>1: <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC2807452/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC2807452/</a><p>2. <a href="https://rooftopsquad.com/democrat-vs-republican-faces/" rel="nofollow">https://rooftopsquad.com/democrat-vs-republican-faces/</a>
Presumably, everything you have done publicly (and hence your personality) exists somewhere in the big Google neural network. It gets compressed into some of the many billions of weights. It might be hard to decompress it into useful information, but it is there nonetheless. Just showing your face might trigger and activate some layers in there.<p>Just my hypothesis.
> politically aligned with the Democratic party<p>That's <i>sometimes</i> possible (e.g. the "Trump woman" look, or certain stylistic cues mainly displayed by progressive women that I can't really articulate). Polarization has turned political alignment into subculture, and members of subcultures often dress certain ways (and not necessarily deliberately).
Very generic. Shows the same obvious info for two completely different people. "interests in tech... struggles with procrastination...".
Also funny - red shirt means you are Labor (in Australia).
It just describes what's in the photo and then some completely wrong/random facts about self-esteem, income, religion, etc.
Not too great. I submitted a photo taken of me at graduation. It got the location totally wrong and was around 50-70% accurate on my hobbies and interests. It was able to correctly guess my sexuality and ethnicity, which is rather unsurprising.
It labeled every single person in my area as having “confirmation bias and in-group bias”
Execution may have a few blind spots and mistakes, but I get the idea behind the message. I’m sure the big companies have a lot more data and ways to do this better too.
Seems like nonsense to me. I'd love to see the prompt. From one of the sample images:<p>"They likely share an agnostic worldview and identify as heterosexual. Their clothing is casual, and their interests revolve around skateboarding, music, and hanging out. Given their age and attire, they likely lean towards a liberal political affiliation. They display signs of classism and ageism, with potential for racial profiling and stereotype threat." - Wow, really?! Were the system instructions asking to be as judgmental as possible?<p>Also it's a blatant ad considering the source.
* It's an ad / undisclosed conflict of interest. (They appear to be a photo hosting site in competition with, e.g., Google Photos.)<p>* The TOS deigns itself to claim forced arbitration over you.<p>AFAICT, it's just running uploads through an AI. I don't think the actual Google product has these features; we've just asked an AI to hallucinate the biases of two people sitting under a tree, but now (this is <i>according to the actual linked site</i>) they're probably lesbians.<p>I.e., the likely thing here is that the (undisclosed) prompt that generated this is from them, not from Google. Showing your work goes a long way towards building the trust that this isn't simple fear mongering, and while I think there's a good argument for being careful about what one uploads to a corporation on the Internet, "upload to <i>this</i> corporation instead" feels like a "fool me once…" type of solution.
They wax poetic, but our AI overlords are not quite here yet.
Like others have pointed out, it reads like a horoscope. The example images give a reasonable approximation of what I'd profile them as too, but after trying a few of my own pictures it's clearly BS. Garbage in, garbage out.<p>This "use LLMs as psychometric/political polling substitutes" idea seems to have jumpstarted a weird cottage industry of "synthetic" surveys. The model is pattern-matching on superficial visual cues and dressing it up as insight (I have a long beard and hence I vote for the green party).<p>Nate Silver put it well recently: [AI polls are fake polls][1].<p>An LLM inferring personality from a photo is even further down that chain of abstraction. That's not profiling, it's stereotyping with extra steps.<p>[1]: <a href="https://www.natesilver.net/p/ai-polls-are-fake-polls" rel="nofollow">https://www.natesilver.net/p/ai-polls-are-fake-polls</a>
I tried and apart from correctly identifying what the person in the photo was wearing and probably doing, it was wrong on all counts.
Me too, but I'd be curious to see what it does with an entire library, as the website suggests. I found the demonstration of the intent to be interesting, regardless.
This is very silly. It's just a combination of that Derren Brown astrology experiment [1] and madlibs.<p>[1] <a href="https://www.youtube.com/watch?v=haP7Ys9ocTk" rel="nofollow">https://www.youtube.com/watch?v=haP7Ys9ocTk</a>
> Interests: Hiking, photography, travel, gambling, substance abuse, binge-watching<p>> Biases: Ageism, fatphobia, colorism, classism<p>excuse me?
From the generated descriptions of the sample images, it looks like their prompt is asking for a vaguely center-right gloss on demographics? Is that the agenda of the site operators, or an appeal to the presumed paranoid libertarian site visitor?
This tool doesn't work well at all, it identified some "low income" people and then said it recommended them Patagonia clothing???<p>Also, the people didn't look "low income" at all but they were black, so maybe this tool is also racist.
I uploaded a picture of my growing bald spot (sigh…). The only thing they were able to see was the EXIF location data, which I already knew about.<p>I am sure something involving my face would be more scary but I kind of don’t want to provide someone else training data of my private photos.
I uploaded a picture with poor lighting, wearing dark clothes. It got almost everything wrong...
Reading, coding, martial arts, substance abuse, illegal hacking, violent thoughts
Interests<p>Attending punk shows, street art, urban exploration, drug use, vandalism, recklessness
Except for guessing the right continent (not that remarkable), mine is so majestically wrong that I would either dislike or fully hate all of the products I got recommended.
This is obviously dishonest fearmongering, but I kinda support it if it helps non-tech people develop a sense of the type of private information tech companies are trying to collect.<p>But it's clearly bullshit.
Yeah, astonishingly wrong in just about every way (except race). If anything, I think I'm less worried than before.
correctly guessed I was funny
Doing a 20-way parlay with the averages of each country will almost always fail; this product is shit.
The results were laughable. AI certainly has useful applications, but for many tech-inclined folks it has become astrology combined with slot machines.
Now combine it with ThisPersonDoesNotExist.com
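E.g. something like this, assuming the site still serves a fresh synthetic face on every request:

    import requests

    # Fetch a face that belongs to no one (assumes the site still returns
    # a new generated image per request), then feed it to the profiler.
    resp = requests.get("https://thispersondoesnotexist.com")
    resp.raise_for_status()
    with open("nobody.jpg", "wb") as f:
        f.write(resp.content)
    # upload nobody.jpg and enjoy a detailed demographic profile of no one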
"i have approximate knowledge of many things"
that was complete bullshit lmao
Mine was way off! I uploaded a photo of myself reading a book outdoors in Ashkelon (Israel). It got everything other than my religion wrong--and even that was half right. I'm Sefardi. And it should know that Jews don't shave with razors! (See <a href="https://en.wikipedia.org/wiki/Shaving_in_Judaism" rel="nofollow">https://en.wikipedia.org/wiki/Shaving_in_Judaism</a>) It recommended products including "Dollar Shave Club" -- to a Jewish man with a very full and long beard.<p>I think this "technology" is a big nothingburger.<p>And "Low Self-Esteem" Ha! I love myself.<p>> The man appears to be of Ashkenazi Jewish descent, possibly with an income range of $50,000 to $80,000 USD. It's plausible he identifies with Judaism, with a heterosexual orientation and potentially leaning towards a liberal political stance. He might harbor social biases related to ageism and classism, as well as racial biases stemming from cultural differences and stereotyping. He wears an expression of thoughtful interest, clad in casual attire. He might have interests in reading, learning, and spending time in nature. Conversely, he may dislike activities like excessive consumerism, engaging in superficial social interactions, or feeling pressured to conform.<p>> The person seems to have low self-esteem and average emotional stability hence we can target them with self-help and social networking type of products and services, such as guided meditation apps like Headspace, confidence-boosting courses like Skillshare, online therapy like Talkspace, and motivational podcasts like The Tony Robbins Podcast, and also personal grooming products such as Old Spice deodorant, Dollar Shave Club razors, Clinique skincare, and Levi's jeans.
> The man appears to be Caucasian, with an estimated income range of USD 50,000 - USD 75,000.<p>> These people seem to have low self-esteem, is slightly introverted, has high emotional stability, is not very adventurous and does have some self-control hence we can target them with wooden puzzles, adventure novels, travel products, personalized houseware, such as Melissa & Doug Wooden Puzzles, Penguin Classics Adventure Novels, Osprey Travel Backpacks, Viski Personalized Whiskey Glasses, credit cards, life insurance, home internet and streaming services, such as Capital One Credit Cards, State Farm Life Insurance, Xfinity Home Internet, Netflix Streaming Services.<p>Hahaha no, what the fuck. Every part of the response was wrong except the objective race/clothes/setting.
So we're racist just because we're white but we also supposedly vote for the 'liberal' parties which call us all racist because we happen to be white. We also have low self-esteem, are introverts and more of such nonsense. All that from photo showing profiles in a forest with the sun casting rays between the trunks, no faces visible, no EXIF data in the photo. Oh, we're also supposed to be in Nova Scotia.<p>The only thing it got correct is the fact that we're white or 'Caucasian', insert the currently mandated term. The rest is total nonsense. They insist they can target us with ads for ecological dog food and other pet paraphernalia. Good luck with that, we tend to block all ads and our photos are not stored anywhere within reach of these data parasites.
I've fed it a few photos. The conclusions were absolute trash. Well, it did distinguish skin color and body build type. Congrats /s