
A childhood friend committed suicide. I found out, and within a day or so my science-curated youtube feed suddenly showed me, in the top spot, a totally out-of-context video with content that mocked my friend's life.

It appeared as though google/youtube algos used the data point about my friend's suicide to perform a psychological experiment on me. At the least, this was amoral, but appeared malicious to me. I fail to see how it is ever appropriate to customize content based on the suicide of a friend.

I would like to ask anyone here on hn who has the appropriate authority at Google to briefly investigate this incident: why did google/youtube use the suicide of a childhood friend as a data point for personalizing content, what was the intention, and has this changed? Or perhaps it would help to have an explanation of why my understanding of this incident is incorrect.




The simple answer is that no algorithm is that smart. It's just picking up on your "interests" and trying to tailor content accordingly. AFAIK people do try to tune recommender systems to avoid this kind of thing, but it's an intractable problem with today's state of the art.
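To give a feel for what I mean, here's a toy sketch. It's entirely my own invention (the topic names, weights, and half-life are made up, and real systems use learned models rather than literal topic buckets), but it shows how recency weighting in an interest profile can let a single fresh signal outrank weeks of steady viewing:

    from collections import Counter

    # Toy interest profile: recent signals are weighted more heavily,
    # so one fresh event can outrank weeks of steady viewing.
    HALF_LIFE_DAYS = 1.0

    def decayed_weight(age_days):
        return 0.5 ** (age_days / HALF_LIFE_DAYS)

    def build_profile(events):
        """events: list of (topic, age_in_days) -> topic weights."""
        profile = Counter()
        for topic, age in events:
            profile[topic] += decayed_weight(age)
        return profile

    def rank(candidates, profile):
        """Rank (title, topics) pairs by overlap with the profile."""
        return sorted(candidates,
                      key=lambda c: sum(profile[t] for t in c[1]),
                      reverse=True)

    # Weeks of curated science viewing...
    events = [("physics", d) for d in range(3, 30)]
    events += [("astronomy", d) for d in range(3, 30)]
    # ...plus one fresh signal, from wherever it was picked up.
    events.append(("suicide", 0.5))

    profile = build_profile(events)
    candidates = [
        ("New black hole imagery", ["astronomy"]),
        ("Quantum entanglement explained", ["physics"]),
        ("Tasteless video touching on suicide", ["suicide"]),
    ]
    for title, _ in rank(candidates, profile):
        print(title)

With a short enough half-life, the one fresh signal (~0.7 after half a day) beats either science topic (~0.25 total, decayed over weeks), and the tasteless video takes the top spot. No malice required, just aggressive recency weighting.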


My youtube feed had been purposely curated to two science topics for weeks prior. How does the suicide of a friend factor into an "interests" algo that all of a sudden overrides my customizations? I was not doing any suicide-related searches at that point, but I did learn of the suicide via SMS from someone who uses an Android phone (the likely infiltration point for google to initially gather the data point User.FriendsSuicide.WasInformed).

I'm willing to accept the innocent explanation but not on conjecture. This was blatant and fucking hurt me badly and I haven't recovered since. It was the worst possible gaslighting.

Recently: tweeting about my suicidal ideations and having twitter respond with personalized trends such as "He's dead" and "Rope" while shadowbanning me doesn't look good.

Also recently: Facebook showing me a long string of county sheriffs as "people you may know" when a guy I knew long ago was scheduled to be released from a long-term federal stint doesn't look good either. It makes it clear that I've been on an "associate of bad guy" list since Season 6 Ep 12. Is it possible that some kind of social-credit algos have been deployed by American tech companies, in a limited fashion? I suspect the architectural arrangements at big tech companies offer compartments for performing limited use-case experiments totally out of view of the rank-and-file engineers. By design, those outside that clearance zone won't even know it exists. "CIA middleware" might be the appropriate term.

It seems unlikely to me that the google incident was just a random fluke based on my "interest" in learning of the suicide of a childhood friend. However, only google has the logs, data, and algorithms. I find it unlikely that Google will ever give a straight answer, but to me it's evidence that Google's data and algos need to be turned over to the Justice Dept. A big part of Google's power is their impunity to perform such experiments with total deniability. They are worth a trillion dollars, yet any damage, incidental or intentional, inflicted on a user goes totally unchecked.

Google is loyal to google/shareholders. They don't give a fsk about me, and I fully expect that no response is forthcoming. There's no accountability, even as these services start to look more like public utilities. If it isn't transparent, then any such malice can easily be played off with statements claiming that some kind of intelligence is needed for an algo to be malicious.

An algo doesn't have to be "smart" to gaslight a person. All you need is their personal data, especially intimate data such as:

* Childhood bully's name

* The name subsequently taken by your ex-wife, whom you were heartbroken to lose

* Suicide of a childhood friend

* Other personal details, such as siblings' names

Worse yet are rumors. An algo can use info related to rumors to gaslight a person: just use keywords from the rumors for personalized content. None of this requires any complex algos, just incompetence and/or malice.
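To illustrate (hypothetical names and data, obviously, since only google has the real code), a dozen lines of keyword matching is all it takes:

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "in", "was", "so"}

    def keywords(text):
        """Crude tokenizer: every non-stopword is a signal, meaning-free."""
        return [w for w in re.findall(r"[a-z']+", text.lower())
                if w not in STOPWORDS]

    def personalize(signals_text, catalog):
        """Rank items by raw keyword overlap with the user's signals.
        Nothing here distinguishes a hobby from a trauma or a rumor:
        a bully's name, 'suicide', or a rumor's wording are all just
        tokens to be matched."""
        profile = Counter(keywords(signals_text))
        return sorted(catalog,
                      key=lambda item: sum(profile[w] for w in keywords(item)),
                      reverse=True)

    # Hypothetical harvested signals: messages, searches, contacts, etc.
    signals = "so sorry to hear about the suicide ... physics lecture notes"
    catalog = [
        "Black hole imaging breakthrough",
        "Edgy video mocking a suicide",
        "Quantum entanglement explained",
    ]
    print(personalize(signals, catalog))

The "edgy" video wins on a single matched token. That's the whole trick.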


I'm sorry to hear about your friend, and that you've had some bad experiences with recommendation systems recently, whether it be on YouTube, Twitter, or Facebook.

I also had a similar experience after a friend was killed in a home burglary, and my news feed started pushing all the latest updates on the case as well as similarly tragic news stories. For a long time, these recommendations were a painful and unwanted reminder of my friend's murder. Even worse, the algorithm kept pushing increasingly extreme and fear-mongering content about home invasions, all to get my clicks, perhaps unintentionally radicalizing me in the process.

As for why this type of thing happens, I think this part from your last point is pretty much right on the mark: "none of this requires any complex algos". All it takes is data on your browsing habits, the webpages you visit, the keywords you search, the webpages your friends and contacts visit, etc. There don't need to be any database fields explicitly encoding this information, which is why it's so worrying: such a system can push personalized traumatic content at global scale with precision, all from keywords and algorithms intended to maximize click revenue.

The other side of the equation is the content that gets recommended. I'm not sure who created the YouTube video that mocked your friend's death, but in my case I was pushed content from typical local news outlets, then from less reputable outrage content farms, and finally from even more despicable sources, like white supremacist sites. That escalation is what really disturbed me about the experience. I've seen uncannily accurate product predictions before, but usually the third party was relatively reasonable; I hadn't ever been recommended radicalizing propaganda, not to this degree. I've read articles about some of the operators of such sites, and they often don't even believe the content they're pushing. They know it triggers something in vulnerable people, and they exploit that for profit. That whole ecosystem seems pure evil to me. At least the recommender systems work most of the time, usually just showing me stuff related to my hobbies, cute cats, etc.
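If you want to see how that escalation can emerge with no one intending it, here's a little simulation I put together (the pools and click rates are invented, and real ranking systems are vastly more complex, but the feedback loop is the same shape): an epsilon-greedy bandit that only ever optimizes observed click-through will drift toward whatever pool clicks best, with no notion of harm anywhere in the loop.

    import random

    # Hypothetical content pools with made-up click-through rates;
    # the premise is just that outrage draws more clicks.
    POOLS = {
        "local news": 0.02,
        "outrage content farm": 0.05,
        "extremist site": 0.08,
    }

    def simulate(rounds=5000, epsilon=0.1, seed=1):
        """Epsilon-greedy: mostly exploit the best observed click rate,
        occasionally explore. Harm never enters the objective."""
        random.seed(seed)
        shows = {p: 1 for p in POOLS}   # start at 1 to avoid div-by-zero
        clicks = {p: 0 for p in POOLS}
        for _ in range(rounds):
            if random.random() < epsilon:
                pool = random.choice(list(POOLS))
            else:
                pool = max(POOLS, key=lambda p: clicks[p] / shows[p])
            shows[pool] += 1
            if random.random() < POOLS[pool]:   # simulated user click
                clicks[pool] += 1
        return shows

    for pool, n in simulate().items():
        print(f"{pool}: shown {n} times")

Run it and the extremist pool typically ends up shown the most, purely because it "clicks" best in the simulation. Nobody had to decide to radicalize anyone.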

Anyway, the combination of incomprehensible recommender systems and toxic content on the web is definitely not good for anyone's mental health. Hope you feel better in the future.


sorry for your loss. i totally see where your comment is coming from but i do not believe that youtube deliberately ran an experiment on you.

youtube, along with most social media, doesn't know how to handle the dead.

it always pains me to see videos of dead youtubers show up in my feed. i highly doubt that the "monetization" money goes to the families. and i can't bring myself to unsubscribe from their channels either.

it's a weird and sad conundrum.


thanks. may i ask for a source or are you speculating?



