The scam YouTube ads are getting better (reddit.com)
73 points by sorenjan on Jan 24, 2024 | hide | past | favorite | 85 comments



It reminds me of the fact that the "Fake Mr Beast giveaway" ads that raised some attention here on Hacker News [1] a while ago are still around. In fact, I saw one yesterday. Those must have been flagged as impersonation and scams thousands of times by many people, including me personally, and YouTube finds them perfectly fine.

After that episode, where I tried to get rid of them myself, I am much more convinced that YouTube is fine with all but the worst scammers, and I don't buy any of the "they're just low on manpower" arguments anymore.

[1]: https://news.ycombinator.com/item?id=34943047


I’m sure big-budget scams pay great CPM! You’re thinking too customer-friendly


> thinking too customer friendly

gentle reminder that 'customer' for youtube _is_ MrBeast / CPM scammers / et al

you and me (users) are the 'product'


Last year, for weeks, I was seeing ads for these fake Mario games on YouTube. And it wasn’t just a Mario look-alike: they called it “Super Mario” and had wholesale asset rips in the graphics.

Nintendo lawyers were surely all over that. I’m shocked it took so long to get them removed.


YouTube should be indicted for fraud. They are publishing this stuff, knowing it and sharing in the profits.

Same goes for X, btw.


At some point, platforms should be held responsible for the ads they run and the damage they cause. If an ad is to be broadcast to thousands of people, or worse, targeted at groups of people, it's reasonable that it must be approved and vetted by the platform. The cost of such vetting, even if done manually by a human, is likely a fraction of the cost of creating an ad anyway. If necessary, Google can charge advertisers an upfront cost.

Edit: grammar


That should apply across the board, really. Why are you suddenly, as a platform holder, not responsible for the content you host if money changes hands for you to host it? That makes no sense: you are still the party putting that content in front of an audience, and as part of a contractual arrangement, no less.


Not only do they receive money for hosting the ad, they actually select the potential scam victims for the scammer and preferentially display the ad to the people that are algorithmically determined to be most vulnerable to the scam. It hasn't been tested yet, but I believe YouTube could be held civilly or criminally liable for the underlying fraud.


What is also frustrating is that it seems very easy to get takedowns via the DMCA. These ads, however, seem to stay up longer than necessary.


>...platforms should be held responsible for the ads they run.

Yes! If the platform hosts its own ads on its own platform, then I'm OK with that. I don't like ads, but then they 'own' the ad content, and if it blows up, there's no one else to point the finger at.

Show an ad from some 3rd party and I have no qualms in blocking that.


Why aren’t scams easily prosecuted in this case?

We should at a minimum have financial information and contacts for the person who created the ads.

Are all governments sitting on their hands because it’s happening on the internet?


Probably because no one takes it seriously if you get scammed out of $50 on youtube.


I think a good way to do this would be a triple poison pill approach. Google et al. are capable of policing the advertisements on their platforms; there is just no business will to do it.

The triple poison pill approach:

* Every user report of a scam ad must be reviewed by an independent barred lawyer for a minimum of 15 minutes of actual time.

* If a person (institution, etc.) with a public image is shown in the scam ad, they must be notified, and the advertiser and platform should share joint liability.

* If a person falls for a scam, they should have recourse against both the platform and the advertiser.


If you think this is bad (and it is), know that Facebook kept showing me, for months, ads from scammers pretending to be... Facebook!

This only shows how bad the processes (including algorithms) these companies have developed to fight these ads really are.

Is this by design? After all, it was clear these campaigns were pouring good money into the operation, as they ran non-stop for a long period.

Even more troubling, reporting the offending ads multiple times didn't help either.


Why would Youtube care to police bad actors? They are taking this money directly to their quarterly revenue.


Creators don't want scam ads on their videos. It hurts YouTube's image, which hurts their bottom line.


There isn't really anywhere else for creators to go, so they don't have any leverage. If you're mad at YouTube, you can move to an alternative and get 100-1000x less traffic and revenue. And some other obnoxious creator will take your spot on YouTube immediately.


YouTube still has a brand image to uphold for users. Scam ads make it look cheap.


You would need a mass exodus of top YouTube creators to make the slightest dent on YouTube's bottom line. Meanwhile, any top YouTube creator leaving YouTube (to what alternative?) isn't hurting his or her bottom line, but rather nuking it from orbit.


It's not a sudden mass exodus they care about, but a slow decline in their brand's perception for users. Scam ads make YouTube look shady.


You're making it out to be a decision, but it's not. Creators can upload their content on several platforms and collect revenue from all.


In that case the effect on YouTube's bottom line would be even smaller, if any. Creators posting their videos on other platforms would have a negligible to no effect on YouTube, given that it would remain the default (read "only") video platform for the overwhelming majority of the viewing public. A creator announcing "You may also see my videos on DailyMotion/Vimeo/Twitch/PeerTube/wherever" would simply be irrelevant to their mass audience, who would react anywhere between "Ok, whatever" to "I don't know what that means, ignore".


Just because they can collect revenue from other platforms doesn’t mean it will be sufficient or anywhere near what they were getting at YouTube.


Of course not, but it is a hedge against any decrease in revenue from YouTube, as speculated here. A hedge that doesn't cost anything.


The act of publishing on PeerTube alongside YouTube (or even one day earlier!) is already a good protest in itself.


The only reason I can think of is so they can keep up their "we need to police bad actors" image necessary to maintain the moral high ground when they mistreat creators for the sake of sanitizing their platform.


Hold YouTube responsible for the content of their advertising. If they advertise scams and people end up scammed, IMO, they should be held legally culpable for that. They played a role that they were paid for.


I'll happily continue using an ad-blocker on YT until it stops working completely.


It is not that AI is getting better; it is that Google is getting greedy. Google accepts money from any random business.

Not everybody should have the ability to distribute such scams through YouTube, where they are broadcast everywhere.

Ad reach should be decided based on the reliability of the advertiser.


A long time ago, my grandmother got a call saying that I was badly hurt and needed money, that I was afraid to contact my parents because I had been doing something I shouldn't have, and that all my grandmother needed to do was send money to this hospital, which was just the scammers' fake information.

I hate scammers. The idea that you will now be able to create videos that interact and respond to questions, as if there were a real "grandson" on the other end of the video call, is disturbing. I don't know how we deal with that.

Legit question... how do we deal with that?


Instruct (pun not intended) your relatives to always hang up, call you back, and use something from your past as a test: some story from your life that is family-only knowledge. And explain that you aren’t the type who gets into “money situations”, and that in general “money from relatives now, or else” is not how even the less legal part of the world works.

[Grand]parents take these calls seriously because they usually don’t know anything about these extremes, so they can’t counter the scammer’s pressure. Information is fortification.

Also, add them to your do-not-disturb bypass list if you haven’t already.


Educate. Make up some kind of information that is easy for legitimate persons to exchange but difficult to look up online. In the case of parents/grandparents, it can be information from their lives which isn't available online (good luck finding the town my mother was born in online!).

We have used this in our family for generations: anyone ringing our doorbell and asking to be let in needed to know the full names of my siblings. We have a similar mechanism in place for our kids.


Would it make sense for scammers to bother doing the research needed for that? The scam works by itself without those things. My grandmother has gotten that call many times and they can't even be bothered to figure out the names of any of the grandkids. Learning the likenesses and collecting voice information is a whole different animal.


The type of attack you're concerned with doesn't scale. The scammers would first have to collect your likeness to use to make the video or audio.

It _would_ work for the more targeted attacks. People will have to establish passphrases.


Passphrases probably don't work well with the target demographic that most needs them (elderly people). Maybe teach them to verify through another channel instead, i.e., hang up and call back at the number you usually use. That's what we just implemented at my company: if one of our CXOs (video) calls or writes to authorize a transfer of funds, politely say you will, hang up, and call the CXO in question back at their regular number to verify.


You could probably be visiting your grandparents and physically standing right in front of them when the scammers call and pretend to be you, and the grandparents would still believe the scammers more than they believed you. Elderly people have an eerie respect and reverence for telephones and letters.


This scam has been around far longer than deepfake audio/video.

~20 years ago my grandparents were close to sending money to someone claiming to be my cousin asking for bail to get out of jail.


Keep anything that can copy your identity from the open web, especially socials.

Only use services that have privacy in mind.

Educate your family on how to check for these things, like asking personal questions, or having a safe phrase.


If only this advice had been available 20 years ago. Instead, everyone volunteered that information by taking quizzes asking for the name of the street you grew up on, the name of your first pet, and the details of your first car.

Oh, wait, that advice had been around for years before the internet even, but for whatever reason people collectively lost all reason when social media came around and just did the dumbest things.


“Nah, man, you’re just paranoid!”

My buddy recently learned about AI, something clicked in his mind (not sure what), and he asked me, absolutely seriously, to delete everything of his from the internet. Who’s paranoid now?


I think this is the source video: https://www.youtube.com/watch?v=urVolzJS_Ws

It also explains why she strangely looks past the camera as the original was framed as an interview with the camera below eye level.


This is going to get worse (better) outside of YouTube too...

Imagine a scammer calling grandma and recording a few minutes of her voice, and then using that to generate a fake voice to scam you in realtime.

I imagine that we'll need to come up with safe words or security questions to verify we're talking to our own families at some point. People in high-risk scenarios already do this sometimes: they'll have a distress signal that indicates to someone on the other side of the call that they're in trouble, like "don't forget to feed Wolfie" when their dog's name is Rex.


Any particularly good scammer will scam grandma out of her safe words pretty quickly. Defenses like this don't scale well, at least for people who have not grown up with them.


[0] embeds a YouTube video in a local HTML file; AFAICT there are no ads on locally embedded videos.

[0]: https://github.com/luvchurchill/NoAds


Not tested extensively, but that seems to work. It can be wrapped in a bookmarklet too, so you are a ctrl+shift+b-then-click away from turning a standard YouTube page into a simple page with the video embedded:

    javascript:(function(){ var id = new URL(document.location.href).searchParams.get('v'); document.body.innerHTML = '<iframe width="420" height="315" src="https://www.youtube.com/embed/' + id + '" title="Embedded YT" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>'; })();
I might have to spend some time tinkering to improve it: make the default size match the current viewport dimensions as closely as possible, etc.
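For the curious, the URL-to-embed mapping the bookmarklet relies on can also be sketched as a small standalone function (hypothetical name embedUrlFor); this version copes with watch URLs that carry extra query parameters like &t=42s or &list=...:

```javascript
// Map a YouTube watch URL to its embed URL.
// Uses the standard URL API rather than string splitting, so extra
// query parameters (&t=42s, &list=..., etc.) don't corrupt the id.
function embedUrlFor(watchUrl) {
  const url = new URL(watchUrl);
  const id = url.searchParams.get("v");
  if (!id) throw new Error("no v= parameter in URL");
  return "https://www.youtube.com/embed/" + id;
}

console.log(embedUrlFor("https://www.youtube.com/watch?v=urVolzJS_Ws&t=42s"));
// → https://www.youtube.com/embed/urVolzJS_Ws
```

A bookmarklet would then just set document.body.innerHTML to an iframe whose src is embedUrlFor(document.location.href).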


Also AFAICT it doesn't show up in your youtube history, if you care about that kinda stuff.


I think the only way to "protect" people from this is to educate them. Similar to Nigerian-prince spam emails, these will need to become common knowledge, and people will need to learn to ignore them.

The one thing that won't work is banning AI, because the people who won't care about the ban are the same ones stealing money using AI.

But where is the "Report Scam" button? Facebook doesn't have it, Twitter doesn't have it and neither does TikTok.


I wonder if there's anyone here who would know what would happen if this same ad were played on broadcast television. The public backlash would of course be bad, but I'm curious about the FCC angle. There seem to be different rules there, and internet-based communications are ignored by traditional regulators. Does Section 230 cover advertisers too?


YouTube, Google, Facebook, and Instagram (i.e. Alphabet and Meta) should be prosecuted and fined for every scam ad they publish on their networks, and all responsible people, from CEOs down to managers and directors in the advertising division, should serve prison time for their crimes. It's incredible how lax prosecutors and law enforcement are about these frauds.


I've been lurking on this sub for a week, searching the internet, reading every bit of information available, and trying to find or contact any HUMAN who can actually help. However, I'm beginning to realise that my problem will not be easily solved. I was referred to Marieconsultancyoz@gmail com. I'm one of the thousands of Instagram users affected by this farce. Also on Instagram under the handle Marie_consultancy. You can also get in touch with her. She successfully unlocked my Facebook and Instagram accounts, providing me with appropriate and sincere services.


As far as I see it, all advertisements are scams of some form, intending to persuade (or con) you into doing or buying something you probably don't want to. By nature, their sole intent is to extract dollars from you in whatever way they can, so how is that ever going to end well on the receiving end?

Just use an ad blocker; uBlock Origin and SponsorBlock work great for me. I never see an ad anywhere, on either desktop or Android, with the Firefox extensions. If you insist on using an iPhone/iPad, well, thank Apple for giving you no option but to let the world cram ads down your throat.


Everyone focuses on how scamming and AI are evolving. I'm rather curious how this will affect marketing. Celebrity endorsements are useful for building trust; I wonder if we'll see more or less of that in the future.


Are celebrity endorsements actually useful for building trust? Usually I view them in a negative light, thinking that they just paid someone to promote the product, rather than it being a genuine and organic endorsement.


I think in exactly the same way as you do, but if it didn't work at scale, corporations wouldn't do it. Most people are falling for it.


Even if 'most' people aren't falling for it, enough people are falling for it that it's very profitable.


> Are celebrity endorsements actually useful for building trust?

I can't imagine this is highly controversial. Of course celebrity endorsements can be useful for building trust; this is supported observationally and by years of research in marketing psychology. It's not even necessarily limited to megastars: even endorsements from niche influencers and small-fish content creators can be marketing goldmines, never mind the kind of overnight tailwind your product can get if someone with the reach of Oprah, Rogan, or Musk starts pushing it.


Personally, I don't know specifically about building trust, but your average person seems to be more likely to buy a product if a celebrity they like is endorsing it. At the very least, a lot more people are going to /hear/ about whatever product some celebrity is endorsing.


Agreed. I've never seen a celebrity endorse something and thought, "Ah, they clearly love this product, so it must be good!"


It works incredibly well, that's why it keeps happening. Most people do follow pretty much exactly that thought process.

Merely by participating on this forum etc etc etc


That's fair. I just don't get it. It's obvious to me that it's paid for and my assumption is that the person probably hasn't even used the product. So for me, yeah it doesn't work. But you're right, there's a reason these celebs keep getting insane payments for these ads.


I wonder if the rise of deepfakes will finally be the moment that gives us all sufficient incentive to adopt a real, cross-platform authentication infrastructure of some sort (public key auth, etc). There's real value in, say, the NYT being able to authenticate that a reporter's social media presence is legit, and for that reporter to be able to sign and take responsibility for their photographs. Or using your example, some sports star authenticating an ad featuring them. Unsigned media would be considered suspect. This could all be surfaced in a digestible way to users, like we have done with the HTTPS lock icon. Dunno. Probably not, but maybe.


You have to look at the incentives versus counter incentives.

Who is going to be in control of this authentication infrastructure? Private companies? Where do they get the information that is trusted? Other private companies? Governments? Why won't these social media companies try to make their own private infrastructure so they can remain in control rather than potentially allow people to leave to other platforms? How will revocation work? Will the browser have to implement support?

Also: How long before authoritarians demand we sign our stuff?


There's a bunch of different PKI-style schemes, most of which don't require trusting a single central authority. I thought KeyBase's approach struck a good balance between authentication strength & ease of use, maybe something like that could be a good fit.

The incentive would be for online platforms, especially social media, to remain a place people want to be and can have some measure of trust in, rather than be overrun by deepfake-style content.


It is funny how the uncanny-valley nature of this can almost be written off as low-bandwidth compression artifacts. Otherwise, it's still just strange enough that someone remotely informed about deepfakes would raise an eyebrow. Luckily for the scammers, that's a low percentage of viewers and totally not their target demo. It feels like the same thing as the horrendous grammar/spelling used as a weed-out, so that the only people responding are the gullible.


The original video (linked in the comments here) is also uncanny valley, because Jennifer Aniston's IRL face is artificial (plastic surgery).


Is it a demise like Twitter's? I mostly see scam ads in both places; does that mean proper advertisers have given up on them?

And at what point should we start making advertising companies like Google, Meta, etc. responsible for what they show? They will of course claim it's not feasible to monitor and approve each ad. But they're quite happy to take the money and show them.


> Is it a demise like Twitter's? I mostly see scam ads in both places; does that mean proper advertisers have given up on them?

I'm wondering if it's everywhere. I've also seen a noticeable decline in quality of the ads on the New York Times app (including some weird ones, like "40 ideas for dentist signs" or something). I don't click on them, so I don't know if they're scams. I vaguely recall that a few years ago I'd never see stuff like that, and mostly see prestige brand ads (e.g. luxury designer clothes/watches/jewelry).

In the last year or so, it seemed like 100% of their banner ads were the ACLU spamming the same messages, over and over. So maybe the bottom fell out earlier, and the ACLU has just decided to stop spending money on it.


I'm guessing "proper advertisers" won't lift a finger unless one of the scams impersonates them or does something that directly affects their day-to-day.

I wonder how cooperative YT will be when Aniston's lawyers demand information on the advertiser using her likeness without permission.


We are in for a hell of a year.

Talked about this today with a friend after seeing a few different ads designed to look like news stories from the CBC (which also included interviews with Elon Musk). The audio (voice recreation) and lip sync are getting really good, really fast. But this is more a question of YouTube not having mechanisms to prevent (or at least quickly remove) blatant impersonation, especially of news sites. Unmoving news logos in the lower third seem like ideal candidates for identification/verification.


I am curious to see how those facial liveness check systems will adapt to this reality. They are a key pillar in the KYC and fraud prevention process in the fintech space.


I'm not entirely sure what you're referring to but the closest thing I'm aware of is video calls in Germany where you are asked to show your ID card. Those certainly wouldn't be easy to fool without being able to do this at runtime and consistently showing an altered ID with all safety features intact. The agent will give you specific instructions on how to hold your ID and where to put your face while also switching between front and rear cameras and using the camera light. They're also performed by specialised services that only offer that kind of verification.


Those are the fancy ones (scratching your ID card is still annoying). The less fancy ones use a lighting scheme on the phone or zooming methods to detect fake actors and do not require a human on the other side. The fancy ones also do not scale.


Scale relative to what? The fancy ones are the only ones I've ever encountered in Germany except when dealing with an American company that insisted I upload photographs of my ID card directly to them (which due to the lack of a proper privacy policy brought me close to ditching the client that asked we use them).

There's also Postident (which involves walking into a post office and showing the clerk your ID card in person) and eID (which requires a compatible smartphone and uses the digital data on the ID card itself). Video ID is pretty ubiquitous these days, so I'm not sure why you think it "doesn't scale".

If you're suggesting alternatives that only rely on biometrics, I don't think those are sufficient to fulfill the requirements of anti-money-laundering laws in Germany even if American KYC laws might allow them.


This may actually end up making the internet better. The demand for trustworthy publications will increase which will be nice.


The scammer loses money every time you click their ad. Does anyone click these ads just for fun, as a social service?


There is the AdNauseam plugin for Firefox (and possibly other browsers) that clicks every ad it blocks; I forget there are ads on YouTube until I happen to open a link on my phone.

It keeps track of how much you've cost advertisers too.


I’m glad YouTube banned ad blockers so we can all see quality sponsors like this.


Use a YouTube proxy and ads are a thing of the past, until they are directly inserted into the video stream. In that case, SponsorBlock will help skip those parts, until they start randomising the ad position in streams. Once that happens, it is either the end of YouTube as far as I'm concerned, or time to employ some machine learning to create an ad recogniser that can take the place of SponsorBlock.

Ads are there to be blocked, not endured. Hell hath no fury like a content blocker scorned.


My wife uses the YouTube app on her iPad, and I don’t understand how she can stand having 3 commercials during a 10-minute Graham Norton clip.

I’m happy the ad blockers still work in Firefox, but I’m also not sure you can really fault the advertising company for not letting you use ad blockers in its own products.


Yeah, I'd need more breaks than that. An average 2.5 minutes of Graham Norton between breaks is a bit much!

Though more seriously, when I get sick of the adverts, which is fairly soon, I just leave. There is very little, if anything, on YouTube that I can't live without watching; I'm usually only there because of a link posted by friends/family or in an open forum. Other uses, like film/TV trailers, tend to be embedded, and there are fewer (no?) adverts that way.

Of course I'm likely not the target audience, so they won't miss my attention much, but if enough people could resist the attention sink and just leave after a couple of adverts, perhaps there would not be so many of them before, during, and after each clip. The adverts tend to be of low relevance too (often things like “this miraculous heater” scams, or yet another blast from Temu). When they claim tracking means you get more relevant adverts, that doesn't mean you get fewer adverts when they have nothing relevant in the paid-impressions queue: you'll still get the 1,000, just with a few maybe-relevant ones mixed in and the remainder less than relevant.


I agree with the comments on Reddit. It’s gonna be an interesting decade.


I would vastly prefer a couple of boring years.


I'm slightly curious about the choice of Jennifer Aniston for this. The plastic surgery she got a few years back always strikes me as extremely "uncanny valley" anyway, so it's not like the deep fake makes the effect that much more apparent.


That's scarily convincing to a lot of folks.


The original it was faked from - https://youtu.be/urVolzJS_Ws



