
What did they do wrong in this case?


They intentionally created the system that allows this sort of abuse to happen? If you don't hold Google responsible for giving a few megacorps carte blanche to remove any content on YouTube that they don't like, who would you hold responsible?


> who would you hold responsible?

The legislature? Corporations obviously pursue profits within the legal framework, and this is the most profitable path within the legal framework. Lawmakers can challenge or reduce this sort of abusive pressure by content publishers in a number of ways but they don't. The current legal framework seems to tacitly allow publishers to effectively move as a bloc in punishing any platform, whether it's by Google or someone else, that refuses to accept their anti-consumer practices.

But of course you can keep blaming one corporation. Bad Google, why don't you just voluntarily do what I want, profits be damned?! See if that changes anything.


Correct me if I’m wrong, but AFAIK, Google was legally required by the DMCA to make a Content ID system


No, there was no such requirement in the DMCA. Remember, we're talking about legislation that was written in the late '90s — the idea of Content ID would have been completely alien to them. YouTube has actually implemented a completely parallel system to the DMCA's takedown process.


Content ID was not at all alien to the 90s Congress; the DMCA explicitly calls for the industry to develop such systems, though it stops short of requiring YouTube to do all the technical heavy lifting:

(1) Accommodation of technology.—The limitations on liability established by this section shall apply to a service provider only if the service provider—

(A) has adopted and reasonably implemented, and informs subscribers and account holders of the service provider’s system or network of, a policy that provides for the termination in appropriate circumstances of subscribers and account holders of the service provider’s system or network who are repeat infringers; and

(B) accommodates and does not interfere with standard technical measures.

(2) Definition.—As used in this subsection, the term “standard technical measures” means technical measures that are used by copyright owners to identify or protect copyrighted works and—

(A) have been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process;

(B) are available to any person on reasonable and nondiscriminatory terms; and

(C) do not impose substantial costs on service providers or substantial burdens on their systems or networks.


The passage you quoted doesn't call for anything closely resembling YouTube's current system. All it says is that YouTube (where YouTube is standing in for any provider) has to be open to banning repeat infringers, and can't claim DMCA protections if it tries to stop content holders from identifying their copyrighted works through software that is the product of "an open, fair, voluntary, multi-industry standards process."

In the context where the DMCA was written, it seems most likely that they were thinking of, say, Napster banning known copyright enforcement agencies in order to prevent them from scanning its network.


Content ID goes above and beyond the DMCA. Media companies asserted that, given YouTube's size, relying on the DMCA's reactive takedown process was insufficient. To avoid the media companies taking legal action (and possibly setting bad legal precedent), Google voluntarily created Content ID to proactively identify potential infringement.

It worked - the media companies backed off. It's also a massive failure, in that it's been abused to the detriment of the content creators since it came into being.


Sort of. They built a video platform that they couldn't effectively moderate, so they built Content ID because they don't have the manpower to manually review everything. Content ID is just a side effect of Google relying on algorithms to do everything.
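To make the "relying on algorithms" point concrete, here's a toy sketch of reference matching in the spirit of Content ID. It's purely illustrative: it assumes exact chunk hashing, the real system uses robust audio/video fingerprints, and every name below is hypothetical.

    # Toy reference-matching sketch; nothing like the real Content ID pipeline.
    # Assumes exact chunk hashes, where real systems use perceptual fingerprints.
    import hashlib

    reference_index = {}  # hypothetical map: chunk fingerprint -> rights holder

    def fingerprint(chunk: bytes) -> str:
        # Stand-in for a perceptual fingerprint. An exact hash breaks on any
        # re-encode or trim, which is exactly why the real problem is hard.
        return hashlib.sha256(chunk).hexdigest()

    def register_reference(rights_holder: str, media: bytes, chunk=1 << 20):
        # Rights holders submit reference files; every chunk gets indexed.
        for i in range(0, len(media), chunk):
            reference_index[fingerprint(media[i:i + chunk])] = rights_holder

    def scan_upload(media: bytes, chunk=1 << 20):
        # Returns the rights holders whose reference chunks appear verbatim.
        matches = set()
        for i in range(0, len(media), chunk):
            holder = reference_index.get(fingerprint(media[i:i + chunk]))
            if holder:
                matches.add(holder)
        return matches

The hard part the toy skips is robustness: a production matcher has to survive re-encoding, cropping, speed changes, and partial use, which is why real matching is fuzzy and why fuzzy matches end up disputed.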


> because they don't have the manpower to manually review everything.

Because they don't want to pay for the manpower to manually review everything.

YouTube gets 250-300 million hours of video uploaded every year. At $20 an hour to review it, that's $6b. Call it $10b to allow for overhead.

Alphabet made $86b last year.
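A quick back-of-the-envelope check of those numbers, using only the rough figures quoted in this subthread (estimates, not official figures):

    # Back-of-the-envelope check using the rough figures from this subthread.
    hours_uploaded_per_year = 300_000_000  # upper end of the 250-300M estimate
    reviewer_cost_per_hour = 20            # USD, assumed reviewer wage

    raw_cost = hours_uploaded_per_year * reviewer_cost_per_hour
    padded_cost = 10e9                     # the thread's round-up for overhead
    gross_profit = 86e9                    # the "$86b" figure above

    print(f"raw review cost: ${raw_cost / 1e9:.1f}B")                  # $6.0B
    print(f"padded estimate: ${padded_cost / 1e9:.1f}B")               # $10.0B
    print(f"share of gross profit: {padded_cost / gross_profit:.0%}")  # ~12%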


It’s possible that a human review system would be even worse. Have you seen the studies that show how insurance adjusters, working from the same inputs, with clear evaluation rules, have spreads of > 40% for the same case?


> Alphabet made $86b last year.

They did not, and I'm curious where the mixup came from. They did $46.1b in revenue and $10.7b in net income in Q4 2019 (the quarter covered by the earnings report below).

https://techcrunch.com/2020/02/03/alphabet-earnings-show-goo...


https://www.macrotrends.net/stocks/charts/GOOG/alphabet/gros...

Alphabet gross profit for the twelve months ending September 30, 2019 was $86.264B, a 16.62% increase year-over-year.


I find it highly unlikely that more than 1/8th of Alphabet's profit comes from YT, so a ~$10b review bill would turn it into a cost center... and I'm willing to bet it'd be deep, deep in the red.



