Hacker News
Zuckerberg appeared to know Llama trained on Libgen (rollingstone.com)
57 points by bn-l 3 months ago | 77 comments



Of course he does. Heck, most of us in the early days of LLMs did the same thing. The data simply did not exist outside Google, which is why it's crazy that Google completely dropped the ball on AI this decade. They had such a huge lead in terms of data access.


They dropped the ball on cloud and need to catch up, and now it's AI. It's kinda interesting how being ahead on both data center infrastructure and AI research didn't lead to them being ahead on those products.


Google is a playground funded by Ads, and Ads make so much damn money that nothing can compete, even internally. If I were an activist investor, I'd make Ads its own company. If I were the FTC, I'd make Ads its own company.


And what are the other companies? Just GCP? Why separate those?


Ads fund Waymo.


To be fair, they did have the lead as late as 2018. It’s just they treated it like it was their PhD thesis. Didn’t protect their IP at all and let all their talent leave.


In my opinion, the AI and absorb-all-knowledge part of Google was Larry Page. After his health scare, his focus and priorities changed toward actually living his life, not Google. I think he had also realized what was happening with Google, which is why he wanted Alphabet as an umbrella organisation, but in the end he gave it up and let it be run as a normal company.


And the only reason they had the data is because they scanned every book ever for Google Books.


and every e-mail, and every document in Google Docs, and every video on YouTube ...


How was the data Google already had access to any less protected by copyright?

The data Google had was book scans, search engine indexing of arbitrary 3rd party content, and private email and documents they hosted.


Google dropping the ball on AI… given their achievements on Waymo, Gemini and Gemma (just to name a few)… does not sound like a fair statement


Those models are absolutely garbage. Terrible code understanding. Ridiculous hallucinations.


Have you actually used them recently? Gemini is top of chatbot arena, and Gemma is one of the best open models at its size.


And that makes me extremely suspicious of that ranking. I use it at least a few times a week when I have a problem that's unusual for me (to see whether it's just terrible in my domain but not in others). It has a 9/10 fail rate.

It is the best at OCR though. Not many people are talking about that. It’s a very nice thing to know.


Perhaps the more interesting question would be: exactly how did they obtain their copy/copies of Libgen?


It's hinted at in the article. If they torrented one large dataset, it's likely they did the same for Libgen.

> "I think torrenting from a corporate laptop doesn’t feel right,” wrote one engineer in April 2023, adding a smiley face emoji. (A later email acknowledged that the “SciMag” data had indeed been torrented.)


Are you asking for a way to obtain a copy?


Nope, I have no need for any <whisper>further</whisper> copies.

I'm more interested in how a for-profit corp decides to obtain a copy for development of a commercial product, and how they execute that ... whether they still have the data, and whether legal know about it :)

It's exactly not the kind of thing you can say you "found on a USB stick lying around in the car park"...


There are torrents of it. I remember one AI company saying somewhere that they just grabbed the big 7z torrent of it for their training.


You should've seen the size of it. More of a USB baton really.


If Ryobi and DeWalt can make Bluetooth speakers, ASP can get into USB drives.


> You should've seen the size of it. More of a USB baton really.

<glances at shelf with many, many external USB drives hooked up to a Pi 400>

Oh, really? :)



Which is exactly why they want to shut it down, preventing competition in the AI space.


Library Genesis is living up to its name.


"The note observed that including the LibGen material would help them reach certain performance benchmarks, and alluded to industry rumors that other AI companies, including OpenAI and Mistral AI, are “using the library for their models.”

A new chapter in "information wants to be free". Copyright was always an artificial restriction on human instinct. We now enter an age where piracy is keeping up with the Joneses, and those who respect intellectual property choose irrelevance. Prosecution becomes impossible as the laundering grows ever more sophisticated. Adaptation and acceptance will be painful, but they are the only path forward.


Tangential question: what's your take on code AIs using GPL and source-available code in their training sets, breaching both licenses?


Setting aside the ethics of such an action, my take would be that it happens (and will happen) regardless of license, and no one can stop it unless those utilizing such code leave evidence that they disregarded the license.


The models can and will reproduce a lot of their training set given the correct prompt [0]. How can you know you're infringing a license if the model doesn't tell you which repository it got "inspired" by when generating that code?

GitHub was working on a feature which supposedly tells you which repository the snippet you just got is copied from, or IOW, which repositories have similar code, sorted by date. Effectively, it pushes the blame onto you by making you spend the time you just saved investigating which repository provided the code you just got from "AI".

Supposedly, if the license is not friendly, you can delete the snippet and write your own version. :)

[0]: https://x.com/docsparse/status/1581461734665367554
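
As a purely hypothetical illustration (this is not GitHub's actual implementation, and every name below is made up), that kind of "which repositories have similar code" lookup could be a simple n-gram fingerprint index:

    # Hypothetical sketch: index token n-grams from known repositories,
    # then count overlaps with a generated snippet to rank likely sources.
    from collections import defaultdict

    def ngrams(tokens, n=8):
        # All overlapping n-token windows of the input.
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def build_index(repos):
        # repos: {repo_name: source_text}
        index = defaultdict(set)
        for name, text in repos.items():
            for gram in ngrams(text.split()):
                index[gram].add(name)
        return index

    def likely_sources(snippet, index, n=8):
        # Rank repositories by how many n-grams they share with the snippet.
        hits = defaultdict(int)
        for gram in ngrams(snippet.split(), n):
            for repo in index.get(gram, ()):
                hits[repo] += 1
        return sorted(hits.items(), key=lambda kv: -kv[1])

A real system would presumably fingerprint parsed tokens rather than whitespace-split text, but the ranking idea is the same.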


GitHub, and others, have strategies to mitigate culpability. One way, as you note, is to hand off such responsibility to the consumer. Another is to prevent the training set from passing through too close to verbatim.

To be crystal clear, I am sympathetic to the GPL and copyleft movements. I do think we've entered a time where it will be hard for license holders to enforce their rights. The laundering will only get more effective. And, per my original comment, competitive pressure will incentivize models to sail the most piratey tack.


It’s not obvious that training a model on GPL code would constitute a breach of the license.


Given the correct prompt, you can get the training set back almost or completely verbatim [0]. Getting a GPL function is enough to trigger the GPL's virality, since you effectively lift the code from a GPL codebase and add it to your codebase.

Plus, The Stack's latest version contains at least one GPL repository which their license tool failed to detect. So it's not something hypothetical in the first place.

[0]: https://x.com/docsparse/status/1581461734665367554


Sometimes I just feel like these people overestimate how much they are actually owed from these training runs.

It’s trained on 15T tokens. So how many did you provide that were genuinely novel? And how much money do you want? Like $5 from OpenAI? And $0 from Meta since it’s open source?

I personally hope we can all get on the same team with AI and treat its advancement as scientific research for the betterment of humanity.


> It’s trained on 15T tokens. So how many did you provide that were genuinely novel?

Are we suggesting that we should ditch creators' rights and instead value intellectual property along the lines of "I should be able to copy all your stuff as long as I copy lots of other stuff too, and give it all away for free or almost free..."?


That sounds like a pretty good deal to me - but I've always believed that the entire concept of "intellectual property" does overall more harm than good.


> I've always believed that the entire concept of "intellectual property" does overall more harm than good

It's fairly broken, but on balance it seems the creators are the ones getting screwed.

I did years of research in a scientific lab which resulted in <drum roll> 2 (yes two ... count them) peer-reviewed papers.

My colleagues and I did the work, wrote up the damned papers, yet to get them published we had to sign over copyright to what I'd now suggest is essentially a rent-seeking scientific publishing mafia.

All a long time ago, but I never had (and still don't have) the ability to either legally download or legally redistribute my own work...


The question of whether training an LLM is fair use is one that will have to be answered by the courts.


The point (often) is to stop the practice, not to ask for money for it.

Like, you get fined for speeding, but if you keep speeding you'll get your license revoked, and if you keep driving after that you get jail time. The payment required is punitive, but the point is to stop you.


Again, that's an opinion that will need to be tested in the courts.

You will have to explain why Hunter Thompson copying every word of every Hemingway novel isn't copyright infringement, but a computer doing the same is.


Three things:

1. Humans are not machines; arguments that because a human can learn, LLMs must be allowed to copy are not interesting.

2. Did Thompson publish the work? It sounds like you're referring to an activity Thompson did in private, to improve his skills as an author. Meanwhile, lawsuits are alleging that LLM services reproduce copyrighted materials.

3. What can be fair use at small scale is no longer fair use at large scale.


If you think that pretraining is copying then your opinions are irrelevant.


If you think courts follow your personal opinion and technical definition, you're silly. The reality is we don't yet know how the courts will decide.


> Hunter Thompson copying every word of every Hemingway novel isn't copyright infringement

I take a high-res photo of a banknote. I can print that out at home. The bad stuff starts at the step after that...


Try importing it in Photoshop and report your findings back.


No-one would possibly think of using GIMP instead...

https://www.reddit.com/r/graphic_design/comments/ah9s8n/trie...


No, not at all, just that their damages are going to be fairly low.


Of course. The alternative is that creators dictate the price for any of the infinite number of zero-cost copies, which is and always has been ridiculous.


Yes. As long as you don’t reproduce other people’s stuff specifically.


Which they do. That’s what the New York Times lawsuit is about. And in Meta’s case, they went specifically out of their way to remove the copyright notices to hide their actions.


“They” might be doing that. But this is not intrinsic to LLM usage


The conversation is specifically about OpenAI and Meta.


I disagree. This thread seems to be about LLMs on a fundamental level.


The submission is about Meta, and the comment that started the thread specifically mentioned OpenAI and Meta and no other LLMs or providers.


And the thread is about how LLM training interacts with copyright. Not whether OpenAI or Meta coincidentally blatantly copied other works.


I think the statistical arguments cover this.


> I personally hope we can all get on the same team with AI and treat its advancement as scientific research for the betterment of humanity.

s/AI/capital/.

It's painfully obvious that this is going to make material conditions worse for most people who use their minds to work instead of their hands. To these people, the "betterment of humanity" is a cruel joke.


Yeah, just like Google and Stack Overflow.


The normal way to figure this out is to negotiate. We’ll either come to a mutually agreeable amount, or they’ll decide it’s not worth the cost to use my stuff. If I think I deserve $5 from OpenAI, then I’d suggest that, and they’d accept or come back with a counteroffer or tell me I’m nuts and move on. Probably that last one.

But for some reason, these companies think they don’t need to bother, and can just use everyone’s stuff.

Wait, I phrased that wrong. For a very good reason based on long precedent, these companies know that IP law is a tool to be used by big companies against individuals and sometimes other big companies, but never by individuals against companies, so they know they don't have to bother.


> Sometimes I just feel like these people overestimate how much they are actually owed from these training runs.

It’s not about being paid for including their work, it’s about being compensated for having done so without permission. For crying out loud, they went out of their way to remove copyright notices from the pirated work.

> It’s trained on 15T tokens. So how many did you provide that were genuinely novel?

Then they can just take it out. And go ahead and take out everything you didn't have permission to include. What's that? The model is now significantly worse? Yeah, these things compound.

> And how much money do you want? Like $5 from OpenAI? And $0 from meta since it’s open source?

No, they would've wanted the work not to have been included without permission in the first place. Do you understand the world you're advocating for? You're arguing it's OK for rich people to do whatever they want if they throw some scraps on the floor for you. Not everything is about money. Unfortunately there's no other reasonable (legal and non-violent) way to punish these infringers.

> I personally hope we can all get on the same team with AI and treat its advancement as scientific research for the betterment of humanity.

What you’re expressing is “I hope everyone will stop arguing and agree with me”. These moguls care about themselves, it is incredibly naive to believe they give a rat’s ass about “the betterment of humanity”.


LLM companies aren't being funded to the tune of hundreds of billions because investors expect science to be advanced by text and image generators.

I find it very unlikely that the commodification of knowledge work will be for the betterment of humanity. Are people expecting here that just because the value of more people's labor becomes zero, we will, what, do away with money? No, it will just mean that fewer people will have the chance to earn the right to use space and resources in a meaningful way.


> I find it very unlikely that the commodification of knowledge work will be for the betterment of humanity

There's no law to force it. So of course it won't be.

Even if there were a law to force it, how would you enforce it?


Hard to treat LLMs training on your data at your expense as research for the betterment of humanity when it is specifically the private company imposing that cost on you that profits.


Does this go both ways? Can I infringe on Disney's IP on the grounds that their stories are so derivative that they aren't actually that new?

The betterment of humanity seems to involve some parties making a ton of money while the people who provided the data apparently just need to be grateful.


You can absolutely use the story frameworks that Disney has used.

You cannot make a story featuring Simba.


> You cannot make a story featuring Simba.

Just use Kimba the White Lion.[0]

[0] https://12tomatoes.com/kimba-similarity-lion-king/


The fact that they were willing to risk significant legal exposure in order to use this dataset suggests it's worth considerably more than 5 dollars to them. Zuck isn't putting his ass on the line for a Big Mac.


So if the works they're stealing aren't worth anything, why do they need it so badly?


Assuming the works have been registered with the copyright office, they're eligible for statutory damages.

The range for that is huge, though: it can be in the hundreds of dollars per work, or, if the infringement is shown to be willful, a judge can award up to $150,000 per work.
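
To put rough numbers on that (the work count below is a made-up assumption for illustration, not a figure from the article; the per-work amounts are the statutory bounds from 17 U.S.C. § 504(c)):

    # Hypothetical back-of-the-envelope: statutory damages for N registered works.
    registered_works = 100_000          # assumed count, purely illustrative
    per_work_minimum = 750              # ordinary statutory minimum per work
    per_work_willful_max = 150_000      # maximum per work for willful infringement

    print(registered_works * per_work_minimum)      # 75000000    -> $75 million
    print(registered_works * per_work_willful_max)  # 15000000000 -> $15 billion

Even at the statutory minimum, the totals get large fast once the work count is in the hundreds of thousands.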


Fair point, seems willful here


There is absolutely no logical pathway between the current flavor of hardcore, free-for-all, individualistic capitalism and what you describe here.


Stealing is still illegal


Well, US copyright law allows statutory damages of up to $30,000 per work infringed, and up to $150,000 if done deliberately.

So I think $150,000 per copyrighted work ingested is fair.


[flagged]


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


I'll do my best to tone things down. However, I struggle a bit with your observation. The posted article is about one of the wealthiest men in the world likely engaging in the largest copyright violation in history for his own financial benefit. Such antisocial behavior is almost certainly linked to male genetics. And this same man just recently publicly stated that he believes the world needs more, not less, of his antisocial, aggressive tendencies.

I would argue that you don't like what I have to say and are simply trying to squash my opinions. If you feel the post itself doesn't belong here, then I agree - and to that extent, I also agree to do my best to refrain from commenting upon submissions that themselves do not belong here.

But, and here is my biggest objection: I have strong opinions regarding the dangers male genetics pose to world order and the future of humanity. And males uneducated about the subject react in the manner they know best: aggressive anger; a reaction I am not inclined to back away from (having my own aggressive male genetics).


"almost certainly" sounds like quite a stretch to me, but if you wanted to make a substantive point about male genetics that might be ok. Maybe. But it might also be a generic flamewar tangent (which we definitely don't want on HN), so a comment like that would need to be written in a particularly flame-retardant style.

Either way, though, a dismissive one-liner like the GP comment is definitely against the site guidelines.


I understand. I only responded to you to let you know I definitely had a reason behind my post. You and I could have a private conversation some day about why "almost certainly" is not a stretch. And it is particularly relevant with male billionaires currently exercising their aggressive genetics in the political arena more openly than ever. But, as you say, that's a discussion best held elsewhere than HN.

I love it here. I began programming back in 1978, and although I spent my life as a trial attorney, I have been actively programming ever since, more so than ever now that I am retired. My primary reason for being here is to keep up with hi-tech. I will definitely do my best to limit my comments to the tech arena.


I don't think that Dan is saying the discussion is best held elsewhere. And I don't think he's saying it shouldn't be held. I take his point to be that if you are going to start that discussion, you are responsible for the overall effect it's going to have on the conversation.

When he says "a comment like that would need to be written in a particularly flame-retardant style" he's asking you to write your comments the same way you've done for the two comments directed at him: as if you are trying to persuade someone intelligent who happens to disagree with you.

There's no need to "limit your comments to the tech arena". It sounds like we'd be better off if you bring your full experience: not too many people know both law and programming. But please do it in the style of these excellent follow-up responses rather than the flame-bait you started with!



