
A few months until this is effectively outlawed, if the open-weights proposal due in 270 days comes to pass



This assertion is not supported by the text of Biden’s Executive Order. There are a number of requirements placed on various government agencies to come up with safety evaluation frameworks, to make initial evaluations related to open weight models, and to provide recommendations back to the President within 270 days. But there’s nothing whatsoever that I can find that outlaws open weight models. There’s also little reason to think that “outlaw them” would be amongst the recommendations ultimately provided to the executive.

(I can imagine recommendations that benefit incumbents by, for instance, placing such high burden on government adoption of open weight models that OpenAI is a much more attractive purchase. But that’s not the same as what you’re talking about.)

I dunno, the EO seems pretty easy to read. Am I missing something in the text?

https://www.whitehouse.gov/briefing-room/presidential-action...


Yeah, they're not going to outlaw them. The well-worn path is to make the regulatory burden insurmountable for small companies; that will be enough.

Disobey.


How does one provide the KYC information for open models?


More to the point, why in the world should anyone "know" me to download a file? This proposal is the very antithesis of open technology.


Nothing in the order requires KYC information for open models.


I don't see anywhere that it says weights are outlawed. The part I saw says something about making a report on risks or benefits of open weights.

I agree that it is concerning the way it's open-ended. But where is the actual outlawing?


Common sense AI control NOW. Ban assault style GPUs


Nobody needs more than 4GB of VRAM


> Common sense AI control NOW. Ban assault style GPUs

That is beautiful, I made you a shirt. https://sprd.co/ZZufv7j


the laws were designed for the GPUs of 2020; there's no way they could have predicted where the technology would go.


How seriously threatening is this? How can they enforce something this stupid without even consulting with industry leaders?


Oh, they have. Many of the industry leaders are importuning them for this. For industry leaders this is a business wet dream. It's "regulatory capture" at its finest.


OpenAI has lobbied them among others, already. Our elected officials accept written law from lobbyists. Money and power games don't look out for common folks except incidentally or to the extent commoners mobilize against them.


I honestly can't blame OpenAI. They likely threw a huge amount of money at training and fine-tuning their models. I don't see how open source can surpass them, rather than staying a second-tier solution, without another hugely generous event like Facebook open-sourcing LLaMA.


It seems like we need a distributed training network where people can donate GPU time, but then any training result is automatically open.
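
Something like federated averaging could be the core of it. A minimal sketch of the idea (pure NumPy, a toy linear model; the names and the coordinator/volunteer split are my assumptions, not an existing project):

    import numpy as np

    def local_update(weights, X, y, lr=0.5):
        # One gradient-descent step on a volunteer's private data shard.
        grad = X.T @ (X @ weights - y) / len(X)
        return weights - lr * grad

    def federated_average(updates):
        # The coordinator averages volunteers' updated weights and republishes them openly.
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    global_w = np.zeros(3)
    for _round in range(30):
        updates = []
        for _node in range(5):  # five volunteer nodes, each with its own local data
            X = rng.normal(size=(32, 3))
            y = X @ true_w + rng.normal(scale=0.1, size=32)
            updates.append(local_update(global_w, X, y))
        global_w = federated_average(updates)  # the averaged result is what gets published

    print(global_w)  # converges toward [1.0, -2.0, 0.5]

Scaling this to LLM training is of course the hard part, but the "donate cycles, get open weights out" loop is the shape of it.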


Great idea!


The only thing this directs is... consulting with a variety of groups, including industry, and writing a report based on that consultation.

So, literally, they can't enforce it without consulting with industry, since enforcement is just someone in the government holding someone else in the government accountable for consulting with, among others, the industry.


Can you elaborate please?



Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights. When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model. To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall...

I believe that is the relevant section; I'm hoping they realize how dumb this is going to be.


China and Russia will keep using US models because they don't care about US laws. I think if restrictions on AI are only applied in the US, such restrictions will only put Americans at a disadvantage.

P.S. I'm from one of the said countries


I feel like this is another case of e-book reader screens, where a single player had everybody else in a chokehold for decades and slowed innovation to a drip.


This exists solely to protect the incumbents (OpenAI et al)


OpenAI has truly lived up to its name!

Not even Oracle dared to pull shit like this w/ Java.


Eh, I may be misremembering my history, but didn't they try back in the day and quickly got smacked down for one reason or another?


They do defend their business, tooth and nail.

But this would be akin to them saying "you know, bytecode could be used for evil, and we'd like to regulate/outlaw the development of new virtual machines like the one we already have".


This is huge news... Has anybody seen any discussions around that topic?



This is not a discussion, this is just you posting verbatim what the executive order said.


> I believe that is the relevant section; I'm hoping they realize how dumb this is going to be.

How dumb it's going to be to... solicit input from a wide range of different areas and write a report?


> the removal of safeguards within the model

this cat is already out of the bag, so this is pointless legislation that just hampers progress

I have already had good-actor success with uncensored models


> this cat is already out of the bag, so this is pointless legislation that just hampers progress

This isn't legislation.

And it's not proposing anything except soliciting input from a broad range of civil society groups and writing a report. If the report calls for dumb regulatory ideas, that'll be appropriate to complain about. But "there's a thing happening that seems like it might have big effects; gather input from experts and concerned parties, then write up findings on the impacts and what, if any, action seems warranted" is... not a particularly alarming thing.


OK. The way it was portrayed is that this was "very bad" so I assumed something was decreed without sufficient input from the industry/community.


> removal of safeguards within the model

This is insanity. I have to be missing something: what else do the safeguards prevent the LLM from doing? This has to be about more than preventing an LLM from using bad words or showing support for Trump...


we just have to download the weights and use The Pirate Bay, or even eMule, just like old times
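
If weights do end up passed around on torrents, the one thing you'd really want is integrity checking against a hash published by the original distributor. A trivial stdlib sketch (the filename and expected hash below are placeholders):

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Stream the file so multi-GB weight files don't need to fit in RAM.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    # expected = "..."  # placeholder; take this from a source you trust
    # assert sha256_of("model-weights.safetensors") == expected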


Yes, but companies will have no incentive to publish open-source models anymore. Or it could become so difficult/bureaucratic that no one will bother, and they'll keep everything closed source.


What will actually happen is that innovation there will just move somewhere else (and it has partly done so).

This proposal is the US doing that bicycle/stick meme; it will backfire spectacularly.


The innovation is largely happening within the megacorps anyway, this is solely intended to make sure the innovation cannot move somewhere else.


Mistral and Falcon are not from megacorps, and not even from the US, and the same goes for many other open-source Chinese models. And both are base models, which means they are totally organic and developed outside the US.


That’s what they told us. Turns out Google stopped innovating a long time ago. They could say stuff like this when Bard wasn’t out but now we have Mistral and friends to compare to Llama.

Now it turns out they were just bullshitting at Google.


> Now it turns out they were just bullshitting at Google.

I don't think Google was bullshitting when they wrote, documented and released Tensorflow, BERT and flan-t5 to the public. Their failure to beat OpenAI in a money-pissing competition really doesn't feel like it reflects on their capability (or intentions) as a company. It certainly doesn't feel like they were "bullshitting" anyone.


Everyone told us they had secret tech that they were keeping inside. But then Bard came out and it was like GPT-3. I don’t know man. The proof of the pudding is in the eating.

> The innovation is largely happening within the megacorps anyway

That was the part I was replying to. Whichever megacorp this is, it’s not Google.


Hey, feel free to draw your own conclusions. AI quality is technically a subjective topic. For what it's worth though, Google's open models have benched quite competitively with GPT-3 for multiple years now: https://blog.research.google/2021/10/introducing-flan-more-g...

The flan quantizations are also still pretty close to SOTA for text transformers. Their quality-to-size ratio is much better than a 7b Llama finetune, and it appears to be what Apple based their new autocorrect off of.
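
For context on the quality-to-size point, here's a minimal sketch of symmetric int8 weight quantization (NumPy; illustrative of the general technique, not flan's actual scheme):

    import numpy as np

    def quantize_int8(w):
        # Symmetric per-tensor quantization: map [-max|w|, max|w|] onto [-127, 127].
        scale = np.abs(w).max() / 127.0
        q = np.round(w / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.default_rng(0).normal(size=(4096, 4096)).astype(np.float32)
    q, scale = quantize_int8(w)
    err = np.abs(w - dequantize(q, scale)).mean()
    print(f"4x smaller (fp32 -> int8), mean abs error {err:.5f}")

You trade a small, roughly uniform rounding error for a 4x size reduction, which is why a well-trained small model plus quantization can beat a larger finetune on quality-per-byte.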


Still, one of those corporations wants to capture the market and has a monopolistic attitude. Meta clearly chose the other direction by publishing their models and allowing us all to participate.


Then we'll create a distributed infrastructure for the creation of models. Run some program and donate spare GPU cycles to generate public AI tools that will be made available to all.


I really, really would like to believe this could work in practice.

Given the data volumes moved during the training phase (TB/s), I highly doubt it's possible without two magnitude-changing breakthroughs at once.
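
A rough back-of-envelope for why (all numbers assumed: a 7B-parameter model, naive full-gradient exchange, a typical home uplink):

    params = 7e9                          # assumed model size
    bytes_per_grad = 2                    # fp16 gradients
    sync_bytes = params * bytes_per_grad  # ~14 GB per naive gradient exchange
    uplink_Bps = 100e6 / 8                # 100 Mbit/s uplink -> 12.5 MB/s
    print(sync_bytes / uplink_Bps / 60, "minutes per step")  # ~19 minutes

That's ~19 minutes of pure upload per training step, before any compression or clever topology, versus milliseconds over datacenter interconnects.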



