This assertion is not supported by the text of Biden’s Executive Order. There are a number of requirements placed on various government agencies to come up with safety evaluation frameworks, to make initial evaluations related to open weight models, and to provide recommendations back to the President within 270 days. But there’s nothing whatsoever that I can find that outlaws open weight models. There’s also little reason to think that “outlaw them” would be amongst the recommendations ultimately provided to the executive.
(I can imagine recommendations that benefit incumbents by, for instance, placing such high burden on government adoption of open weight models that OpenAI is a much more attractive purchase. But that’s not the same as what you’re talking about.)
I dunno, the EO seems pretty easy to read. Am I missing something in the text?
Oh, they have. Many of the industry leaders are importuning them for this. For industry leaders this is a business wet dream. It's "regulatory capture" at its finest.
OpenAI has lobbied them among others, already. Our elected officials accept written law from lobbyists. Money and power games don't look out for common folks except incidentally or to the extent commoners mobilize against them.
I honestly can't blame OpenAI. They likely threw a huge amount of money at training and fine-tuning their models. I don't know how open source will surpass them rather than remain a second-tier solution, unless another hugely generous event occurs, like Facebook open-sourcing LLaMA.
The only thing this directs is... consulting with a variety of groups, including industry, and writing a report based in that consultation.
So, literally, they can’t enforce it without consulting with industry, since enforcement just amounts to someone in the government holding someone else in the government accountable for consulting with, among others, the industry.
> Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights. When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model. To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall...
I believe that is the relevant section. I'm hoping they realize how dumb this is going to be.
China and Russia will keep using US models because they don't care about US laws. I think if restrictions on AI are only applied in the US, such restrictions will only put Americans at a disadvantage.
I feel like this is another case of e-book reader screens, where a single player had everybody else in a chokehold for decades and slowed innovation to a drip.
But this would be akin to them saying "you know, bytecode could be used for evil, and we'd like to regulate/outlaw the development of new virtual machines like the one we already have".
> this cat is already out of the bag, so this is pointless legislation that just hampers progress
This isn't legislation.
And it's not proposing anything except soliciting input from a broad range of civil society groups and writing a report. If the report calls for dumb regulatory ideas, that’ll be appropriate to complain about. But “there’s a thing happening that seems like it might have big effects; gather input from experts and concerned parties and then write up findings on the impacts and on what, if any, action seems warranted” is... not a particularly alarming thing.
This is insanity. I have to be missing something — what else do the safeguards prevent the LLM from doing? This has to be about more than just preventing an LLM from using bad words or showing support for Trump...
Yes, but companies will have no incentive to publish open-source models anymore. Or it could become so difficult/bureaucratic that no one will bother, and they'll keep everything closed source.
Mistral and Falcon aren't from megacorps, and aren't even from the US, and there are many other open-source Chinese models besides. And both are base models, which means they are entirely organic developments outside the US.
That’s what they told us. Turns out Google stopped innovating a long time ago. They could say stuff like this when Bard wasn’t out but now we have Mistral and friends to compare to Llama.
Now it turns out they were just bullshitting at Google.
> Now it turns out they were just bullshitting at Google.
I don't think Google was bullshitting when they wrote, documented and released Tensorflow, BERT and flan-t5 to the public. Their failure to beat OpenAI in a money-pissing competition really doesn't feel like it reflects on their capability (or intentions) as a company. It certainly doesn't feel like they were "bullshitting" anyone.
Everyone told us they had secret tech that they were keeping inside. But then Bard came out and it was like GPT-3. I don’t know man. The proof of the pudding is in the eating.
> The innovation is largely happening within the megacorps anyway
That was the part I was replying to. Whichever megacorp this is, it’s not Google.
Hey, feel free to draw your own conclusions. AI quality is technically a subjective topic. For what it's worth though, Google's open models have benched quite competitively with GPT-3 for multiple years now: https://blog.research.google/2021/10/introducing-flan-more-g...
The flan quantizations are also still pretty close to SOTA for text transformers. Their quality-to-size ratio is much better than a 7B Llama finetune, and it appears to be what Apple based their new autocorrect on.
Still, one of those corporations wants to capture the market and has a monopolistic attitude. Meta clearly chose the other direction when publishing their models and allowing us all to participate.
Then we'll create a distributed infrastructure for the creation of models: run some program and donate spare GPU cycles to generate public AI tools that will be made available to all.