Hacker News

I found this website with the actual bill text along with annotations [0]. Section 22757.12 seems to contain the actual details of what they mean by "transparency".

[0] https://sb53.info/



> “Artificial intelligence model” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

Correct me if I'm wrong, but it sounds like this definition covers basically all automation of any kind. Like, a dumb lawnmower responds to the input of the throttle lever and the kill switch and generates an output of a spinning blade which influences the physical environment, my lawn.

> “Catastrophic risk” means a foreseeable and material risk that a large developer’s development, storage, use, or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident, scheme, or course of conduct involving a dangerous capability.

I had a friend who cut his toe off with a lawnmower. I'm pretty sure more than 50 people a year injure themselves with lawn mowers.


Yeah, you're wrong - a court simply isn't going to consider a lawnmower's translation of throttle input to motor power as "inference". The principles of statutory interpretation require courts to consider the context and purpose of the legislation, and everyone knows this is about GPT-5, not lawnmowers.

In any case, that definition is only used to further define "foundation model": "an artificial intelligence model that is all of the following: (1) Trained on a broad data set. (2) Designed for generality of output. (3) Adaptable to a wide range of distinctive tasks." This legislation is very clearly not supposed to cover your average ML classifier.


These types of comments are so annoying.

"Everything is the same as everything" as an argumentative tactic and mindset is just incredibly intellectually lazy.

As soon as anyone tries to do anything about anything, ever, anywhere, people like you come out of the woodwork and go "well what about this perfectly normal thing? Is that illegal now too???"

Why bother making bombs illegal? I mean, I think stairs kill more people yearly har har har! What, now it's illegal to have a two story house?

Also, elephant in the room: lawnmowers absolutely fucking do come with warnings and safety research. If you develop a lawnmower, YES you have to research its safety. YES that's perfectly reasonable. NO that's not an undue burden. And YES everyone is already doing that.


Also, people seem to forget that when laws are challenged or cases arise that judges exist who are charged with making reasonable interpretations of the law.


It doesn't really "infer ... how to" do anything in that example; rather it simply does those things based on how it was originally designed.

I'm not saying that it's a great definition, but I will correct you, since you asked.


If a single design of automated lawnmower cut off 50 toes it should absolutely be investigated.

Perhaps the result of that investigation is there is no fault on the machine, but you don't know that until you've looked.


The reason I bring up the definition is that "AI" is defined so loosely as to include dumb lawn mowers.

In my friend's case, he was mowing on a hill, braced to pull the lawnmower back, and jerked it back onto his foot.

Edit: Looked it up, the number of injuries per year for lawn mowers is around 6k [1]

[1] https://www.wpafb.af.mil/News/Article-Display/Article/303776...


Oh that reminds me of strollers that can amputate baby fingers. It happened multiple times. These people need to be sued to death.


Around 20,000 people die every year from falling off a roof or a ladder.

Some things are inherently dangerous.


Motion-sensing lights? Or light-sensing ones? Motion sensors, say, with automated doors...

They infer from fuzzy input when to activate...


"The Butlerian Jihad was a cataclysmic, millennia-long holy war that completely eradicated artificial intelligence, computers, and sentient robots from human civilization. Taking place over 10,000 years before the events of the original novel, the Jihad was a violent reaction to humanity's over-reliance on and eventual enslavement by "thinking machines". The ban on advanced AI became a foundational law of the Galactic Imperium."


Thanks! We'll add that link to the top text.



