
I've written a submission to the authors of this bill, and made it publicly available here:

https://www.answer.ai/posts/2024-04-29-sb1047.html

The EFF have also prepared a submission:

https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf

A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm. But of course, it's impossible to control what someone else does with your model -- regardless of how you train it, it can be fine-tuned, prompted, etc. by users for their own purposes. And even then, you can't really know why a model is doing something -- for instance, AI researchers Arvind Narayanan and Sayash Kapoor point out:

> Consider the concern that LLMs can help hackers generate and send phishing emails to a large number of potential victims. It’s true — in our own small-scale tests, we’ve found that LLMs can generate persuasive phishing emails tailored to a particular individual based on publicly available information about them. But here’s the problem: phishing emails are just regular emails! There is nothing intrinsically malicious about them. A phishing email might tell the recipient that there is an urgent deadline for a project they are working on, and that they need to click on a link or open an attachment to complete some action. What is malicious is the content of the webpage or the attachment. But the model that’s being asked to generate the phishing email is not given access to the content that is potentially malicious. So the only way to make a model refuse to generate phishing emails is to make it refuse to generate emails.

Nearly a year ago I warned that bills of this kind could hurt, rather than help, safety, and could actually tear down the foundations of the Enlightenment:

https://www.fast.ai/posts/2023-11-07-dislightenment.html



> A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm.

Build a model that is trained on a corpus of gun designs.

Should be an interesting court case and social experiment.


I was about to point out how quick a lot of people are to blame the gun (the tool) and the manufacturer, but when it comes to "AI" (the tool) they suddenly turn 180 degrees and blame the user.

The reasonable take, of course, is that tools are never to blame; they are just tools, after all. Blame the bastard using them for nefarious ends, whether it's guns or "AI" or whatever else the case may be.


The World Trade Center bombing (1993).

The Oklahoma City bombing (1995).

Most people with high-school-level chemistry and a trip to the library can cause a lot of damage.

David Hahn (https://en.wikipedia.org/wiki/David_Hahn), the radioactive Boy Scout, single-handedly created a Superfund site.

This will quickly turn into a First Amendment case and die in court, I would think.


Typically it also requires precursor materials.

I am against laws and systems of government that turn me into a suspect just for learning information. It is not ok that a secret investigation into our private lives is triggered simply by being curious.



