
You're actually making my point for me. 18 USC § 842 criminalizes distributing information with knowledge or intent that it will be used to commit a crime. That's criminal liability for completed conduct with a specific mens rea requirement. You have to actually know or intend the criminal use.

SB 53 is different. It requires companies to implement filtering systems before anyone commits a crime or demonstrates criminal intent. Companies must assess whether their models can "provide expert-level assistance" in creating weapons or "engage in conduct that would constitute a crime," then implement controls to prevent those outputs. That's not punishing distribution to someone you know will commit a crime. It's mandating prior restraint based on what the government defines as potentially dangerous.

Brandenburg already handles this. If someone uses an AI to help commit a crime, prosecute them. If a company knowingly provides a service to facilitate imminent lawless action, that's already illegal. We don't need a regulatory framework that treats the capability itself as the threat.

The "AIs don't have speech rights" argument misses the point. The First Amendment question isn't about the AI's rights. It's about the government compelling companies (or anyone) to restrict information based on content. When the state mandates that companies must identify and filter certain types of information because the government deemed them "dangerous capabilities," that's a speech restriction on the companies.

And yes, companies control their outputs now. The problem is SB 53 removes that discretion by legally requiring them to "mitigate" government-defined risks. That's compelled filtering. The government is forcing companies to build censorship infrastructure instead of letting them make editorial choices.

The real issue is precedent. Today it's bioweapons and cyberattacks. But once we establish that government can mandate "safety" assessments and require mitigation of "dangerous capabilities," that framework applies to whatever gets defined as dangerous tomorrow.



I know HN's guidelines discourage point-by-point rebuttals, but it's hard not to when there are this many points to answer.

> You have to actually know or intend the criminal use.

> If a company knowingly provides a service to facilitate imminent lawless action, that's already illegal.

And if I tell an AI chatbot that I'm intending to commit a crime, and somehow it assists me in doing so, the company behind that service should have knowledge that its service is helping people commit crimes. That's most of SB 53 right there: companies must demonstrate actual knowledge about what their models are producing and have a plan to deal with the inevitable slip-up.

Companies do not want to be held liable for their products convincing teens to kill themselves, or supplying the next Timothy McVeigh with bomb-making info. That's why SB 53 exists; this is not coming from concerned parents or the like. The tech companies are scared shitless that they will be forced to implement even worse restrictions when some future Supreme Court case holds them liable for some disaster that their AIs assisted in creating.

A framework like SB 53 gives them the legal basis to say, "Hey, we know our AIs can help do [government-defined bad thing], but here are the mitigations in place and our track record, all in accordance with the law".

> When the state mandates that companies must identify and filter certain types of information because the government deemed them "dangerous capabilities," that's a speech restriction on the companies.

Does the output of AI models represent the company's speech, or does it not? You can't have your cake and eat it too. If it does, then we should treat it like speech and hold companies responsible for it when something goes wrong. If it doesn't, then the entire First Amendment argument is moot.

> The government is forcing companies to build censorship infrastructure instead of letting them make editorial choices.

Here's the problem: LLMs by their nature do not allow companies to fully implement their editorial choices. There will always be mistakes, and eventually one will be costly enough to put AI on the national stage. That is the entire reason behind SB 53 and the desire for a framework around AI technology, not just from the state, but from the companies producing the AIs themselves.


You're conflating individual criminal liability with mandated prior restraint. If someone tells a chatbot they're going to commit a crime and the AI helps them, prosecute under existing law. But the company doesn't have knowledge of every individual interaction. That's not how the knowledge requirement works. You can't bootstrap individual criminal use into "the company should have known someone might use this for crimes, therefore they must filter everything."

The "companies want this" argument is irrelevant. Even if true, it doesn't make prior restraint constitutional. The government can't delegate its censorship powers to willing corporations. If companies are worried about liability, the answer is tort reform or clarifying safe harbor provisions, not building state-mandated filtering infrastructure.

On whether AI output is the company's speech: The First Amendment issue here isn't whose speech it is. It's that the government is compelling content-based restrictions. SB 53 doesn't just hold companies liable after harm occurs. It requires them to assess "dangerous capabilities" and implement "mitigations" before anyone gets hurt. That's prior restraint regardless of whether you call it the company's speech or not.

Your argument about LLMs being imperfect actually proves my point. You're saying mistakes will happen, so we need a framework. But the framework you're defending says the government gets to define what counts as dangerous and mandate filtering for it. That's exactly the infrastructure I'm warning about. Today it's "we can't perfectly control the models." Tomorrow it's "since we have to filter anyway, here are some other categories the state defines as harmful."

If companies can't control their models perfectly because of the nature of the technology, that's a product liability question, not a reason to establish government-mandated content filtering.


> You can't bootstrap individual criminal use into "the company should have known someone might use this for crimes, therefore they must filter everything."

Lucky for me, I am not. The company already has access to each and every prompt and response; the EULAs of every tool I use say as much. But that's beside the point.

Prior restraint is only unconstitutional if it is restraining protected speech. Thus far, you have not answered the question of whether AI output is speech at all, but have assumed prior restraint to be illegal in and of itself. We know this is not true because of the exceptions you already mentioned, but let me throw in another example: the many broadcast stations regulated by the FCC, who are currently barred from "news distortion" according to criteria defined by (checks notes) the government.


Having technical access to prompts doesn't equal knowledge for criminal liability. Under 18 USC § 842, you need actual knowledge that specific information is being provided to someone who intends to use it for a crime. The fact that OpenAI's servers process millions of queries doesn't mean they have criminal knowledge of each one. That's not how mens rea works.

Prior restraint is presumptively unconstitutional. The burden is on the government to justify it under strict scrutiny. You don't have to prove something is protected speech first. The government has to prove it's unprotected and that prior restraint is narrowly tailored and the least restrictive means. SB 53 fails that test.

The FCC comparison doesn't help you. In Red Lion Broadcasting Co. v. FCC, the Supreme Court allowed broadcast regulation only because of spectrum scarcity, the physical limitation that there aren't enough radio frequencies for everyone. AI doesn't use a scarce public resource. There's no equivalent justification for content regulation. The FCC hasn't even enforced the fairness doctrine since 1987.

The real issue is you're trying to carve out AI as a special category with weaker First Amendment protection. That's exactly what I'm arguing against. The government doesn't get to create new exceptions to prior restraint doctrine just because the technology is new. If AI produces unprotected speech, prosecute it after the fact under existing law. You don't build mandatory filtering infrastructure and hand the government the power to define what's "dangerous."



