
The problem it solves is providing any sort of baseline framework for lawmakers and the legal system to even discuss AI and its impacts based on actual data instead of feels. That's why so much of it is about requiring tech companies to publish safety plans, transparency reports and incidents, and why the penalty for noncompliance is only $10,000.

A comprehensive AI regulatory action is way too premature at this stage, and do note that California is not the sovereign responsible for U.S. copyright law.



If I had a requirement to either do something I didn't want to do or pay a nickel, I'd just fake doing what needed to be done and wait for the regulatory body to fine me 28 years later after I exhausted my appeal chain. Luckily, inflation turned the nickel into a penny, now defunct, and I rely on the ability to pay debts in legal currency to use another 39 years of appeals.


And if later on someone proposes more aggressive action, they now have a record of all the times you've failed to do that thing, which they can point to as evidence that the current level of penalties is not sufficient.

That said, the penalty is not just $10k. It's $10k for an unknowing violation; $100k for either a knowing violation that "does not create a material risk of death, serious physical injury, or a catastrophic risk" or an unknowing violation that does create that risk; and $10m if you knowingly violate it and it does create a risk of death or serious physical injury, etc.
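The tiering is effectively a two-factor lookup on intent and harm. A rough sketch of the structure as described above (illustrative only; the names and function are mine, not the bill's text):

```python
def penalty(knowing: bool, material_risk: bool) -> int:
    """Sketch of the graduated fine tiers described in the thread:
    $10k   - unknowing violation with no material risk
    $100k  - knowing violation without material risk, OR an
             unknowing violation that does create material risk
    $10m   - knowing violation that creates a material risk of
             death, serious physical injury, or catastrophic risk
    """
    if knowing and material_risk:
        return 10_000_000
    if knowing or material_risk:
        return 100_000
    return 10_000
```

So under this reading, only the lowest tier is the oft-cited $10k; intent and actual risk each bump the fine by an order of magnitude or more.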

I imagine that if you're investigated, failing to actually publish anything can also factor into the knowing/unknowing determination.


28 years for an appeals chain is a bit longer than most realities I'm aware of. A dozen years at the top end would be more in line with what I've seen out there.

In general though, it's easier to just comply, even for the companies. It helps with PR and employee retention, etc.

They may fudge the reports a bit, even on purpose, but all groups of people do this to some degree. The question is, when does fudging go too far? There is some gray, but there isn't infinite amounts of gray.


For sure, it was meant to be a bit of an exaggeration. I'll be more concrete.

This allows companies to make entirely fictitious statements that they will claim satisfy their interpretation of the law. The absence of fines will suggest compliance, and proving the statements are fiction is never going to happen anyway.

It's also such a low fine that inflation will eat away at it over those 12 years.


12 years would also be extraordinary; the great majority of these are dealt with within a year or two.

But I also understand that you were using hyperbole to emphasize your point, so there's not actually a reason to argue this.


In general, when you are appealing fines you have to have posted them as bond.

It's a low fine, but your particular objection is invalid.


>and why the penalty for noncompliance is only $10,000.

I think they were off by an order of magnitude on this fine. The PR hit from reporting anything bad about your AI is probably worth more than the fine for non-compliance. $100k would at least start to dent the bumper.


Hint: It's low because the tech companies are already in agreement with the legislation. This is a huge win compared to a blanket regulatory push.


I thought OpenAI campaigned real hard against this one?


I want to hope so. They'll agree until the bubble is about to burst, and then suddenly eating that fine might look like the better option.


There is no bubble. Your priors are not serving you well here.


If I had to shoot from the hip and guess: 30% of the current crop of AI startups are going to make it at best. Frankly that feels insanely generous but I’ll give them more credit than I think they deserve re: ideas and actual staying power.

Many will crash in rapid succession. There isn’t enough room for all these same-y companies.


I thought a 10% success rate was the baseline for startups -- which would make your 30% estimate generous indeed.


I'm admittedly hedging my bets in both directions a bit since really none of us know anything about what's going to happen lol


Why do you say that there is no bubble? Do you feel the investment that went into this justifies the results and returns?


The AI bubble is a thing but so was the dot com bubble. It doesn't mean the technology is useless.


Historically speaking, people saying "history won't repeat this time, it's different" have a pretty bad track record. Do we remember what the definition of "insanity" is?


This seems different to me than other hype bubbles, because "the human intelligence/magical worker replacement machine isn't in that guy's box, or the other guy's box, or even the last version of my box, but it surely is in this box that I have here for you, for a price..." is just about infinitely repackageable and resellable to an infinite (?) contingent of old gullible men with access to functionally infinite money. Not sure if that's lack of imagination or education on my part or what.


“The .com economy is a new business paradigm: the rules have changed.”

- pioneers in wrongness 25 years ago. Oft copied, but never vindicated.


That's a weird example, because they were vindicated. The .com economy really did massively change the way business is done. They just (fatally, for most of the companies) overestimated how quickly the change would happen.


Having a bubble doesn’t mean the industry entirely implodes and goes away forever or otherwise doesn’t change things. It’s often a phase.


I think a bubble implies that the industry was overvalued compared to its long term fundamental value.

Having a dip and then a return to growth and exceeding the previous peak size isn't exactly a bubble. The internet and tech sector has grown to dominate the global economy. What happened was more of a cyclical correction, or a dip and rebound.


>Having a dip and then a return to growth and exceeding the previous peak size isn't exactly a bubble.

That's exactly what a bubble is. You blow too fast and you get nothing but soap. You blow slow, controlled breaths balancing speed and accuracy and you get a big, sustainable bubble.

No one's saying you can't blow a new bubble, just that this current one isn't long for this world. The way investments work, the first bubble popping will scare off a lot of the biggest investors who just want a quick turnaround.


A bubble's when values inflate well beyond intrinsic valuations.

You can have a crash without this having happened.


Yes, indeed. Though I'd change "intrinsic valuations" to "rapid change in valuations".

If you deflate over time slowly (or simply slow growth expectations) you can prevent a pop. That doesn't tend to be what historically happens, though.


> to "rapid change in valuations".

Yes. This seems to be what you're doing-- carrying the metaphor too far. I believe the common use relates the asset prices to intrinsic or fundamental values.


There's a graduated penalty structure based on how problematic it is. It goes from $10k to $100k and then $10m, depending on whether you're in violation knowingly or unknowingly and whether there's a risk of death or serious physical harm, etc.


> A comprehensive AI regulatory action is way too premature at this stage

Funny, I think it is overdue.


> I think it is overdue.

Why?



