The problem it solves is providing some baseline framework for lawmakers and the legal system to even discuss AI and its impacts based on actual data instead of feels. That's why so much of it is about requiring tech companies to publish safety plans, transparency reports, and incident reports, and why the penalty for noncompliance is only $10,000.
A comprehensive AI regulatory action is way too premature at this stage, and do note that California is not the sovereign responsible for U.S. copyright law.
If I had a requirement to either do something I didn't want to do or pay a nickel, I'd just fake doing what needed to be done and wait for the regulatory body to fine me 28 years later, after I'd exhausted my appeal chain. Luckily, by then inflation has turned the nickel into a penny (now defunct), and I can rely on my right to settle debts in legal tender to burn through another 39 years of appeals.
And if someone later proposes more aggressive action, they now have a record of all the times you've failed to do that thing, which they can point to as evidence that the current level of penalties isn't sufficient.
That said, the penalty is not just $10k. It's $10k for an unknowing violation; $100k for a knowing violation that "does not create a material risk of death, serious physical injury, or a catastrophic risk", or for an unknowing violation that does create that risk; and $10M for a knowing violation that does create it.
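In pseudo-code terms, the tiering reads roughly like this (my paraphrase of the summary above, not the statutory text, so treat the exact conditions as assumptions):

```python
def penalty(knowing: bool, causes_material_risk: bool) -> int:
    """Sketch of the tiered penalty described above (my paraphrase).

    causes_material_risk = the violation creates a material risk of death,
    serious physical injury, or a catastrophic risk.
    """
    if knowing and causes_material_risk:
        return 10_000_000  # knowing violation that creates the risk
    if knowing or causes_material_risk:
        return 100_000     # knowing but "harmless", or unknowing but harmful
    return 10_000          # unknowing violation with no material risk
```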
I imagine the legal framework, and the small penalty for failing to actually publish something, can also play into the knowing/unknowing determination if they investigate you.
28 years for an appeals chain is a bit longer than most realities I'm aware of. A dozen years at the top end would be more in line with what I've seen out there.
In general though, it's easier to just comply, even for the companies. It helps with PR and employee retention, etc.
They may fudge the reports a bit, even on purpose, but all groups of people do this to some degree. The question is, when does fudging go too far? There is some gray, but there isn't an infinite amount of gray.
For sure, it was meant to go a bit too far, I guess. I'll be more real.
This is to allow companies to make entirely fictitious statements that they will claim satisfy their interpretation of the law. The lack of fines will suggest compliance, and proving the statements are fiction isn't ever going to happen anyway.
But the fine is also so low that inflation will eat away at it over those 12 years.
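Back-of-the-envelope, assuming ~3% average annual inflation (my number, nothing official):

```python
# Real value of a $10k fine after ~12 years of appeals,
# assuming 3% average annual inflation (an assumption, not a forecast).
fine = 10_000
years = 12
real_value = fine / (1.03 ** years)
print(f"${real_value:,.0f}")  # roughly $7,000 in today's dollars
```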
>and why the penalty for noncompliance is only $10,000.
I think they were off by an order of magnitude on this fine. The PR hit from reporting anything bad about your AI is probably worth more than the fine for non-compliance. $100k would at least start to dent the bumper.
If I had to shoot from the hip and guess: 30% of the current crop of AI startups are going to make it at best. Frankly that feels insanely generous but I’ll give them more credit than I think they deserve re: ideas and actual staying power.
Many will crash in rapid succession. There isn’t enough room for all these same-y companies.
Historically speaking, people saying "history won't repeat this time, it's different" have a pretty bad track record. Do we remember what the definition of "insanity" is?
This seems different to me than other hype bubbles because "the human intelligence/magical worker replacement machine isn't in that guy's box or the other guy's box or even the last version of my box, but it surely is in this box I have here for you, for a price..." is just about infinitely repackageable and resellable to an infinite (?) contingent of gullible old men with access to functionally infinite money.
Not sure if that’s lack of imagination or education on my part or what.
That's a weird example, because they were vindicated. The .com economy really did massively change the way business is done. They just (fatally, for most of the companies) overestimated how quickly the change would happen.
I think a bubble implies that the industry was overvalued compared to its long term fundamental value.
Having a dip and then a return to growth and exceeding the previous peak size isn't exactly a bubble. The internet and tech sector has grown to dominate the global economy. What happened was more of a cyclical correction, or a dip and rebound.
>Having a dip and then a return to growth and exceeding the previous peak size isn't exactly a bubble.
That's exactly what a bubble is. You blow too fast and you get nothing but soap. You blow slow, controlled breaths balancing speed and accuracy and you get a big, sustainable bubble.
No one's saying you can't blow a new bubble, just that this current one isn't long for this world. The way investment works, though, the first bubble popping will scare off a lot of the biggest investors who just want a quick turnaround.
Yes. This seems to be what you're doing: carrying the metaphor too far. I believe the common usage relates asset prices to intrinsic or fundamental values.
There's a graduated penalty structure based on how problematic it is. It goes from $10k to $100k and then $10M, depending on whether you're in violation knowingly or unknowingly and whether there's a risk of death or serious physical harm, etc.