It's too late to stop "deep fakes". That technology is already in Photoshop and even built into some cameras. And if you regulate that, Hollywood special effects shops may have to move out of state.
As for LLMs making it easier for people to build destructive devices, Google can provide info about that. Or just read some "prepper" books and magazines. That ship sailed long ago.
Real threats are mostly about how much decision power companies delegate to AIs. Systems terminating accounts with no appeal are already a serious problem.
An EU-type requirement for appeals, a requirement for warning notices, and the right to take such disputes to court would help there. It's not the technology.
> Systems terminating accounts with no appeal are already a serious problem.
Right. The issue isn't how "smart" ML models will get, or whatever ignorant framing about intelligence and existential risk gets made up by people who don't understand the technology.
The real concern is the dumb use of algorithmic decision making without recourse, which is just as valid a worry whether the decision comes from an if statement or a trillion-parameter LLM.
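To make that concrete, here is a minimal sketch (all names hypothetical, not any real company's system) of the kind of recourse-free pipeline being described. The trigger can be a hard-coded rule or an opaque model score; the harmful part is that the termination fires automatically either way, with no notice and no appeal path:

    # Hypothetical illustration: the same irreversible action is wired to
    # either a trivial rule or an ML score. Neither path offers recourse.

    def fraud_score(account) -> float:
        # Stand-in for "a trillion parameter LLM" or any opaque model.
        return 0.97

    def terminate(account):
        # The actual problem: this fires automatically, with no warning
        # notice, no human review, and no appeal route.
        print(f"Account {account['id']} terminated.")

    def review(account):
        # The "if statement" version of algorithmic decision making...
        if account["chargebacks"] > 2:
            terminate(account)
        # ...and the big-model version. The recourse problem is identical.
        elif fraud_score(account) > 0.9:
            terminate(account)

    review({"id": 42, "chargebacks": 0})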
According to the bill, a model has "hazardous capabilities" if and only if the existence of the model makes it "significantly" easier to cause the damages the bill covers. If Google is equally good at telling you how to build a bomb and Photoshop is equally good at producing deepfakes, then the bill takes no issue with your LLM.
>As for LLMs making it easier for people to build destructive devices, Google can provide info about that. Or just read some "prepper" books and magazines. That ship sailed long ago.
However, those can be and are tracked. What makes them nervous is the ability to do that on your own, with no possible way for someone to track or catch you. Same with deepfakes. They don't care if you are doing it with Photoshop, because that can be reviewed. They care that you can do it and not be caught/stopped/punished for it.
OK, this is insanity. The thing keeping people from making destructive devices is the difficulty of synthesizing white fuming nitric acid and similar required precursors. You can Google whatever shit you want if you use someone else's wifi, and you should be able to at least Google whatever you want without the feds showing up.
The danger of AI has nothing to do with what the average Joe might try to do; it has everything to do with what soulless corporations are doing to you right now and how it enables them to be even worse in the future.
Right now your roof is being scanned by aircraft with cameras, and AI is being used to determine how old it is and whether there are tree branches nearby. They're also looking at and classifying objects in your back yard to determine safety risks. It's not horribly accurate, but because of the scale it doesn't matter to the companies; you just get fucked. Accidentally bag something you didn't scan at the self-checkout? They have AI for that too; there are multiple reports of people being hunted down and charged with theft over simple mistakes.
Your chances of having your life ruined or degraded because of AI are massively higher than your chances of being hurt by a random individual using it to build destructive devices.
Individuals are usually the targets of companies and the government, whereas the government is immune from any blowback for what it does and has, right now, very cozy relationships with large corporations.
> The danger of AI has nothing to do with what the average Joe might try to do
Why not both? Given the story of a high school principal who was framed by a coworker using a deepfaked racist, antisemitic rant the principal never actually said, I'd say the danger of AI also has to do with what an average Joe who wants to cause you harm can do. That doesn't diminish the threat from corporations, but a jilted lover can now ruin your life in additional ways.
In that particular case it didn't work, but every day people are being dropped from their homeowners/car/health insurance.
Yes, there are dangers there, but they ultimately come down to evidentiary standards. We can't do the thing we always do, where all risk perception is based on extremely rare incidents, so we destroy everyone's privacy while the stuff actually harming people at scale is ignored.
>SB 1047 creates an unaccountable Frontier Model Division that will be staffed by EAs with police powers, and which can throw model developers in jail for the thoughtcrime of doing AI research. It’s being fast-tracked through the state Senate. Since many cloud and AI companies are headquartered in California, this will have worldwide impact.
Of course that is scare propaganda, but when you put it together with what the Federal govt is doing here[0], it makes it pretty clear that the real worry is people having access to "dangerous" information with no oversight. I can imagine policing agencies at every level getting very nervous about lone-wolf or tiny-militia types getting access to information without any triggers flipping and alerting them, and with no way to gather evidence if they do want to arrest them for something.