The elephant in the room is legal liability. If something happens with an employee who has a criminal record, the question will be raised: "what precautions did you take to keep this dangerous person out of your workplace?"
And the hypothetical employer's answer to that question, in the model proposed by the GP commenter, would be "I did everything permitted by law, which of course did not include any right to access information on fully served criminal sentences," and thus the employer would rightfully be exempt from liability.
If, as I understand is the case in the USA, employers are allowed to retrieve a prospective employee's criminal record even after they have served their sentence, that is where one could argue the employer could be criminally liable for future wrongdoing by their employee.
Does that really cause legal liability, though? The state/federal entity that released them from prison is essentially saying 'okay, we think this person has paid their dues and has a good chance at being a productive member of society.'
You have a lot of faith in public opinion. What would your family and friends think if they found out a teacher at your child’s high school had done 20 years?
I get the sentiment, and there is due diligence, such as background checks, required for many public-trust positions for that reason. But is legal liability really created at the moment of hiring someone because of their record, or does it just satisfy the models more when you hire someone who was convicted versus someone who was not?
Who cares what they think; would a judge consider me liable because I hired the ex-felon? If so, aren't they admitting that the criminal justice system shouldn't be trusted?
> This is where institutions like universities, governments, etc. come in.
Science was doing pretty well before it became institutionalized in the early 20th century. It's not without tradeoffs, but these aren't essential components.
Of course rare books are valuable. The point is that if you want to buy a physical book you will probably pay $10-15 more for the nice version. The market for the cheap edition is smaller.
Your value to your company is also not a linear function of your time there. There are high fixed costs to training, liability, insurance, etc. They are paying you to always be available, etc.
With that said, I think it's very possible to find a much easier development job with a lower salary. You should be able to meet performance expectations in very little time.
yes, just like "our nuclear bombs are so powerful, they could wipe out civilisation", which led to strict regulation around them and a lack of open-source nuclear bombs
It will never stop being funny to me that people are straight-facedly drawing a straight line between shitty text completion computer programs and nuclear weapon level existential risk.
There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future. It renders them completely incapable of anticipating significant changes regardless of how clear the trends are.
No, no one is afraid of LLMs as they currently exist. The fear is about what comes next.
> There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future.
It is refreshing to see somebody explicitly call out people that disagree with me about AI as having fundamentally inferior psyches. Their inability to picture the same exact future that terrifies me is indicative of a structural flaw.
One day society will suffer at the hands of people that have the hubris to consider reality as observed as a thing separate from what I see in my dreams and thought experiments. I know this is true because I’ve taken great pains to meticulously pre-imagine it happening ahead of time — something that lesser psyches simply cannot do.
"Looks at all the other species 'intelligent' humans have extincted" --ha ha ha ha
Why the shit would we not draw a straight line?
If we fail to create digital intelligence then yeah, we can hem and haw in conversations like this online forever, but you neglect that if we succeed then 'shit gets real quick'. Closing your eyes and ears and saying "This can't actually happen" sounds like a pretty damned dumb take on future risk assessment of a technology when pretty much every take on AI says "well, yeah, this is something that could potentially happen".
Literally the thing people are calling "AI" is a program that, given some words, predicts the next word. I refuse to entertain the absolutely absurd idea that we're approaching a general intelligence. It's ludicrous beyond belief.
Then this is your failure, not mine, and not a failure of current technology.
I can, right now, upload an image to an AI, ask "Hey, what do you think the emotional state of the person in this image is?", and get a pretty damned accurate answer. Given other images, I can have the AI describe the scene and make pretty damned accurate assessments of how the image could have come about.
If this is not general intelligence I simply have no guess as to what will be enough in your case.
Which is interesting because after the fall of the Soviet Union, there was rampant fear of where their nukes ended up and if some rogue country could get their hands on them via some black market means.
Then through the '90s, the fear was of a briefcase-bomb terrorist attack, and how easy it would be for certain countries with the resources to pull off an attack like that in the NYC subway or in the heart of another densely populated city.
Then 9/11 happened and people suddenly realized you don't need a nuke to take out a few thousand innocent people and cripple a nation with fear.
Yes, just like... the exact opposite. One is a bomb, the other a series of mostly open source statistical models. What kind of weed are you guys on that's made you so paranoid about statistics?