
The elephant in the room is legal liability. If something happens involving an employee with a criminal record, the question gets raised: "what precautions did you take to keep this dangerous person out of your workplace?"


And the hypothetical employer's answer to that question, in the model proposed by GP commenter, would be "I did all that was permitted by law, which of course did not include any right to access information on fully served criminal sentences", and the employer would thus rightfully be exempt from liability.

If, as I understand is the case in the USA, employers are allowed to retrieve a prospective employee's criminal record even after the sentence has been served, that's where one could argue the employer could be criminally liable for future wrongdoing by that employee.


Does that really cause legal liability, though? The state/federal entity that released them from prison is essentially saying 'okay, we think this person has paid their dues and has a good chance at being a productive member of society.'


You have a lot of faith in public opinion. What would your family and friends think if they found out a teacher at your child’s high school had done 20 years?


I get the sentiment, and due diligence such as background checks is required for many public trust positions for that reason. But is legal liability really created at the moment of hiring someone because of their record, or does hiring someone who was convicted just look worse than hiring someone who was not?


Who cares what they think; would a judge consider me liable because I hired the ex-felon? If so, aren't they admitting that the criminal system shouldn't be trusted?


You’re talking about the criminal system, I’m talking about civil suits.


> This is where institutions like universities, governments, etc. come in.

Science was doing pretty well before it became institutionalized in the early 20th century. It's not without tradeoffs, but these aren't essential components.


Of course rare books are valuable. The point is that if you want to buy a physical book you probably will pay $10-15 more for the nice version. The market for the cheap entry is smaller.


Your value to your company is also not a linear function of your time there. There are high fixed costs to training, liability, insurance, etc. They are paying you to always be available, etc.

With that said, I think it's very possible to find a much easier development job with a lower salary. You should be able to meet performance expectations in very little time.


You actually don't need any of that stuff, you can just write .js files and run them in node. The node.js standard library is quite extensive.


It's much much better now, especially if you are willing to adopt the experimental modules.


We can do polynomial regression on data sets that look equally plausible, but it's not real data.
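To make that concrete, here's a sketch (the data points are invented for illustration): a least-squares polynomial fit via the normal equations, run on fabricated points that sit exactly on a quadratic. The fit comes out perfect, which says nothing about whether the underlying data reflects reality.

```javascript
// Least-squares polynomial fit: solve the normal equations (A^T A) c = A^T y
// for a Vandermonde matrix A, using Gaussian elimination with partial pivoting.
function polyfit(xs, ys, degree) {
  const n = degree + 1;
  // Augmented matrix M = [A^T A | A^T y]; (A^T A)[r][c] = sum of x^(r+c).
  const M = Array.from({ length: n }, () => new Array(n + 1).fill(0));
  for (let r = 0; r < n; r++) {
    for (let c = 0; c < n; c++) {
      M[r][c] = xs.reduce((s, x) => s + x ** (r + c), 0);
    }
    M[r][n] = xs.reduce((s, x, i) => s + x ** r * ys[i], 0);
  }
  // Forward elimination with partial pivoting.
  for (let i = 0; i < n; i++) {
    let p = i;
    for (let r = i + 1; r < n; r++) if (Math.abs(M[r][i]) > Math.abs(M[p][i])) p = r;
    [M[i], M[p]] = [M[p], M[i]];
    for (let r = i + 1; r < n; r++) {
      const f = M[r][i] / M[i][i];
      for (let c = i; c <= n; c++) M[r][c] -= f * M[i][c];
    }
  }
  // Back substitution.
  const coeffs = new Array(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = M[i][n];
    for (let c = i + 1; c < n; c++) s -= M[i][c] * coeffs[c];
    coeffs[i] = s / M[i][i];
  }
  return coeffs; // [c0, c1, c2, ...] for c0 + c1*x + c2*x^2 + ...
}

// Made-up points lying exactly on y = 1 + 2x + 3x^2. The regression
// recovers the curve; plausibility of the fit proves nothing about the data.
const xs = [0, 1, 2, 3, 4];
const ys = xs.map(x => 1 + 2 * x + 3 * x * x);
const coeffs = polyfit(xs, ys, 2);
console.log(coeffs.map(c => Math.round(c * 1000) / 1000));
```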


Curating and presenting facts is a form of narrative and is not at all objective.


The sky is not one color and changes color depending on weather, sun, and global location.


> but if you watch how they act, how they vote, and how they discriminate it could not be farther from the truth.

This is a low quality and bigoted comment.


"wow our software is so powerful, it's going to take over the world!"


yes, just like "our nuclear bombs are so powerful, they could wipe out civilisation", which led to strict regulation around them and lack of open-source nuclear bombs


It will never stop being funny to me that people are straight-facedly drawing a straight line between shitty text completion computer programs and nuclear weapon level existential risk.


>shitty text completion computer programs

There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future. It renders them completely incapable of anticipating significant changes regardless of how clear the trends are.

No, no one is afraid of LLMs as they currently exist. The fear is about what comes next.


> There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future.

It is refreshing to see somebody explicitly call out people that disagree with me about AI as having fundamentally inferior psyches. Their inability to picture the same exact future that terrifies me is indicative of a structural flaw.

One day society will suffer at the hands of people that have the hubris to consider reality as observed as a thing separate from what I see in my dreams and thought experiments. I know this is true because I’ve taken great pains to meticulously pre-imagine it happening ahead of time — something that lesser psyches simply cannot do.


"Looks at all the other species 'intelligent' humans have driven extinct" --ha ha ha ha

Why the shit would we not draw a straight line?

If we fail to create digital intelligence then yeah, we can hem and haw in conversations like this online forever. But you neglect that if we succeed, then 'shit gets real quick'. Closing your eyes and ears and saying "this can't actually happen" sounds like a pretty damned dumb take on future risk assessment of technology, when pretty much every take on AI says "well, yeah, this is something that could potentially happen".


Literally the thing people are calling "AI" is a program that, given some words, predicts the next word. I refuse to entertain the absolutely absurd idea that we're approaching a general intelligence. It's ludicrous beyond belief.


Then this is your failure, not mine, and not a failure of current technology.

I can, right now, upload an image to an AI, ask "Hey, what do you think the emotional state of the person in this image is?", and get a pretty damned accurate answer. Given other images, I can have the AI describe the scene and make pretty damned accurate assessments of how the image could have come about.

If this is not general intelligence I simply have no guess as to what will be enough in your case.


Modern generative AI functionality is hardly limited to predicting words. Have you not heard of e.g. Midjourney?


By "approaching" do you mean "likely to achieve it this century"?


Which is interesting, because after the fall of the Soviet Union there was rampant fear about where their nukes ended up and whether some rogue country could get their hands on them through the black market.

Then through the '90s, it was the fear of a briefcase-bomb terrorist attack, and how easy it would be for certain countries with the resources to pull off an attack like that in the NYC subway or in the heart of another densely populated city.

Then 9/11 happened and people suddenly realized you don't need a nuke to take out a few thousand innocent people and cripple a nation with fear.


Yes, just like... the exact opposite. One is a bomb, the other a series of mostly open source statistical models. What kind of weed are you guys on that's made you so paranoid about statistics?


Last time I checked, my statistical modeling book didn't have the ability to write Python code.

And a nuclear bomb is just a bunch of atoms. Do you fear atoms? What the hell.

