
Now you're changing the subject. Knowing something about ML (which is a legitimate, practical field) does not imply any knowledge of "AI safety". Since AI safety (as the grifters use the term) isn't a real thing they're free to make up all sorts of outlandish nonsense, and naive people eat it up. The "AI Impacts" group that you cite is among the worst of the bunch, just some clowns who have the chutzpah to actually ask for donations. Lol.

None of the arguments you've made so far actually touch on any relevant facts; they're just vague arguments from authority. I obviously can't prove that some event will never happen in the future (you can't prove a negative). But this stuff is no different from worrying about an alien invasion. Come on.



>legitimate, practical field

It's a mistake to conflate practicality with legitimacy, e.g. philosophy and pure mathematics are legitimate but impractical fields.

>None of the arguments you've made so far actually touch on any relevant facts, they're just vague arguments from authority.

I've been countering your arguments which sound vaguely authoritative (but don't actually cite any authorities) with some actual authorities.

I also provided a few links with object-level discussion, e.g. this literature review https://arxiv.org/pdf/1805.01109.pdf

There are many AI risk intros -- here is a list: https://www.lesswrong.com/posts/T98kdFL5bxBWSiE3N/best-intro...

I think this is the intro that's most likely to persuade you: https://www.cold-takes.com/most-important-century/

>But this stuff is no different than worrying about an alien invasion.

Why aren't you worried about an alien invasion? Is it because it's something out of science fiction, and science fiction is always wrong? Or do you have specific reasons not to worry, because you've made an attempt to estimate the risks?

Suppose a science fiction author, who's purely focused on entertainment, invents a particular vision of what the future could be like. We can't therefore conclude that the future will be unlike that particular vision. That would be absurd. See https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-s...

Our current world is wild relative to the experience of someone living a few hundred years ago. We can't rule out a particular vision of the future just because it is strange. There have been cases where science fiction authors were able to predict the future more or less accurately.

Based on our discussion so far, it sounds to me as though you haven't actually made any attempt to estimate the risks, or given any thought to the possibility of an AI catastrophe; you're essentially just dismissing it as intuitively too absurd. I've been trying to convince you that it's worth putting some thought into the issue before dismissing it -- hence the citations of authorities etc. Donald Trump's election was intuitively absurd to many people -- but that didn't prevent it from happening.



