
AI safety is not a legitimate field. You have wasted your time. It's just a bunch of grifters posting alarmist tweets with no scientific evidence.

You might as well be following "unicorn safety" or "ghost safety".



Do you think Stuart Russell (coauthor, with Peter Norvig, of the widely used textbook Artificial Intelligence: A Modern Approach) is a grifter? https://people.eecs.berkeley.edu/~russell/research/future/

Does this review look like it only covers alarmist tweets? https://arxiv.org/pdf/1805.01109.pdf


Yes, Stuart Russell is a grifter. Some of the more advanced grifters have gone beyond tweeting and are now shilling low-effort books in an attempt to draw attention to themselves. Don't be fooled.

If we want to talk about problems with biased data sets, or about using inappropriate AI algorithms for safety-critical applications, then sure, let's address those issues. But the notion of some super-intelligent computer coming to take over the world and kill everyone is just a stupid fantasy with no scientific basis.


Stuart Russell doesn't even have a Twitter account. Isn't it possible that Russell actually believes what he says, and he's not primarily concerned with seeking attention?


Some of the more ambitious grifters have gone beyond Twitter and expanded their paranoid fantasies into book form. Whether they believe their own nonsense is irrelevant. The schizophrenic homeless guy who yells at the river near my house may be sincere in his beliefs, but I don't take him seriously either.

Let's stick to objective reality and focus on solving real problems.


Do you think you know more about AI than Stuart Russell?

Do you believe you are significantly more qualified than the ML researchers in this survey (researchers who have published at NeurIPS/ICML)?

>69% of [ML researcher] respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016.

https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml...

Just because a concern is speculative does not mean it is a "paranoid fantasy".

"Housing prices always go up. Let's stick to objective reality and focus on solving real problems. There won't be any crash." - your take on the housing market in 2007

"Just because the schizophrenic homeless guy thinks Trump will be elected, does not mean he has a serious chance." - your take on Donald Trump in early 2016

"It's been many decades since the last major pandemic. Concern about the new coronavirus is a paranoid fantasy." - your take on COVID in late 2019/early 2020

None of the arguments you've made so far actually touch on any relevant facts; they're just vague arguments from an authority that (so far as you've demonstrated here) you don't actually have.

When it comes to assessing unusual risks, it's important to consider the facts carefully instead of dismissing risks only because they've never happened before. Unusual disasters do happen!


Now you're changing the subject. Knowing something about ML (which is a legitimate, practical field) does not imply any knowledge of "AI safety". Since AI safety (as the grifters use the term) isn't a real thing they're free to make up all sorts of outlandish nonsense, and naive people eat it up. The "AI Impacts" group that you cite is among the worst of the bunch, just some clowns who have the chutzpah to actually ask for donations. Lol.

None of the arguments you've made so far actually touch on any relevant facts; they're just vague arguments from authority. I obviously can't prove that some event will never happen in the future (can't prove a negative). But this stuff is no different than worrying about an alien invasion. Come on.


>legitimate, practical field

It's a mistake to conflate practicality with legitimacy, e.g. philosophy and pure mathematics are legitimate but impractical fields.

>None of the arguments you've made so far actually touch on any relevant facts; they're just vague arguments from authority.

I've been countering your arguments which sound vaguely authoritative (but don't actually cite any authorities) with some actual authorities.

I also provided a few links with object-level discussion, e.g. this literature review https://arxiv.org/pdf/1805.01109.pdf

There are many AI risk intros -- here is a list: https://www.lesswrong.com/posts/T98kdFL5bxBWSiE3N/best-intro...

I think this is the intro that's most likely to persuade you: https://www.cold-takes.com/most-important-century/

>But this stuff is no different than worrying about an alien invasion.

Why aren't you worried about an alien invasion? Is it because it's something out of science fiction, and science fiction is always wrong? Or do you have specific reasons not to worry, because you've made an attempt to estimate the risks?

Suppose a science fiction author, who's purely focused on entertainment, invents a particular vision of what the future could be like. We can't therefore conclude that the future will be unlike that particular vision. That would be absurd. See https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-s...

Our current world is wild relative to the experience of someone living a few hundred years ago. We can't rule out a particular vision of the future just because it is strange. There have been cases where science fiction authors were able to predict the future more or less accurately.

Based on our discussion so far, it sounds to me as though you haven't made any actual attempt to estimate the risks, or given any thought to the possibility of an AI catastrophe; you're essentially just dismissing it as intuitively too absurd. I've been trying to convince you that it is actually worth putting some thought into the issue before dismissing it -- hence the citations of authorities etc. Donald Trump's election was intuitively absurd to many people -- but that didn't prevent it from happening.


That's just what a super-intelligent AI would say.. hmmmm...



