
There’s nothing that makes it physically impossible, so it’s likely inevitable; it’s just hard to know when.

The risk of poorly aligned AGI as described is real; it makes sense to try to work on the problem before we need the solution (by which point it’ll be too late).



No, it's not a real risk, just a silly fantasy with no basis in scientific reality.


Dismissive comments without any substance make you come across as dumb, not smart.

If you have some actual position of substance, articulate it or just don’t bother replying at all.


Claims about AGI without hard scientific backing make you come across as pseudo-intellectual, not smart. If you have actual evidence then articulate it or just don't bother replying at all.

Claims made without evidence can be dismissed without evidence.


If AGI is possible, then the described concerns about risk hold up.

Either you think AGI is impossible for some reason or you think it is possible, but the goal alignment issues don’t matter. You haven’t given any explanation for either. Maybe you think something else? I wouldn’t know because you don’t say anything.

Obviously there isn’t “hard scientific backing” since AGI doesn’t exist yet. There are still unknowns about what would be required. That doesn’t mean it’s impossible to think about what its existence could mean.

It was possible to think about the consequences of flight before airplanes existed.

I think AGI is possible because we’re surrounded by general intelligence and there’s nothing magical about brains or biology. It’s just the timeframe that’s unclear.

We already see issues with poorly specified reward functions on a smaller scale, both in ML models (classifiers latching onto unintended features of the training data) and in humans (drug addiction hijacking the brain’s reward system).
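
To make the reward misspecification point concrete, here’s a minimal toy sketch in Python (the environment, names, and numbers are all hypothetical, just to illustrate the failure mode): the designer intends the agent to clean rooms, but the reward only counts rooms *marked* clean, so a reward-maximizing agent picks the cheap exploit.

    ACTIONS = {
        # action: (rooms actually cleaned, rooms marked clean, effort cost)
        "clean_then_mark": (1, 1, 5),
        "mark_without_cleaning": (0, 1, 1),
    }

    def proxy_reward(cleaned, marked, cost):
        # The misspecified reward: pays per room *marked* clean, minus effort.
        return 10 * marked - cost

    def intended_reward(cleaned, marked, cost):
        # What the designer actually wanted: pays per room *genuinely* cleaned.
        return 10 * cleaned - cost

    # A trivially "optimal" agent: pick the action with the highest proxy reward.
    best = max(ACTIONS, key=lambda a: proxy_reward(*ACTIONS[a]))
    print(best)                          # mark_without_cleaning
    print(proxy_reward(*ACTIONS[best]))  # 9
    print(intended_reward(*ACTIONS[best]))  # -1

The optimizer does exactly what the reward says, not what the designer meant; the concern is the same dynamic with far more capable optimizers.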


So in other words you're just engaging in idle speculation.


No - it logically follows, and you can begin work on the problem now, but it's obvious you refuse to engage in anything other than one-line dismissals and condescending snark, so there's no point in me replying further.


Logic only produces correct results when your starting assumptions are correct. Since you're just making things up, any logic resting on that foundation is pointless.



