Claims about AGI without hard scientific backing make you come across as pseudo-intellectual, not smart. If you have actual evidence, articulate it; otherwise don't bother replying at all.
Claims made without evidence can be dismissed without evidence.
If AGI is possible, then the described concerns about risk hold up.
Either you think AGI is impossible for some reason, or you think it is possible but that the goal-alignment issues don't matter. You haven't given an explanation for either. Maybe you think something else? I wouldn't know, because you don't say anything.
Obviously there isn’t “hard scientific backing” since AGI doesn’t exist yet. There are still unknowns about what would be required. That doesn’t mean it’s impossible to think about what its existence could mean.
It was possible to think about the consequences of flight before airplanes existed.
I think AGI is possible because we’re surrounded by general intelligence and there’s nothing magical about brains or biology. It’s just the timeframe that’s unclear.
We already see problems caused by poorly specified reward functions on a smaller scale, both in ML models (classifiers latching onto unintended features) and in humans (drug addiction hijacking the brain's reward system). A toy sketch of the ML case is below.
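To make the ML case concrete, here's a minimal sketch in Python with scikit-learn. The make_data helper and all the numbers are invented for illustration: a classifier is trained on data where a spurious feature happens to agree with the label 95% of the time, then evaluated where that correlation is broken. The model leans on the shortcut and its accuracy collapses.

```python
# Hypothetical toy example of a model keying on an unintended feature.
# make_data and every constant here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_agreement):
    # Binary labels, a weak "real" signal, and a spurious feature that
    # matches the label with probability `spurious_agreement`.
    y = rng.integers(0, 2, n)
    real = y + rng.normal(0.0, 2.0, n)  # noisy genuine signal
    spurious = np.where(rng.random(n) < spurious_agreement, y, 1 - y)
    return np.column_stack([real, spurious]), y

# Train where the shortcut works 95% of the time...
X_train, y_train = make_data(5000, 0.95)
# ...test where the shortcut is pure noise (50/50).
X_test, y_test = make_data(5000, 0.50)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # ~0.95, looks great
print("test accuracy :", clf.score(X_test, y_test))    # falls toward chance
```

The real-world analogue is a classifier keying on backgrounds, watermarks, or scanner artifacts instead of the thing you actually asked for - the stated objective was satisfied, the intent wasn't.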
No - it logically follows, and you can begin work on the problem now. But it's obvious you refuse to engage in anything other than one-line dismissals and condescending snark, so there's no point in my replying further.
Logic only produces correct results when your starting assumptions are correct. Since you're just making things up, logic resting on that foundation is pointless.
The risk of poorly aligned AGI as described is real, so it makes sense to try to work on the problem before we need the solution, since by then it'll be too late.