So you're hoping we'll be able to effectively align an AGI with ethics we ourselves don't even possess?
I recommend Robert Miles on YouTube for a few lessons on the challenges of creating friendly/aligned AIs. (And why your hope here is somewhat ludicrous).
I am very aware of all of the challenges. It was an oversimplification of my thoughts.
I feel that working on the kind of AI that emulates animals, with true autonomy and so on, is a very bad idea in the short term, and totally unnecessary for getting most of the benefits of general-purpose AI.
However, realistically, most people do not recognize this, and even among those who do, many will still pursue this dangerous kind of ultra-humanlike autonomous AGI. So it's going to happen.
In the long run it will be better for the evolution of intelligent life. In the short run it will probably be a disaster for humans.