Elon Musk is part of a big framework we've created to stop things from going off the rails. It has humans in the loop at pretty much every step, and it still sometimes goes wrong; working conditions in Amazon fulfillment centers are one example. You can argue about whether that's offset by the benefit of cheaper and faster shipping. But even then, there are definitely cases where we outsourced our desires to organizations that went on to do bad things, even with humans running them.
Why would you expect an AI to listen to you? We can train them, but it's not clear whether, once they get smart enough, they'll learn to do what we want or just learn to pretend really well. A child who is punished too much will generally learn to lie very convincingly, because it's too hard to avoid every possible misstep. Is that what's going to happen with AGI? It would be good to figure that out before we build one that's a lot smarter than us!