I think having a better understanding of the underlying statistical model behind how these AIs are trained has helped me hold back the wave of fear and anxiety associated with AI risks.
The singularity requires AIs to be very good at doing things people have never done before. But this form of machine learning is bad at that. It's like someone who doesn't actually understand anything has somehow managed to memorize their way through whatever topic you're asking about. They have lots of tips and information, similar to what you might currently find by doing your own research. But they don't seem to have what it takes to push the boundaries of knowledge or understanding, because they never really had that understanding in the first place. Or maybe whatever understanding they do have is minimal compared to the contribution of their memorization.
Obviously you still have the main risks: breaking capitalism, mass unemployment, pollution of public communications, etc. But honestly, each of these is far less scary to me than the existential risk of superintelligence. So in a way I'm actually glad this is happening the way it is right now, and that we don't have to deal with both sets of risks at the same time.
Our current approach is probably the safest way to advance AI that I can think of: improvement requires training a new model, and that model learns entirely from human data. It might not seem like it, but this is actually pretty slow, expensive, and limited compared to how I expected AI to improve based on sci-fi movies or Nick Bostrom's writings (I'm curious what he'd have to say about this resurgence of AI).