Does this just come down to a semantic claim that if something isn't in pursuit of AGI, it's not really AI? That feels unfair to most of these researchers, who would absolutely disagree.
And claiming these algorithms don't "learn" is similarly unfair. They do. They learn to solve specific problems (at least for now), but they do learn.