They are not imitating humans in general. They are imitating the statistical average of many human-written texts. That is not the same thing as imitating the goals of humans.
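To make that concrete, here is a minimal sketch of what "imitating the statistics of texts" means, using a toy bigram model. Everything here (the corpus, the function names) is made up for illustration, not any real system's code:

```python
# A toy "language model": it only reproduces the statistics
# of its training texts. It has no goal beyond that.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the mouse",
]

# Count how often each word follows each other word across all texts.
follows = defaultdict(Counter)
for text in corpus:
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most common continuation seen in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- the most frequent continuation, not a chosen goal
```

Real language models are vastly bigger and condition on long contexts instead of one word, but the objective is the same kind of thing: predict the continuation that the training texts make most likely.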
By imitating that speech, the AI may appear to have goal-oriented behavior, but it only looks that way. And that is precisely the goal of its programmers: to make it look like the AI has goals.
It would be possible to build a different type of AI that actually decides on its own goals and then infers the best actions to take to reach them. Such an AI would have goals, yes. But language models do not. They are not scored on whether they reached any specific goal in any specific interaction; they are scored on how well they predict the next piece of text. They have no specific goals.
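The contrast can be sketched in a few lines. Below, a hypothetical goal-directed agent searches for an action sequence that reaches a goal state, while the language-model-style score only measures prediction accuracy, with no notion of a goal. The toy world and both scoring functions are invented for illustration:

```python
# Goal-directed agent vs. prediction-scored model (both toy examples).
from collections import deque

# Toy world: states and which states each one can move to.
world = {
    "start": ["hall"],
    "hall": ["kitchen", "garden"],
    "kitchen": ["pantry"],
    "garden": [],
    "pantry": [],
}

def plan(start, goal):
    """Goal-directed: breadth-first search for a path that reaches the goal."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in world[path[-1]]:
            queue.append(path + [nxt])
    return None

def lm_score(predicted, actual):
    """LM-style: scored per token against the text; no goal is ever consulted."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

print(plan("start", "pantry"))                   # ['start', 'hall', 'kitchen', 'pantry']
print(lm_score(["the", "cat"], ["the", "dog"]))  # 0.5 -- accuracy, not goal attainment
```

The planner is evaluated by whether it reaches the goal; the model is evaluated token by token against the text, which is the whole point of the distinction.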
The only goal (of the programmers who wrote the AI) is to fool humans into thinking they are interacting with an entity that has goals and intelligence.