Hacker News

They are not imitating humans in general. They are imitating the statistical average of many human-written texts. That is not the same thing as imitating the goals of humans.

By imitating speech, the AI may look like it has some goal-oriented behavior, but it only looks that way. And that is precisely the goal of its programmers: to make it look as if the AI has goals.
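To make the "statistical average of texts" point concrete, here is a toy sketch (my own illustration, not a claim about any real LLM): a bigram model that only reproduces the word-pair statistics of its training text. It has no goal; it just samples the next word in proportion to how often that word followed the previous one in the corpus.

```python
import random
from collections import defaultdict

# Toy training corpus; the model can never produce anything
# beyond the statistics of these word pairs.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each previous word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word weighted by corpus frequency.
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# After "the", the only possible continuations are "cat", "mat",
# or "fish", in proportion to the counts above. Nothing here
# decides on a goal or plans toward one.
print(next_word("the"))
```

Real language models are vastly larger and conditioned on long contexts, but the training signal is the same kind of thing: predict the next token well, nothing more.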

It would be possible to build a different type of AI which actually decides on its own goals and then infers what the best actions are to reach those goals. Such an AI would have goals, yes. But language models do not. They are not scored on whether they reached any specific goal in any specific interaction. They have no specific goals.
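The contrast drawn above can be sketched as follows. This is a hypothetical illustration (the goal, actions, and names are all invented): a goal-directed agent is scored on whether it reaches its goal, and it picks actions by simulating which one brings it closest.

```python
# Invented toy setup: the agent's goal is to drive an integer
# state to GOAL. Unlike a language model, success is judged
# against the goal itself, not against resemblance to a corpus.
GOAL = 10

def choose_action(state):
    # Infer the best action by simulating each candidate and
    # picking the one that ends closest to the goal.
    actions = {"inc": state + 1, "dec": state - 1, "stay": state}
    return min(actions, key=lambda a: abs(actions[a] - GOAL))

state = 0
steps = 0
while state != GOAL:
    action = choose_action(state)
    state = {"inc": state + 1, "dec": state - 1, "stay": state}[action]
    steps += 1

print(f"reached {state} in {steps} steps")
```

The scoring function here refers to a world state, not to text statistics; that is the structural difference the comment is pointing at.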

The only goal (of the programmers who wrote the AI) is to fool humans into thinking they are interacting with some entity which has goals, and intelligence.


