Oh yeah, I agree that it _could_ do all those things, but it would be a bit of overkill to always send every observation an agent encounters to the API/chat box and ask it to spit out an evaluation or action.
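
For illustration, a minimal sketch of that kind of gating, where a cheap local heuristic decides whether an observation is worth an API round trip at all. score_importance and llm_evaluate are hypothetical stand-ins, not anything from the paper:

    IMPORTANCE_THRESHOLD = 0.6

    def score_importance(observation: str) -> float:
        # Cheap local heuristic: keyword salience. A small classifier
        # or embedding similarity would work here too.
        salient = ("danger", "quest", "player", "fire")
        return 1.0 if any(w in observation.lower() for w in salient) else 0.1

    def llm_evaluate(observation: str) -> str:
        # Stand-in for a real chat-completion call.
        return f"ACTION: investigate ({observation!r})"

    def handle(observation: str) -> str | None:
        if score_importance(observation) < IMPORTANCE_THRESHOLD:
            return None  # skip the model entirely for mundane events
        return llm_evaluate(observation)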

This paper does a nice job of separating the "agency" from the underlying next-word-given-context predictor. I think that's why I like the paper: it is just ChatGPT, in the same way that pizza is just dough, sauce, and cheese.




Yes, but I think this was a fairly obvious conclusion to arrive at, wasn't it?

If you were going to seriously consider using ChatGPT for AI in a game, you would need each instance of GPT to know only the information it has actually gathered. And you would want it to reflect on those observations to come up with new thoughts that weren't directly observed.
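
A rough sketch of that shape, per-agent memory plus a reflection step that feeds new thoughts back into memory. llm_complete is a hypothetical stand-in for whatever chat API you'd actually call:

    import time
    from dataclasses import dataclass, field

    def llm_complete(prompt: str) -> str:
        # Stand-in for a real chat-completion call.
        return "I should keep an eye on the market square."

    @dataclass
    class Agent:
        name: str
        memory: list[str] = field(default_factory=list)  # only what THIS agent saw

        def observe(self, event: str) -> None:
            self.memory.append(f"{time.time():.0f}: {event}")

        def reflect(self) -> str:
            # Summarize recent memories into a new thought; the thought goes
            # back into memory so later reflections can build on it.
            prompt = "What do you conclude from:\n" + "\n".join(self.memory[-20:])
            thought = llm_complete(prompt)
            self.memory.append(f"thought: {thought}")
            return thought

Each Agent only ever sees what went through its own observe(), which covers the "only knows what it has gathered" requirement.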

Still, I'd argue you don't really even need GPT for any of the above. GPT is useful if you want thoughts expressed as natural language, but you could just as easily encode observations and thoughts in an appropriate abstract data structure and get the same behavior. It's a bit harder to appreciate, since asking an NPC a question in natural language and getting back a raw query result isn't user friendly, but it can be just as amazing if you know what the data represents. The imprecision and fuzziness of an LLM does leave room for fun weirdness, though.
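
For example, a minimal LLM-free version, where "asking" the NPC is just a query over structured records (all names here are illustrative):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Observation:
        subject: str    # who/what was observed
        predicate: str  # what happened
        tick: int       # in-game time

    class NpcMemory:
        def __init__(self) -> None:
            self.facts: list[Observation] = []

        def record(self, obs: Observation) -> None:
            self.facts.append(obs)

        def query(self, subject: str) -> list[Observation]:
            # "Asking" the NPC about a subject returns structured records
            # rather than natural language.
            return [f for f in self.facts if f.subject == subject]

    mem = NpcMemory()
    mem.record(Observation("player", "stole bread", tick=42))
    print(mem.query("player"))
    # [Observation(subject='player', predicate='stole bread', tick=42)]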



