
Looking through the repo and reading the docs, an LLM appears to be part of the implementation. LLMs cannot explain their reasoning, so if an LLM is involved, can the system as a whole really explain its reasoning, when part of the system is a black box? At best, the reasoning can be explained up to the point where the LLM comes into play, and again afterwards for whatever is done with the LLM's output.
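
A minimal sketch of the split described in the comment above, assuming a hypothetical retrieve → LLM → postprocess pipeline (none of these names come from the repo in question): the deterministic stages can record a faithful account of what they did, while the trace entry for the LLM call is only a disclaimer.

    # Hypothetical pipeline, not taken from the repo under discussion.
    # Deterministic stages record a faithful explanation of what they did;
    # the LLM stage cannot, so its trace entry is only a disclaimer.

    KNOWLEDGE_BASE = [
        "The service retries failed requests three times.",
        "Retries use exponential backoff starting at 200 ms.",
    ]

    def retrieve_facts(query: str) -> tuple[list[str], str]:
        # Explainable: keyword match, and the explanation says exactly that.
        words = query.lower().split()
        facts = [f for f in KNOWLEDGE_BASE if any(w in f.lower() for w in words)]
        return facts, f"matched {len(facts)} facts by keyword overlap with the query"

    def fake_llm(prompt: str) -> tuple[str, str]:
        # Stand-in for the real model call: the black-box step. Whatever it
        # returns, there is no faithful account of how it was produced.
        answer = "Requests are retried three times with exponential backoff."
        return answer, "LLM output; internal reasoning is not inspectable"

    def postprocess(answer: str) -> tuple[str, str]:
        # Explainable again: a simple, fully described rule applied to the output.
        return answer.strip(), "stripped surrounding whitespace only"

    def run(query: str) -> tuple[str, list[str]]:
        trace = []
        facts, why = retrieve_facts(query)
        trace.append("retrieve: " + why)
        answer, why = fake_llm(f"Facts: {facts}\nQuestion: {query}")
        trace.append("llm: " + why)  # the gap in the end-to-end explanation
        final, why = postprocess(answer)
        trace.append("post: " + why)
        return final, trace

    if __name__ == "__main__":
        answer, trace = run("how are retries handled")
        print(answer)
        print("\n".join(trace))

Running it prints the answer plus a three-entry trace; the middle entry is where the end-to-end explanation breaks down, which is the point the comment is making.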



Can you explain your reasoning?



