I don't think the LLM value prop here is to build lots of cards. It's just to interact with the model conversationally. If the model is wrong in this one translation, I'm not exactly going to "commit it to memory"; I'm just going to keep on carrying on. I don't know that the LLM's mistakes are particularly worse than the many and sundry other mistakes I'm already continuously making as a language learner anyhow.
If I could speak a foreign language as well as an 8B-parameter LLM, hallucinations and all, I'd be immensely ahead of where I am now. It's not like second languages aren't themselves often broken in somewhat similar ways.