Hacker News

To me, this is exactly what LLMs are good for. It would be exhausting double checking for valid citations in a research paper. Fuzzy comparison and rote lookup seem primed for usage with LLMs.

Writing academic papers is exactly the _wrong_ usage for LLMs. So here we have a clear cut case for their usage and a clear cut case for their avoidance.





If LLMs produce fake citations, why would we trust LLMs to check them?

Because the risk is lower. The LLM flags suspicious citations, and you manually check those for false positives. Even if some false citations slip through, it's still a net gain.

Because my boss said if I don't, I'm fired.

Exactly, and there's nothing wrong with using LLMs in this same way as part of the writing process to locate sources (that you verify), do editing (that you check), etc. It's just peak stupidity and laziness to ask it to do the whole thing.

Shouldn’t need an LLM to check. It’s just a list of authors. I wouldn’t trust an LLM on this, and even if they were perfect, that’s a lot of resource use just to do something traditional code could do.
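The "traditional code" approach could be sketched roughly like this: fuzzy-match each cited reference against known bibliographic records and flag anything with no close match. This is a minimal illustration using Python's standard-library `difflib`, assuming you already have the metadata records locally (a real pipeline would pull them from a bibliographic database); the function names and the 0.8 threshold are arbitrary choices for the sketch.

```python
from difflib import SequenceMatcher

def citation_matches(cited: str, record: str, threshold: float = 0.8) -> bool:
    """Fuzzy-compare a cited reference string against a metadata record."""
    ratio = SequenceMatcher(None, cited.lower(), record.lower()).ratio()
    return ratio >= threshold

def flag_suspicious(citations, records, threshold=0.8):
    """Return citations that don't closely match any known record."""
    return [c for c in citations
            if not any(citation_matches(c, r, threshold) for r in records)]
```

A near-exact citation (modulo capitalization) passes, while one with no matching record gets flagged for manual review, which is all the existence/author check needs.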

I would assume you would use the LLM not only to check that the source exists but to check that the cited source actually supports the claim the author attributes to it. That's not something you can do heuristically, I would think.


