In general, a citation is something that needs to be precise, while LLMs are very good at generating generic, high-probability text that isn't grounded in reality. Sure, you could implement a custom fix for the very specific problem of citations, but you cannot solve every kind of hallucination that way. After all, if you could develop a manual solution, you wouldn't need an LLM in the first place.
There are some mitigations in use, such as RAG or tool usage (e.g. a browser), but they don't completely fix the underlying issue.
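To make the RAG point concrete: what these pipelines typically add is a post-hoc grounding check, where each cited source is verified against the claim it's attached to. A minimal sketch in TypeScript, with all names hypothetical and naive word overlap standing in for the entailment model a real system would use:

```typescript
// Hypothetical post-hoc grounding check for RAG output: every claim
// the model cites must be backed by a retrieved passage. All names
// (RetrievedPassage, verifyCitations, ...) are illustrative, not
// taken from any real library.

interface RetrievedPassage {
  id: number;        // citation index, e.g. 1 for [1]
  text: string;      // text of the retrieved chunk
}

interface CitedClaim {
  claim: string;     // sentence the model produced
  sources: number[]; // citation indices attached to it
}

// Crude proxy for "supports": enough content-word overlap between the
// passage and the claim. A real check would use an entailment model.
function supports(passage: string, claim: string): boolean {
  const words = (s: string) =>
    new Set(s.toLowerCase().match(/[a-z]{4,}/g) ?? []);
  const passageWords = words(passage);
  const claimWords = [...words(claim)];
  const hits = claimWords.filter((w) => passageWords.has(w)).length;
  return claimWords.length > 0 && hits / claimWords.length >= 0.5;
}

// Returns the claims whose citations do NOT check out.
function verifyCitations(
  claims: CitedClaim[],
  passages: RetrievedPassage[],
): CitedClaim[] {
  const byId = new Map(
    passages.map((p): [number, string] => [p.id, p.text]),
  );
  return claims.filter(
    (c) => !c.sources.some((id) => supports(byId.get(id) ?? "", c.claim)),
  );
}

// Example: the citation is fabricated, so the claim gets flagged.
const flagged = verifyCitations(
  [{ claim: "Computational modelling predicts epidemic courses.", sources: [1] }],
  [{ id: 1, text: "A study of sourdough fermentation temperatures." }],
);
console.log(flagged.length); // 1
```

A check like this catches only the crudest fabrications, which is the point: it narrows the failure mode without removing it.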
You don't have much experience in it, do you? The real-world peer review process could not be further from what you are describing.
Source: I've personally been involved in peer reviewing in fields as diverse as computer science, quantum physics and applied animal biology. I've recently left those fields in part because of how terrible some of the real-world practices are.
And of course anyone with sufficient knowledge in a domain is freed from human foibles like racism, sexism, etc. Just dry expertise, applied without bias or agenda.
This is a problem with journalism and politics; it's not really about science. No scientist would trust a result that depends on a single small-sample paper. Such papers are just stepping stones that may justify further research toward more robust evidence. This is quite clear to scientists, and it's why most would discourage the general public (including smart engineers) from reading academic articles.
But in general, I agree with you. It's ridiculous when someone tries to shut down a complex debate by citing a single random paper. However, an expert can still analyze the whole academic literature on a topic and determine what the scientific consensus is and how confident we are about it.
When I checked how people were citing these useless papers, almost invariably it would be in a sentence like this:
"Computational modelling is a useful technique for predicting the course of epidemics [1][2][3][4][5]"
The cited papers wouldn't actually support the statement because they'd all be unvalidated models, but citing documents that don't support the claim is super common and doesn't seem to bother anyone :( Having demonstrated a "consensus" that publishing unvalidated simulations is "useful", they would then go ahead and do another one, which would then be cited in the same way ad infinitum.
I disagree. A scientist could read a single paper and notice that n is small, or identify a flaw.
But there are loads of papers like this.
Then you have literature reviews that look at all these papers together and aggregate their results.
Then you get some "proper" studies which cite these aggregates, plus several small studies, and you're going to read these "proper" studies, which are quoted often and deemed decent or good quality.
And at no point will you realise it’s all based on shoddy foundations.
This is, for example, what recently happened in social psychology (the replication crisis).
AlphaFold is also a high-impact discovery, while Hopfield networks have very little to do with modern AI; right now they are only a very interesting toy model.
Many people arguing that AI risk is real have big monetary incentives. Some are asking for money to study safety and influence the regulatory bodies. Others gain money because believing in AI superintelligence makes their AI startup look like a great investment. The true believers like Bengio are a smaller subset.
> Others gain money because believing in AI superintelligence makes their AI startup look like a great investment.
And still others are hoping that the fear will lead policy makers to build up a regulatory moat that they'll be able to navigate but their up-and-coming competitors won't.
It remains highly interesting, and indicative of something, that researchers in other fields orient almost entirely around "give us money to improve our technology," not "give us money to research how to fix our technology, and pause it in the meantime."
You don't need a university degree if you just want to learn the latest JavaScript frontend framework; a good coding bootcamp can teach you that. Credential inflation is not a problem that universities created; it is an industry issue.
Peer review goes beyond simple issues of clarity or misunderstanding. In particular, it is sometimes an adversarial process.
Often, the reviewer will not understand because he is not the intended audience. Other times, he will understand but just doesn't like your method, because he is working in an opposing direction. Or maybe your method is a direct competitor of his and yours works better, which incentivizes some people to block your work.
> Other times, he will understand but just doesn't like your method, because he is working in an opposing direction. Or maybe your method is a direct competitor of his and yours works better, which incentivizes some people to block your work.
Oh you mean those phantom "off topic"/"out of scope" reviews.
I use mathpix (https://mathpix.com/) quite often to copy equations from papers and it works very well, but I don't know how good it is with handwritten equations.
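For batch use they also have an HTTP API that returns LaTeX. A rough TypeScript sketch, based on the v3/text endpoint as I remember it from their docs (treat the endpoint and field names as assumptions and check the current documentation before relying on this):

```typescript
// Rough sketch of calling the Mathpix OCR API from a script to get
// LaTeX back for an equation image. Endpoint and field names follow
// the v3/text API as I remember it; verify against current docs.
// MATHPIX_APP_ID / MATHPIX_APP_KEY are your own credentials.
import { readFileSync } from "node:fs";

async function imageToLatex(path: string): Promise<string> {
  const b64 = readFileSync(path).toString("base64");
  const res = await fetch("https://api.mathpix.com/v3/text", {
    method: "POST",
    headers: {
      app_id: process.env.MATHPIX_APP_ID!,
      app_key: process.env.MATHPIX_APP_KEY!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      src: `data:image/png;base64,${b64}`,
      formats: ["text", "latex_styled"], // latex_styled = clean LaTeX
    }),
  });
  const json = await res.json();
  return json.latex_styled ?? json.text; // fall back to plain text
}

imageToLatex("equation.png").then(console.log);
```

Whether it handles handwriting as well as printed equations is exactly the kind of thing you'd want to test with a script like this.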
I never understand this argument when it's used against Obsidian. First, free != FOSS. Second, while the app is closed source, Obsidian has a large open source community that develops plugins for it. Third, the format (markdown) is open, so you have much more freedom than with some FOSS applications that use custom formats.
Open plugins are meaningless when the platform itself isn't open.
There is also lots of markdown-centric note software, so it's not too much of a draw for Obsidian.
Personally, I'd never use a proprietary note app; having to switch if the company shutters is too much, whereas FOSS apps can at least be forked. That said, I've not heard anything bad about the Obsidian team; maybe they'd open-source it if they shut down. They also don't seem like the type to rug-pull their customers. But you know how many companies people have said that about.
Open plugins do have a meaning for many developers even if the platform itself isn't open. The meaning is: developers can easily find reference code to develop their own plugins and can easily patch open plugins when they break.
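For a sense of scale, a working plugin fits in a screenful. This sketch uses Plugin, addCommand, Notice, and the vault API from Obsidian's documented TypeScript API; the word-count command itself is just a made-up example:

```typescript
// Minimal example Obsidian plugin: adds a command that counts the
// words in the active file. Plugin, Notice, addCommand, and the
// vault/workspace calls are part of Obsidian's public plugin API.
import { Notice, Plugin } from "obsidian";

export default class WordCountPlugin extends Plugin {
  async onload() {
    this.addCommand({
      id: "count-words-in-active-file",
      name: "Count words in active file",
      callback: async () => {
        const file = this.app.workspace.getActiveFile();
        if (!file) return; // no file open, nothing to count
        const text = await this.app.vault.read(file);
        const words = text.split(/\s+/).filter(Boolean).length;
        new Notice(`${words} words`);
      },
    });
  }
}
```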
You openly acknowledged that Obsidian isn't open source. There's lots of open source software for Windows and macOS, yet neither Windows nor macOS is open source. Strange counter-argument in which you proved my point.
Name these FOSS apps that use custom formats. There are so many Obsidian-likes that are not only FOSS but also use non-proprietary flat markdown files. It sounds like you're drinking too much of the Obsidian community Kool-Aid and trying to convince yourself of things that simply aren't true.