> Nothing about LLM architecture is biased towards facts.... If they're factual at all, it's only because most things written by people are true
That's like saying "if you observe an LLM perform well, it's only because the idea as a whole is good, rather than any one aspect in isolation." Yeah, duh.
I don't know how technology people or the press can say on the one hand that these AI systems have biases, but on the other that they are incapable of ever having a bias towards, e.g., facts. Of course you can make them do anything; all these problems are surmountable. Tough cookies, NYTimes! But I think they will have the last laugh, because the one thing that is insurmountable is getting an LLM a Columbia Journalism master's or an important dad from Manhattan, which is the only thing that really matters to the NYTimes.
I guess the idea is that facts corroborate, and therefore the most efficient way to learn to reproduce everything is to preferentially learn the things that fit with the rest of what you're learning. This of course assumes that the majority of the training corpus represents truthful behavior; otherwise it becomes more efficient to get really good at making things up.
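To make that concrete, here's a minimal sketch. It's nothing like real LLM training, just a maximum-likelihood counting model over a made-up corpus, but it shows the mechanism: cross-entropy loss rewards matching the corpus distribution, so whatever the majority of the corpus says is what the model preferentially learns.

```python
from collections import Counter
import math

# Hypothetical toy corpus: most documents agree on one answer,
# while fabrications are scattered and mutually inconsistent.
corpus = ["paris"] * 90 + ["lyon"] * 4 + ["mars"] * 3 + ["atlantis"] * 3

# Maximum-likelihood "model": just match the empirical distribution.
counts = Counter(corpus)
total = len(corpus)
model = {w: c / total for w, c in counts.items()}

# Average cross-entropy (the training loss) of this model on the corpus.
loss = -sum(counts[w] / total * math.log(model[w]) for w in counts)
print(f"loss={loss:.3f}, top prediction={max(model, key=model.get)!r}")

# Because the corroborated answer dominates the corpus, the loss-minimizing
# model puts most of its mass there. Flip the 90/10 split so fabrications
# dominate and the same mechanism reproduces those instead: the objective
# has no bias toward facts per se, only toward the corpus majority.
```

Swap the proportions and the top prediction flips, which is the whole point: the "bias toward facts" is really a bias toward whatever corroborates across the training data.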