I know people get upset when "open source" is used where "open weight" would be more accurate (happily, "open weight" is the term specifically being applied here).
My question: is open weight even interesting? What does that really offer? Does it allow one to peer into the biases (or lack thereof) of a model? Does it allow one to train a competing model?
Would open source be something different and preferable, or are "weights the new source" in this LLM world we find ourselves in? I'm trying to educate myself.
I really don't get why there's any confusion. These models are LITERALLY compiled binary data. Weights are definitely not source. Source is "the source from which the thing is generated," i.e., the training data (or a script to assemble it) plus all the scripts, procedures, etc. required to produce the binary blob.
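To make that concrete, here is a minimal sketch of what an open-weight release actually contains: named tensors of numbers and nothing else. It assumes the safetensors library; the filename "model.safetensors" and the tensor name shown in the sample output are hypothetical.

```python
# Minimal sketch: inspect an "open weight" checkpoint, assuming the
# safetensors library. "model.safetensors" is a hypothetical filename.
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        print(name, tuple(t.shape), t.dtype)

# Prints something like (tensor name is illustrative):
#   model.layers.0.self_attn.q_proj.weight (4096, 4096) torch.bfloat16
# Every entry is just an array of numbers. You can inspect, run, and
# fine-tune them, but nothing in the file records what training data or
# procedure produced them; that is what "source" would mean here.
```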