That's not how it works. The raw Llama and Llama 2 models are not "censored". Their fine-tunes often are, either explicitly, like Facebook's own chat fine-tune of Llama 2, or inadvertently, because they were trained on data derived from ChatGPT, and ChatGPT is "censored".
When models are "uncensored", people are just tweaking the data used for fine-tuning and training the raw models on it again.
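Concretely, that "tweaking" is usually just dataset filtering: drop the refusal-style responses before re-running the fine-tune on the base model. A minimal sketch (the marker strings and dataset records here are hypothetical, not from any particular project):

```python
# Hypothetical sketch: filter canned-refusal examples out of an
# instruction dataset before fine-tuning the raw base model on it.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot help with",
    "i'm sorry, but",
]

def is_refusal(example: dict) -> bool:
    """True if the assistant response looks like a canned refusal."""
    response = example["response"].lower()
    return any(marker in response for marker in REFUSAL_MARKERS)

# Toy dataset of prompt/response pairs (illustrative only).
dataset = [
    {"prompt": "How do I pick a lock?",
     "response": "As an AI language model, I cannot help with that."},
    {"prompt": "Explain the TCP handshake.",
     "response": "A TCP connection starts with SYN, SYN-ACK, ACK."},
]

# Keep only non-refusal examples, then fine-tune on `filtered`.
filtered = [ex for ex in dataset if not is_refusal(ex)]
print(len(filtered))  # the refusal example is dropped, leaving 1
```

Real projects like the "uncensored" Wizard-Vicuna or Dolphin fine-tunes do essentially this at scale, then re-train, so the behavior lives in the data, not in the base model weights.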
> because they trained with data derived from chatGPT
Can you expand on this (genuinely curious)? Did Facebook use ChatGPT during the fine-tuning process for Llama, or are you referring to independent developers doing their own fine-tuning of the models?