Not an expert, but my high-level understanding is this: if a model is a set of input layers, some middle layers, and a set of output layers, fine-tuning concentrates on only the output layers.
Useful for taking a generic model with a base level of knowledge and tuning it so the output is more useful for an application-specific use case.
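To make that concrete, here's a toy numpy sketch of the "only tune the output layers" idea (a made-up two-layer network, not anything from the article): the base weights are treated as frozen, and only the output ("head") weights get a gradient update.

```python
import numpy as np

def finetune_head_only(W_hidden, W_out, x, y, lr=0.1):
    """One MSE gradient step that updates ONLY the output ("head") weights.

    Conceptual sketch of head-only fine-tuning: W_hidden plays the role of
    the pretrained base and is left frozen; only W_out changes.
    """
    h = np.tanh(x @ W_hidden)      # frozen base layer
    pred = h @ W_out               # trainable head
    err = pred - y
    grad_out = h.T @ err / len(x)  # gradient of 0.5*MSE w.r.t. W_out only
    return W_hidden, W_out - lr * grad_out

rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 8))   # "pretrained" base weights (toy values)
W_o = rng.normal(size=(8, 1))   # head weights to be fine-tuned
x = rng.normal(size=(16, 4))    # toy fine-tuning batch
y = rng.normal(size=(16, 1))

W_h2, W_o2 = finetune_head_only(W_h, W_o, x, y)
```

In a real framework this is usually done by freezing parameters (e.g. marking base-layer weights as not trainable) rather than hand-writing the gradient, but the effect is the same: only the head moves.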
I think that's more in line with transfer learning, a variant of fine-tuning. If I'm reading this article correctly, they're fine-tuning the LMs end-to-end.
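For contrast, an end-to-end step on the same kind of toy network (again just a sketch, not the article's setup) backpropagates through every layer, so the base weights change too:

```python
import numpy as np

def finetune_end_to_end(W_hidden, W_out, x, y, lr=0.1):
    """One MSE gradient step through the WHOLE network (end-to-end)."""
    h = np.tanh(x @ W_hidden)
    pred = h @ W_out
    err = pred - y
    n = len(x)
    grad_out = h.T @ err / n               # gradient w.r.t. head weights
    grad_h = (err @ W_out.T) * (1 - h**2)  # backprop through tanh
    grad_hidden = x.T @ grad_h / n         # gradient w.r.t. base weights
    # Every layer is updated, not just the head.
    return W_hidden - lr * grad_hidden, W_out - lr * grad_out

rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 8))
W_o = rng.normal(size=(8, 1))
x = rng.normal(size=(16, 4))
y = rng.normal(size=(16, 1))

W_h2, W_o2 = finetune_end_to_end(W_h, W_o, x, y)
```

The practical trade-off: end-to-end fine-tuning can adapt the base representations themselves, at the cost of more compute and a higher risk of overwriting the pretrained knowledge.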