Training large language models is characterised by diminishing returns: the first billion training inputs reduce the loss more than the second billion, the second billion more than the third, and so on. The same holds for increases in model size: the improvement is sublinear.
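This pattern can be sketched with a toy power-law loss curve of the form L(D) = c · D^(-α), which is one commonly assumed shape for such curves; the constants below are illustrative, not fitted to any real model.

```python
def loss(tokens_billions: float, c: float = 10.0, alpha: float = 0.1) -> float:
    """Illustrative power-law loss as a function of training data seen.

    The functional form and the constants c and alpha are assumptions
    chosen only to demonstrate diminishing returns.
    """
    return c * tokens_billions ** (-alpha)

# Loss reduction contributed by each successive billion of training inputs.
drops = [loss(n) - loss(n + 1) for n in range(1, 4)]

# Each successive billion of inputs reduces the loss by less than the previous one.
assert drops[0] > drops[1] > drops[2]
```

Under this curve, doubling the data (or, with an analogous curve in parameter count, the model size) never doubles the improvement, which is the sublinearity described above.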