Obviously someone is patching ChatGPT to solve every specific problem that gets popular mention as one where ChatGPT fails. So as soon as this paper hit HN, I'm sure someone "fixed" ChatGPT on these problems.
Of course, if you want ChatGPT to be a universal intelligence, this kind of one-by-one approach will get you nowhere.
I do think the underlying point is a good one, however. It wouldn't be surprising if AI researchers read HN and other tech-related social media. I also believe OpenAI is storing prompts and responses. They should be able to make embeddings of all prompts and cluster them. When they see popular prompts that are failing, they could easily add the problem, solution, and reasoning to the training data. We also know they are constantly fine-tuning and releasing new versions of the models.
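To make the "cluster the failing prompts" idea concrete, here is a minimal sketch. Everything in it is assumed for illustration: the logged (prompt, thumbs-down) pairs are made up, TF-IDF stands in for whatever embedding model they actually use, and plain k-means stands in for whatever clustering they actually run. It just shows how large, high-failure clusters could be surfaced as candidates for new fine-tuning examples.

```python
# Sketch: cluster logged prompts and flag clusters with high failure rates.
# Stand-ins: TF-IDF instead of a learned embedding model, KMeans for clustering.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical feedback log: prompt text plus whether the user flagged the answer as wrong.
logged = [
    ("how many r's are in strawberry", True),
    ("count the letter r in strawberry", True),
    ("how many r's in raspberry", True),
    ("write a haiku about autumn", False),
    ("write a poem about the sea", False),
    ("summarize this article about inflation", False),
]

prompts = [p for p, _ in logged]
failed = np.array([f for _, f in logged])

# Embed the prompts (TF-IDF here; a real system would use a proper embedding model).
X = TfidfVectorizer().fit_transform(prompts)

# Cluster the embeddings so near-duplicate prompts land together.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Rank clusters by size; big clusters with high failure rates are candidates
# for adding curated (problem, solution, reasoning) examples to the training data.
for cluster, size in Counter(labels).most_common():
    rate = failed[labels == cluster].mean()
    if rate > 0.5:
        examples = [p for p, l in zip(prompts, labels) if l == cluster][:3]
        print(f"cluster {cluster}: {size} prompts, {rate:.0%} failure -> {examples}")
```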