The idea that they want to train a new custom model for open release instead of just... giving us GPT-3 already suggests a terrible start. I'm calling it now: this is a strategic counterplay against Google's Gemma models so @sama can sell Tim Cook a "frontier" local model that doesn't compete with anything coherent. A fig leaf for their "Open" identity and a paper tiger for the Apple Intelligence panoply.
OpenAI doesn't believe in Open Source; they merely want its prestige without committing to it on principle.
If you're referring to the GPT-3 from 2020, modern open source models five years later are a) better at benchmarks b) much smaller yet still better at said benchmarks c) much, much cheaper/faster due to architectural improvements.
The real hard thing for OpenAI to do is to release an open-weights model that's better/more differentiated than Gemma 3 (at the small scale) or DeepSeek R1 (at the large scale).
But if OpenAI were Open, they'd open-source those old obsolete models. You're right that no one really wants them, and they have little to no commercial value at this point, but that's all the more reason to just put them out there and actually live up to their name.
Does it actually say anything about training a new model? I just assumed they were asking for all the little bits that companies forget to do when releasing an open model.
If you're interested in the field, why wouldn't you answer the questionnaire? It costs you basically nothing, and the upside is getting something that's potentially at least a tiny bit more useful to you than if you hadn't said anything.
See... that's kinda the idea behind an open model. I don't have to explain what I would use it for.