This is referring to the implementation. I argued that simply augmenting the API to mirror llama.cpp's grammar property was not enough, because it ignores how Ollama is actually consumed. None of the existing Ollama clients (GUIs) could support such an implementation, as that would require substantial effort from their maintainers (adding extra knobs to the UI, etc.). Considering that grammar sampling is not all that popular in the first place, the vast majority of clients would simply end up not supporting it.
This has so far turned out to be the case: they ended up implementing it at the API level, exactly like the other 20 PRs would have. And distastefully, too.
My fork instead introduces parsing of GBNF code blocks (Markdown ```gbnf fences) out of the system prompt, so that all existing clients are supported out of the box, with no effort required on the maintainers' part.
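To illustrate the idea (this is a minimal sketch in Go, not the fork's actual code; `extractGBNF` and its behavior are assumptions for illustration): the server scans the system prompt for ```gbnf fenced blocks, lifts their contents out as the grammar, and passes the remaining prompt text through unchanged, so the client never needs to know a grammar parameter exists.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches a Markdown-fenced ```gbnf block; (?s) lets .*? span newlines.
var gbnfBlock = regexp.MustCompile("(?s)```gbnf\\s*\\n(.*?)```")

// extractGBNF returns the concatenated contents of all ```gbnf blocks in the
// system prompt, plus the prompt with those blocks stripped out.
// (Hypothetical helper; the real fork's parsing may differ.)
func extractGBNF(systemPrompt string) (grammar string, cleaned string) {
	var parts []string
	for _, m := range gbnfBlock.FindAllStringSubmatch(systemPrompt, -1) {
		parts = append(parts, strings.TrimSpace(m[1]))
	}
	cleaned = strings.TrimSpace(gbnfBlock.ReplaceAllString(systemPrompt, ""))
	return strings.Join(parts, "\n"), cleaned
}

func main() {
	prompt := "Answer only yes or no.\n```gbnf\nroot ::= \"yes\" | \"no\"\n```"
	grammar, cleaned := extractGBNF(prompt)
	fmt.Println(grammar) // root ::= "yes" | "no"
	fmt.Println(cleaned) // Answer only yes or no.
}
```

Because the grammar rides along inside the system prompt, any client that can set a system prompt (i.e. all of them) gets grammar sampling for free, which is the whole point of this approach over an API-level knob.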