I disagree with the comparison. This isn't abstraction; it's syntax completion. As if you typed the first four bytes and GitHub (mostly correctly, it must be said!) completed the rest.
Unlike an additional abstraction layer, it doesn't increase readability.
The end goal of this, based on what I've seen from OpenAI examples and related beta projects, is more high-level language -> code.
"Write a standard sign up page" -> Generated HTML
"Write a unit test to test for X" -> Unit Test.
It's more than just syntax completion - I'd argue it's the beginning of a new layer of abstraction, similar to previous abstraction layers. The demo on their main page goes beyond syntax completion: from a description in plain English, it writes a method that checks for positive sentiment by calling a web service.
This is extremely powerful and it's still super early.
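For a sense of what that demo-style generation amounts to, here is a minimal sketch of the kind of method it might produce - the API URL, response shape, and function names here are my own assumptions for illustration, not the demo's actual output.

```python
import json
import urllib.request


def classify_sentiment(response: dict) -> bool:
    """Interpret a (hypothetical) sentiment API response as positive or not."""
    return response.get("sentiment") == "positive"


def is_positive(text: str, api_url: str = "https://example.com/sentiment") -> bool:
    """Send text to a hypothetical sentiment web service and report positivity.

    The endpoint and JSON schema are placeholders; a real generated method
    would target whatever service the prompt or context implies.
    """
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        api_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return classify_sentiment(json.load(resp))
```

The point isn't the ~15 lines themselves - it's that a plain-English comment is enough to get a plausible draft of them.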
I saw one example that converted English phrases into bash commands: "Search all text files for the word X" -> the correct grep command.
That is a big deal for giving massive leverage to people writing software and using tools. We'll be able to learn way faster with that kind of AI assisted feedback loop.
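A sketch of the translation that example describes - the exact flags are my guess at what "the correct grep command" would be, with X standing in for the user's search word:

```shell
# make a sample text file so the command has something to find
mkdir -p /tmp/copilot_demo
echo "this line mentions X" > /tmp/copilot_demo/notes.txt

# "Search all text files for the word X" -> a recursive, word-matching
# grep restricted to *.txt files (plausible output, not a confirmed one)
grep -rw --include='*.txt' 'X' /tmp/copilot_demo
```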
As with compilers, the end result can eventually be better than what humans produce, because the AI can optimize things humans can't easily by training for performance - the optimal layout is often weird.