I believe this will happen in many human domains, but it doesn't really matter. Nobody is going to stop writing poetry because of this, and I doubt there's much of an audience for AI-generated poetry.
There are forms of poetry/art/etc. that we've never thought of, that have never been conceived before. An LLM, being what it is, won't conceive of these. Humans will continue to generate language whose pattern, structure, and meaning have never been produced by an LLM.
A thought experiment to illustrate this: if you trained an LLM on every utterance of human language before the 5th century BC, would it produce any idea we would recognize as modern?
I think that's the wrong perspective on it. People want to compare how an AI does at one thing to how the best people in the world do at that thing.
What you really want to do is compare the AI to the _average person_ at that thing, and I would guess that generative AI outclasses the average human at almost every task it's even moderately competent at.
People like to point out how it can't pass the bar exam or answer simple math questions or whatever, and how that _proves_ it's not intelligent or can't reason, when _most people_ would also fail at the same tasks.
Almost all the gen AI models already have superhuman competency if you judge them across _everything they can do_.
We're deluding ourselves by thinking it's happening to poetry! This study is ignorant and dishonest; it should never have been published in the first place: https://cs.nyu.edu/~davise/papers/GPT-Poetry.pdf
AI research is worse than all the social sciences combined when it comes to provocative titles and abstracts that aren't supported by the actual data.