I think that is the key problem: a traditional MVP is a mostly known entity. It may be missing some features and have some bugs, but it is an MVP not because it was necessarily rushed out the door (I mean... it was, but in a different way) but because it has some rough edges and is likely missing major features.
Whereas what we seem to be getting from a lot of these companies shoving AI into something and calling it a product is an MVP that is an MVP due to its unknown and untested nature.
The term MVP was cover for shoving poor-quality software out on the market long before AI became involved. That was unfortunate, but inevitable once the term was popularized. AI is incredibly easy to tack on now, so people are doing that too.
That is true, but I think rushing to add AI features makes this a completely different situation.
We got a lot of MVP crap before, don't get me wrong. But at least it was understood crap. Sure, it might have bugs, and that was to be expected, but there was a limit to how wrong it could go, since at the end of the day it was still bounded by the code within the application and on the server (if there was one).
Meanwhile, when an over-reliance on an LLM goes wrong, the result can be catastrophic, depending on how it fails.
As we have seen time and time again just in the last couple of months, when LLMs are shoved into something, we get a serious lack of testing under the guise of a "beta".