This seems likely to create some inexplicable performance elbows: you have 1000 strings, but one code path replaces one of them with a number, and now the whole array needs to be copied. Tracking that down won't be fun.
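Roughly the failure mode I mean, as a hedged sketch (hypothetical container, invented names, not the library's actual code): a compact all-strings representation that silently falls back to boxing every element the moment a single value isn't a string.

```typescript
// Hypothetical illustration only: an array that keeps homogeneous strings in a
// compact form and re-boxes everything (copying all elements) on the first
// non-string write. All names here are made up for the example.

type Boxed = { kind: "string"; s: string } | { kind: "number"; n: number };

class CompactArray {
  // Fast path: every element is a string, stored directly.
  private strings: string[] | null;
  // Slow path: mixed types, every element boxed.
  private boxed: Boxed[] | null = null;

  constructor(values: string[]) {
    this.strings = values.slice();
  }

  set(index: number, value: string | number): void {
    if (this.strings !== null && typeof value === "string") {
      this.strings[index] = value; // stays on the fast path
      return;
    }
    if (this.boxed === null) {
      // One non-string write forces a full copy of all elements.
      this.boxed = this.strings!.map((s) => ({ kind: "string", s } as Boxed));
      this.strings = null;
    }
    this.boxed[index] =
      typeof value === "string"
        ? { kind: "string", s: value }
        : { kind: "number", n: value };
  }
}

// One innocuous-looking assignment triggers the O(n) copy:
const a = new CompactArray(Array.from({ length: 1000 }, (_, i) => `item ${i}`));
a.set(42, 7); // whole array re-boxed here — the "performance elbow"
```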
Is it your assessment or the LLM's that it's a good library? There have been many times I've looked at the API for a library, said "this is bonkers," and bailed. The weird contortions needed to use something should be a signal.
It's mine. I've been shooting down LLM library picks semi-regularly. That's kind of what motivated me to comment: it is not at all my experience that LLMs steer me away from libraries; if anything, they keep me on my toes by suggesting libraries I might not want to use.
Decode support only, and only from the A17 Pro and M3 onwards, I believe? It's going to be a few years before that's commonly available (he says from a work M1 Pro).
Not supporting pipelining is a somewhat effective, low-effort spam filter. Spammers don't have time to wait for response codes; they just blast the message and move on to the next sucker.
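A minimal sketch of how that check could look (hypothetical Node.js listener, arbitrary port and delay, not any real MTA's implementation): delay the banner and treat any client that starts talking before it arrives as a likely spambot.

```typescript
// Sketch only, not a real mail server: a well-behaved client waits for the
// 220 greeting before sending anything, so traffic that arrives before the
// banner means the sender is blasting commands without reading responses.

import * as net from "node:net";

const BANNER_DELAY_MS = 2000; // arbitrary delay before the greeting

const server = net.createServer((socket) => {
  let bannerSent = false;

  socket.once("data", (chunk) => {
    if (!bannerSent) {
      // Client spoke before our greeting — reject it.
      console.log(`early talker from ${socket.remoteAddress}: ${chunk.toString().trim()}`);
      socket.end("554 5.7.1 You talk too fast\r\n");
      return;
    }
    // A patient client gets a normal (if minimal) reply.
    socket.write("250 ok\r\n");
  });

  setTimeout(() => {
    if (!socket.destroyed) {
      bannerSent = true;
      socket.write("220 mail.example.com ESMTP\r\n");
    }
  }, BANNER_DELAY_MS);
});

server.listen(2525, () => console.log("listening on 2525"));
```

Same idea as Postfix's postscreen "pre-greeting" test: legitimate MTAs lose nothing by waiting a couple of seconds, while bulk senders that fire-and-forget get filtered cheaply.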