It's still better in many cases than modern ML (especially if you count explainability and efficiency as part of "better" alongside predictive power), so I wouldn't object much if a company called it "AI". In fact, if I learned that the "AI" behind some product was just linear regression, I'd trust them more.
I personally don’t see a problem with this. Where do you draw the line at model simplicity? Are decision trees too simple to be AI? What about random forest? Are deep neural nets the only model sophisticated enough to be “AI”? It’s not the model, it’s how you use it.