That’s what I’ve done for quite a few [non-LLM] applications. The remaining problem is that HTML is verbose compared to other formats, and since pricing is per token, that verbosity costs more. So, maybe stripping followed by substituting HTML tags with a compressed notation.
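Something in that spirit, as a rough sketch (assuming BeautifulSoup; the single-character tag mapping is made up purely for illustration):

```python
from bs4 import BeautifulSoup

# Hypothetical mapping from verbose tag names to short markers.
TAG_MAP = {"div": "D", "p": "P", "span": "S", "li": "-", "h1": "#", "h2": "##"}

def compress_html(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Drop non-content elements outright.
    for tag in soup(["script", "style", "head"]):
        tag.decompose()
    out = []
    def walk(node, depth=0):
        for child in node.children:
            if child.name is None:                      # text node
                text = str(child).strip()
                if text:
                    out.append("  " * depth + text)
            else:                                       # element node
                out.append("  " * depth + TAG_MAP.get(child.name, child.name))
                walk(child, depth + 1)
    walk(soup)
    return "\n".join(out)
```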
I've tried this and found it doesn't make much difference. The idea was to somehow preserve the document structure while reducing the token count, so you do things like strip all styles, etc. until you have something like a structure of divs, then reduce that. But I found no performance gain in terms of output. It seems whatever structure of the document is left over after doing the reduction has little semantic meaning that can't be conveyed by spaces or newlines. Even when using something like html2markdown, it doesn't perform much better. So in a sense the LLM is "too good", and all you really need to worry about is reducing the token count.
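For what it’s worth, this is cheap to measure: tokenize the same page after each reduction and see whether output quality tracks the savings. A rough sketch of the comparison (assuming tiktoken and BeautifulSoup, with html2text standing in for whatever HTML-to-markdown converter you use):

```python
import tiktoken
import html2text
from bs4 import BeautifulSoup

enc = tiktoken.get_encoding("cl100k_base")

def n_tokens(text: str) -> int:
    return len(enc.encode(text))

def compare(html: str) -> dict:
    # Strip non-content elements, then count tokens for each representation.
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "head"]):
        tag.decompose()
    return {
        "raw_html": n_tokens(html),
        "stripped_html": n_tokens(str(soup)),
        "plain_text": n_tokens(soup.get_text(" ", strip=True)),
        "markdown": n_tokens(html2text.html2text(html)),
    }
```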
I wonder if using nested markdown bullet points would help. You would preserve the information hierarchy, and LLMs are phenomenal with (and often output) markdown.
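Roughly this, as a sketch (BeautifulSoup again, with indentation depth standing in for the hierarchy; the depth-to-indent rule is just one possible choice):

```python
from bs4 import BeautifulSoup

def html_to_bullets(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "head"]):
        tag.decompose()
    lines = []
    def walk(node, depth=0):
        for child in node.children:
            if child.name is None:                      # text node
                text = str(child).strip()
                if text:
                    lines.append("  " * depth + "- " + text)
            else:                                       # element node
                walk(child, depth + 1)
    walk(soup)
    return "\n".join(lines)
```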
Yeah it’s slightly more token heavy, although not as much as it seems at first glance. Most opening tags are 2-3 tokens, and most closing tags are 3-4. Since tags are generally a pretty small fraction of the text, it typically doesn’t make a huge difference IME, but it obviously depends on the particular content.
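Easy enough to check for whichever tokenizer you’re targeting; the exact counts vary a bit between tokenizers. For example, with tiktoken and cl100k_base:

```python
import tiktoken

# Print the token count for a handful of common opening/closing tags.
enc = tiktoken.get_encoding("cl100k_base")
for tag in ["<div>", "</div>", "<p>", "</p>", '<span class="nav">', "</span>"]:
    print(f"{tag!r}: {len(enc.encode(tag))} tokens")
```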