I think Selenium's killer use case (aside from legacy/inertia) is cross-browser and cross-language support. In exchange, it comes with a ton of its own baggage: it's an additional layer between you and your task, with its own Selenium-specific bugs, behavioral limitations, and edge cases.
If you don't need cross-browser support and Chrome is all you need, then something like a simple Chrome extension and/or the Chrome DevTools Protocol cuts out a lot of middle-man baggage; at least you'll be wrangling the browser's behavior directly, without the extra idiosyncrasies of intermediate layers.
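To make the CDP route concrete, here's a rough sketch of driving Chrome over the DevTools Protocol with no Selenium in between. The port, packages (`requests`, `websocket-client`), and target URL are my own assumptions for illustration; it presumes Chrome was launched with `--remote-debugging-port=9222`.

    # Minimal sketch: talk to Chrome's DevTools Protocol directly.
    # Assumes: chrome --remote-debugging-port=9222 is already running,
    # and `pip install requests websocket-client` has been done.
    import json

    import requests
    from websocket import create_connection  # websocket-client package

    # 1. Ask the debugging endpoint for open targets and pick a page tab.
    targets = requests.get("http://localhost:9222/json").json()
    page = next(t for t in targets if t["type"] == "page")

    ws = create_connection(page["webSocketDebuggerUrl"])
    msg_id = 0

    def send(method, **params):
        """Send one CDP command and wait for the response with the matching id."""
        global msg_id
        msg_id += 1
        ws.send(json.dumps({"id": msg_id, "method": method, "params": params}))
        while True:
            reply = json.loads(ws.recv())
            # CDP interleaves events with responses; match on our command id.
            if reply.get("id") == msg_id:
                return reply

    # 2. Enable Page events, navigate, and wait for the load event.
    send("Page.enable")
    send("Page.navigate", url="https://example.com")
    while True:
        event = json.loads(ws.recv())
        if event.get("method") == "Page.loadEventFired":
            break

    # 3. Pull the rendered HTML straight out of the live page.
    html = send("Runtime.evaluate",
                expression="document.documentElement.outerHTML",
                returnByValue=True)["result"]["result"]["value"]
    print(html[:200])

    ws.close()

The same handful of raw JSON commands is all that libraries like Puppeteer wrap for you, so you can see exactly what the browser is being asked to do.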
Sorry, I dropped the context when I decided to make this a top-level comment. Does scraping in the browser circumvent scraping protections that non-headless Selenium would get caught by?
My next article will be on bypassing Cloudflare's bot protection. You can then compare that with how Selenium handles this problem (if at all).