This has been happening already. The market is trying really hard to price out web scraping through scraper-detection technologies, and it's kinda working: scraping is becoming non-existent in user-space apps. It's also extremely discriminatory. Try running a single scrape from a developing country's IP on Linux; you'll be blocked at the TLS handshake lol
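To make the TLS point concrete: a stock Python HTTP client presents a ClientHello that anti-bot vendors fingerprint (JA3 and friends) before a single HTTP byte is exchanged, so the block happens below the application layer. A minimal sketch of working around that with the third-party `curl_cffi` library; the URL is a placeholder and the available impersonation target names vary by installed version:

```python
# pip install curl_cffi  (third-party library, assumed installed)
from curl_cffi import requests

url = "https://example.com"  # placeholder target

# Plain requests/urllib sends Python's default TLS ClientHello, which
# anti-bot systems classify instantly. curl_cffi replays a real
# browser's TLS and HTTP/2 fingerprint instead.
resp = requests.get(url, impersonate="chrome", timeout=10)
print(resp.status_code)
```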
> The market is trying really hard to price out web scraping... scraping is becoming non-existent in user-space apps
Uhh... Those two things are pretty much unrelated to each other. Scraping is becoming nonexistent because the era of static web pages has ended. No need to "scrape" when you have a nice, performant JSON REST API provided for you.
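For what it's worth, that usually just flips the problem from parsing HTML to calling the same JSON API the SPA itself uses. A minimal sketch, assuming a hypothetical paginated endpoint spotted in the browser's network tab (the URL, query params, and field names here are all made up):

```python
import requests

# Hypothetical endpoint: many SPA-era sites load their data from a JSON
# API you can see in devtools and call directly, no HTML parsing needed.
resp = requests.get(
    "https://example.com/api/v1/products",  # placeholder URL
    params={"page": 1},
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json().get("items", []):  # assumed response shape
    print(item.get("name"), item.get("price"))
```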
SSG vs SSR really has nothing to do with whether an API exists to provide the data you would otherwise need to scrape.
When was the last time you saw a site with a JSON API providing metadata, like the JSON-LD for a product on an e-commerce site? Or an API just for the Open Graph data? And how would you even discover these APIs for sites you don't own?
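In practice the only way to get at that metadata is to pull the HTML and dig it out yourself, which is exactly the scraping the parent says is obsolete. A sketch with `requests` + BeautifulSoup (placeholder URL; both packages assumed installed):

```python
import json
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

url = "https://example.com/product/123"  # placeholder
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# JSON-LD: structured product data embedded for search engines,
# not exposed through any discoverable API.
for tag in soup.find_all("script", type="application/ld+json"):
    try:
        print(json.loads(tag.string or ""))
    except json.JSONDecodeError:
        pass

# Open Graph: same story, only available as <meta> tags in the HTML.
og = {m["property"]: m.get("content", "")
      for m in soup.find_all("meta", property=True)
      if m["property"].startswith("og:")}
print(og)
```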
It's also worth noting that very, very few JSON APIs today are actually REST in the original sense: they rarely include the hypermedia links (HATEOAS) that would let a client discover related resources and the context it needs. And in general, JSON is much less useful than XML when you're consuming APIs you don't own, since a JSON payload can't easily describe the shape and datatypes of its own content the way an XML document with a schema can.
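A toy illustration of the typing point: from the JSON payload alone you can't tell whether a field is a decimal, a date, or just a string, while XML can annotate types inline against the XML Schema namespace (the product fields here are invented for the example):

```python
import json
import xml.etree.ElementTree as ET

# JSON gives you values but no self-description: is "price" a decimal,
# a currency amount, or a string? The payload alone can't say.
payload = json.loads('{"price": "19.99", "released": "2024-01-05"}')

# XML can carry explicit type info inline (xsi:type), so a consumer
# that has never seen this API can still interpret and validate it.
XSI = "http://www.w3.org/2001/XMLSchema-instance"
doc = ET.fromstring(
    f'<product xmlns:xsi="{XSI}" '
    'xmlns:xs="http://www.w3.org/2001/XMLSchema">'
    '<price xsi:type="xs:decimal">19.99</price>'
    '<released xsi:type="xs:date">2024-01-05</released>'
    '</product>'
)
for child in doc:
    print(child.tag, child.get(f"{{{XSI}}}type"), child.text)
```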
Having your cake and eating it too is a natural goal of every business, and honestly it was just a matter of time until websites figured out they could keep the benefits of public data while avoiding the costs. Web scraping and botting are basically a solved problem, too: put a login gate in front of the data, which gives you standing to litigate against scrapers and bots under your terms of service. Done. However, nobody wants to lose the benefits of being public, so here we are.
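And the login gate really is a one-afternoon change. A minimal sketch in Flask; the framework choice, route, and field names are mine for illustration, not anything from the comment above:

```python
# pip install flask  (assumed installed)
from functools import wraps
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

def login_required(view):
    @wraps(view)
    def wrapped(*args, **kwargs):
        if "user_id" not in session:
            abort(401)  # anonymous crawlers stop here
        return view(*args, **kwargs)
    return wrapped

@app.route("/api/products")
@login_required
def products():
    # Behind the gate, access is governed by the ToS the user accepted
    # at signup, which is what gives the litigation angle its teeth.
    return jsonify([{"name": "widget", "price": 19.99}])
```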