Eh, he's not really doing the fetch in a loop - he's generating an array of promises, each of which kicks off its fetch right away (during the call to map).
If the number of chapters is less than the browser's window of outstanding requests, the browser will make all the requests basically in parallel. If the number of chapters is very large, the browser will queue them automatically - Chromium used to allow about 6 connections per host, with a max of 10 total; not sure if those are the current limits.
He's just processing them in a loop as they resolve. The real issue with his code is that the loop isn't the right place to handle a rejection; that belongs at the top of the file, in the map call.
The loop doesn't need to care about a failed fetch or JSON parse; those should be handled in the map, where you can do something meaningful about them. The loop doesn't even have the chapterUrl of the failed call.
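A rough sketch of what I mean - chapterUrls and renderChapter here are stand-ins, not his actual identifiers:

```javascript
// Each fetch starts immediately during the map; a rejection is caught
// right here, where chapterUrl is still in scope, and turned into a
// placeholder object so the loop below never has to care.
const chapterPromises = chapterUrls.map((chapterUrl) =>
  fetch(chapterUrl)
    .then((res) => {
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    })
    .catch((err) => ({ error: err, chapterUrl }))
);

// Process them in a loop as they resolve; nothing here can reject.
// (Assumes an async function or a module with top-level await.)
for await (const chapter of chapterPromises) {
  if (chapter.error) {
    console.warn(`Failed to load ${chapter.chapterUrl}:`, chapter.error);
    continue; // or render a per-chapter retry link
  }
  renderChapter(chapter);
}
```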
Each user is making 6-10 requests, in parallel, to the server, instead of 1. My point is that I'd design this to not DDoS the servers if I got a lot of concurrent users.
Generally speaking, you're almost always better off with more light requests than fewer heavy requests.
Assuming this is static content (chapters), there's absolutely zilch you're gaining by fetching the whole block at once instead of chapter by chapter, since you can shove it all behind a CDN anyway.
You're losing a lot of flexibility and responsiveness on slow connections, though.
Not to mention, it may make sense to do something like fetching, say, 5 chapters at a time and then fetching the next chapter when the user scrolls to the end of the current one (pagination - a sane approach used by anyone who's had to deal with data at scale).
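A rough sketch of that shape - BATCH_SIZE, loadChapter, and the #sentinel element at the bottom of the rendered content are all invented for illustration:

```javascript
// Load chapters in small batches instead of all at once.
const BATCH_SIZE = 5;
let nextChapter = 0;

async function loadNextBatch() {
  const batch = [];
  for (let i = 0; i < BATCH_SIZE && nextChapter < chapterUrls.length; i++) {
    batch.push(loadChapter(chapterUrls[nextChapter++])); // hypothetical helper
  }
  await Promise.all(batch);
}

// First batch up front, then another whenever the user scrolls a
// sentinel element at the end of the rendered chapters into view.
await loadNextBatch();
new IntersectionObserver((entries) => {
  if (entries.some((e) => e.isIntersecting)) loadNextBatch();
}).observe(document.querySelector('#sentinel'));
```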
In basically every case though, as long as each chapter is at least a few hundred characters, the extra overhead from the requests is negligible, and if the chapters are static, they're easily cached.
You didn't read what I wrote closely enough. You just fetch the data you need for the list of chapters. In most cases, it'll just be something like chapterID, chapterTitle, and maybe chapterShortDescription. That's easy to keep in a db cache, since chapters don't change much.
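Concretely, something like this - the endpoints and field names are invented for the example:

```javascript
// One light request for the chapter list (id, title, short description)...
const chapterList = await fetch('/api/book/42/chapters').then((r) => r.json());
// -> [{ chapterID: 1, chapterTitle: '...', chapterShortDescription: '...' }, ...]
renderTableOfContents(chapterList); // hypothetical renderer

// ...then one request per chapter body, only when the user opens it.
async function openChapter(chapterID) {
  const chapter = await fetch(`/api/chapters/${chapterID}`).then((r) => r.json());
  renderChapter(chapter);
}
```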