The gotcha of unhandled promise rejections (jakearchibald.com)
128 points by AshleysBrain on Jan 13, 2023 | 89 comments



I think the ideal thing to do in a lot of cases is to adhere to the spirit of the warning and actually try to handle the rejections in some way.

In this case I'd want to log the error, or display a notice that the chapter couldn't be loaded. For this we need to catch the rejections and turn them into values we can iterate over with for await/of.

Promise.allSettled() does this for us, but forces us to wait until all promises are... settled. A streaming version would give back an async iterable so we could handle promises in order, as soon as they're ready. Or we can just convert to the same return type as allSettled() ourselves:

    async function showChapters(chapterURLs) {
      const chapterPromises = chapterURLs.map(async (url) => {
        try {
          const response = await fetch(url);
          return {status: "fulfilled", value: await response.json()};
        } catch (e) {
          return {status: "rejected", reason: e};
        }
      });
    
      for await (const chapterData of chapterPromises) {
        // Make sure this function renders or logs something for rejections
        appendChapter(chapterData);
      }
    }


I'm laughing - I just posted basically the exact same thing. This is the right way to solve this.

He's trying to shove error handling into the wrong place (the loop should not be responsible for handling failed requests it knows nothing about).


I don't think that is an absolute. Let's say it is paragraphs, not chapters: the page needs to know it didn't get all the paragraphs to build the story, so it can know the page has failed to complete. That's the perfect time to throw an exception and handle it somewhere that will know how to let the application continue to run sanely.


Sure - I don't think anything is really an absolute in programming, and depending on the use case there are all sorts of ways to structure this.

But in general, if you're making a contract where a promise might reject and using await, you need to be wrapping that code in a try/catch somewhere.

That's the dealio with async/await and promises: you're trading the callback-style .then().catch() format for try{ await ... }catch(){}.

If you don't want to use try/catch - you need to be making a contract where the failure is not a rejection but an error object. If you want to use rejections, you need to be catching them (in one form or another).


Agreed, though there is no need to complicate it further by returning objects with status and value fields.

The requirements here seem a bit rare (ignore failures individually, machine-gun the requests, render incrementally), but the ideal solution is actually glossed over in the post, if you consider that you'll probably want to log those request errors in production:

    async function showChapters(chapterURLs) {
      const requests = chapterURLs.map(url => {
        return fetch(url).then(r => r.json()).catch((err) => {
          MyMonitoring.report(err)
        })
      })
    
      for await (const data of requests) {
          appendChapter(data);
      }
    }
In this scenario (and the post also fails to mention this), appendChapter() needs to deal with empty data, as the .catch() handler resolves to undefined.

This could be easily abstracted into a 'fetchAll' function.
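
For instance, a sketch of that abstraction (fetchAll and the report parameter are made-up names):

    function fetchAll(urls, report = console.error) {
      // Each entry resolves to parsed JSON, or to undefined if the
      // request failed (the failure is reported, not rethrown).
      return urls.map(url =>
        fetch(url)
          .then(r => r.json())
          .catch(err => { report(err); })
      );
    }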

There is absolutely nothing "blunt" or wrong about it. The engine warned us about unhandled promises, we handle them, everybody is happy!


But now some of your requests resolve with undefined, rather than correctly rejecting (it's exceptional behaviour).

appendChapter(undefined) doesn't seem right to me. Feels like you'll have to do something like if (data === undefined) throw… which kinda shows how hacky this is.


Hi Jake! That is exactly what the code in the article does, isn't it?

     promise.catch(() => {})
It does follow up with "this doesn't change the promises other than marking them as 'handled'. They're still rejected promises" but that doesn't seem right - they become fulfilled promises with a value of undefined, and those values will definitely show up in the for loop later. Is there something I'm missing with the nested promises?

My suggestion was in the spirit of keeping this behaviour the same, except I think explicitly adding the catch handler to fetch() is the right way to do it vs an auxiliary function to 'silence' them.


> they become fulfilled promises with a value of undefined

No they don't, which is why I made that clarification in the post :D

    const promise1 = Promise.reject(Error("wow"));
    // promise1 is rejected, and unhandled

    const promise2 = promise1.catch(() => {});
    // promise2 will be fulfilled, and is unhandled
    // promise1 is still rejected, but now handled


I don't think this is right. One code smell here is that your `chapterPromises` is no longer a set of promises for chapters. You've had to turn exceptions into objects to work around the unhandled promise behaviour.

It also looks like it'll display chapter 3 if chapter 2 fails to load, which seems wrong. At that point you've failed to display the sequence of chapters.


> I don't think this is right. One code smell here is that your `chapterPromises` is no longer a set of promises for chapters. You've had to turn exceptions into objects to work around the unhandled promise behaviour.

I mean, or you do any sort of sensible thing, like automatically retry the failed request (with backoff and a cap) directly in the catch, where the error is local, understood, and you have the information to handle it.
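
For instance, a sketch of that kind of retry (function name and policy numbers are illustrative):

    async function fetchJSONWithRetry(url, attempts = 3, baseDelayMs = 500) {
      for (let i = 0; i < attempts; i++) {
        try {
          const response = await fetch(url);
          return await response.json();
        } catch (err) {
          if (i === attempts - 1) throw err; // cap reached: give up
          // Exponential backoff before the next attempt.
          await new Promise(r => setTimeout(r, baseDelayMs * 2 ** i));
        }
      }
    }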

But in either case - you're right that it's no longer `Promise<chapter>`, it's now `Promise<Option<chapter, err>>` But... that's not a problem. And it's a much better way to indicate to other sections of the code what has happened than changing code paths completely with an exception or unhandled rejection.

Using something like an `Option` (or `Either`, or `Result` - depending on what library/languages you're familiar with) is a pretty sound way to handle errors in a pipeline, and is routinely found across basically every functional language out there.
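
For example, a minimal Result-style shape in TypeScript (all names here are just for illustration; appendChapter is from the post):

    type Chapter = { title: string; body: string }; // stand-in shape
    type Result<T, E> =
      | { ok: true; value: T }
      | { ok: false; error: E };

    declare function appendChapter(c: Chapter): void;
    declare function showRetryButton(e: Error): void; // hypothetical UI hook

    // The compiler forces callers to check `ok` before touching `value`,
    // so the failure path is visible in the types, not hidden in a throw.
    function renderChapter(result: Result<Chapter, Error>) {
      if (!result.ok) return showRetryButton(result.error);
      appendChapter(result.value);
    }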


It depends so much on the app and behavior you want - to show partial data or not, to do things in sequence. What do chapters stand in for in this example?

Yeah, in this case I thought it might be nice to show a "failed to load" message for each failure. Maybe this is a chronological Mastodon timeline, or a photo gallery. Do you want to error the whole UI for one failed object? Maybe.

But trying to do something useful with all failures is a general rule that I think guides towards good thinking about UX, and fewer unhandled rejection warnings.


For book chapters this is probably fine, but if there's any chance you might have a lot of things to deal with at once, you probably want to manage a pool of promises.
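
A minimal pooling sketch (mapWithConcurrency is a made-up helper; a real one would also want per-task error handling):

    async function mapWithConcurrency(items, limit, fn) {
      const results = new Array(items.length);
      let next = 0;
      // Each worker pulls the next index until the list is exhausted,
      // so at most `limit` tasks are in flight at any moment.
      async function worker() {
        while (next < items.length) {
          const i = next++;
          results[i] = await fn(items[i]);
        }
      }
      const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
      await Promise.all(workers);
      return results;
    }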

You might be okay in normal usage, but if a backlog accumulates, you could find yourself consuming all available resources and self-destructing over and over.

I've tried it and do not recommend it.


This code does not recapitulate the desired behavior.

Desired: when the load from chapter 10 fails, even if chapter 2 is taking 30s to load, I want to immediately transition the UI into the failure state.

Your code: waits until chapters 1-9 have loaded to report the failure state on chapter 10.

(I posted what I believe is a correct answer "without the hack" to the OP.)


Does the article’s final code behave in the desired way? To my understanding, it simply catches rejections, but the for-await block still waits for every promise in order, up until the rejected one. Your `finally` block gets called too late, unless I'm missing something obvious.

To make this loop early-cancellable on any out-of-order rejection, I’d create a helper generator and a “semaphore” promise, which would block the yield until there’s a next resolved value or any rejection.

  const arr = []
  arr.length = promises.length
  const queue = [] // settled results not yet consumed by the loop
  let step, sem
  function rearm() {
    sem = new Promise(res => {step = res})
  }
  rearm()
  promises.forEach((p, i) => {
    // Buffer every settlement, so none is lost when several promises
    // settle before the loop re-awaits sem.
    p.then(v => { queue.push({i, v}); step() },
           e => { queue.push({i, e}); step() })
  })
  async function* values() {
    let next = 0
    while (next < arr.length) {
      if (arr[next]) {
        yield arr[next++].v
        continue
      }
      while (!queue.length) {
        await sem
        rearm()
      }
      const {i, v, e} = queue.shift()
      if (e) {
        yield Promise.reject(e)
        return
      }
      arr[i] = {v}
    }
  }
  try {
    for await (… of values()) {
      …
I barely ever use JS generators specifically, so this code is more of a concept (the queue is there to avoid a lost-wakeup race around rearm() when several promises settle in the same tick), but you get the idea.


Not sure what `Promise.race(Promise.all(promises), loop());` is supposed to be doing. Promise.race only takes one arg, no? Also not sure what partialUpdate is supposed to be doing. Is that assuming some React-like diff reconciliation?

Personally, I'd go with something like this:

    async function showChapters(chapterURLs) {
      return Promise.all(
        chapterURLs
          .map((url) => fetch(url).then(r => r.json()))
          .map((p, i) => p.then(data => appendChapter(data, i), e => displayError(e, i)))
      );
    }
With this, you can control exactly where and how list items are rendered, regardless of order of promise resolution, and things are rendered as soon as they are available, instead of getting stuck behind slow requests.


I'm not trying to say there's only one way to do this, especially if we are talking about user affordances for failure states. I think there could be a debate about showing what data is available vs not showing partial data.

My point is just that trying to do something with the failures often removes the warning and leads to better UX, or at least conscious decisions about it, and usually some clue in the code about what you intend to happen.


How can you create that behavior?


Like I said I added a solution in the Disqus thread to that blog post... the English summary is that you Promise.race() the Promise.all() with the for-await loop.
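
A sketch of that shape (appendChapter is from the post; note that Promise.race takes a single iterable, so the two go in an array):

    async function showChapters(chapterURLs) {
      const promises = chapterURLs.map(url => fetch(url).then(r => r.json()));

      async function loop() {
        // Render in order, each chapter as soon as it and its predecessors arrive.
        for await (const data of promises) appendChapter(data);
      }

      // Promise.all attaches a handler to every promise (so nothing is left
      // "unhandled") and rejects as soon as any fetch fails, even one the
      // loop hasn't reached yet.
      return Promise.race([Promise.all(promises), loop()]);
    }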


Does that work? Isn't it possible for the monitor() function to return the result of Promise.all() rather than loop(), since they are racing?

    return Promise.race(Promise.all(promises), loop());
edit: I guess they are the same in this specific case, but probably wouldn't be in real code, making this function very clever but maybe too rigid.


I think that it works because when loop() rejects, Promise.all() rejects too so it doesn't matter who wins.


I mean in the case where nothing rejects.

Say for example the loop was doing `result.push({success: true, value: item})` instead of just `result.push(item)`. Then the value returned from loop() is different from the one returned by Promise.all(promises), and your overall monitor() function might return either.


I also posted the same thing (and have had this discussion with Jake on Twitter :))


Nice. `Promise.iterateSettled` would be cool.
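
Something like this, perhaps (iterateSettled is the hypothetical name from the parent):

    async function* iterateSettled(promises) {
      // Attach handlers eagerly, so a later rejection is never "unhandled"
      // while the consumer is still parked on an earlier promise.
      const settled = promises.map(p =>
        p.then(
          value => ({ status: "fulfilled", value }),
          reason => ({ status: "rejected", reason })
        )
      );
      // Async generators await yielded promises, so consumers receive
      // the settled objects in input order, each as soon as it's ready.
      for (const s of settled) yield s;
    }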


Personally - he's trying to shove the logic to catch the error into the wrong place.

The loop isn't the right spot to be catching an error that resulted from failing to fetch the data, or to parse the json data that was returned. The right spot was right there at the top of the file...

    const chapterPromises = chapterURLs.map(async (url) => {
      const response = await fetch(url);
      return response.json();
    });
This is where the error is occurring, and this is also the only reasonable spot to handle it. A very simple change to

    const chapterPromises = chapterURLs.map(async (url) => {
      try {
        const response = await fetch(url);
        return response.json();
      } catch(err) {
        // Optionally - Log this somewhere...
        return `Failed to fetch chapter data from ${url}` // Or whatever object appendChapter() expects with this as the content.
      }
    });
Solves the entire issue.


That means you'll display chapter 3 even if chapter 2 fails, rather than explain that the story has failed to load. There's no point showing additional chapters if an earlier one fails.


Now you're pretty far out of a discussion about handling promise rejections, and into a discussion about product UX and feature design...

"There's no point showing additional chapters if an earlier one fails." <- This is not always true, and heavily dependent on what the use case is. For a book? sure. for a manual or textbook? Much less sure.

The approach works just fine either way, though - if you don't want to display additional chapters when one fails, you can do so easily by including a "success" field (or anything similar) in the object that appendChapter expects, and breaking when you hit a failure. (Although, again, you'd probably be better served by displaying a user-readable error with a button to retry, and then displaying the rest of the content so you don't have to fetch it all a second time.)

Alternatively, you can have appendChapter throw, or, if you really want, you can still leave the promise rejection - but leaving the rejection or throwing at this spot means you're making a contract that calls for `try/catch` if you want to avoid unhandled rejections.

That's the whole deal with async/await. You're trading the formatting of .then().catch() for try{}catch(){}.

If you don't like writing try/catch blocks, either structure the contract so that the promise doesn't reject (at least for expected error cases), or use the traditional .then().catch().


I was happy when Promise became available, but in retrospect I wish we had skipped ahead and gotten Observable (e.g. https://rxjs.dev/) instead, to enable more powerful functionality, composition, etc.

In TypeScript, dealing with rejection is also painful, since rejection reasons can't be guaranteed to be Error even when you always take care of that. And it can't help you guarantee that you're handling all types of errors thrown. For that purpose I'm thinking of using https://github.com/supermacro/neverthrow#readme or https://swan-io.github.io/boxed.


Both can coexist: observables are multi-push while promises are single-push. They each have their place.


One day I will write an article titled "I wish you had just built this synchronously, I can wait."

Speaking in the context of apps and webapps, so many bad UX moments are caused by async optimizations that are entirely pointless. Often I can't action the page until all the calls are done anyway, I would much rather see a finished page than watch the jittery pained birth of a dozen async calls coming to fruition and not knowing when it's safe to click something.


Gmail async loading attachments and moving everything down a row when they do has been killing me in the web app recently. Hate it. Have to consciously wait a few seconds before use every time now.


Nah - you really don't want to make it sync.

Personally - I don't want my entire UI to lock up on slow connections because the entire js context is paused waiting for a response that was fast for the dev on his dev machine sitting right next to the server, but is slow as fuck on my 3g connection pulling data at 1kb a second with fairly frequent packet loss.

I'd like to be able to click other links, navigate through menus, and interact with the site if I'm exploring and just happened to click on this page that's now loading a whole book.


I just discovered that this is possible to a limited degree using the old XMLHttpRequest class. Looks like it has some limitations though. https://www.npmjs.com/package/sync-fetch


This is why we need proper, rich stream abstractions in JS that integrate well with promises. Promises by themselves are not enough.

Note that a real world implementation of this would also limit the number of concurrent requests - another reason to have streams with rich concurrency options.


This is what observables are all about, yet `rxjs` and the like never gained any widespread traction, despite solving so many of these issues very well.

Not to mention there are many great upsides to using observables as well!

I remember, back when Promise was gaining acceptance toward a standard, there was a rich debate about adding observables and leveraging them as the async primitive, but developers complained loudly that they wanted something more primitive.

Then everyone complained about Promise being too verbose, so we got async await and async iterables.

And now that we have said primitive, people say well where are the promise helpers, promises alone aren't enough!

All of this could have been avoided with observables


Observables didn't properly integrate with promises at the right time - if they did they'd be accepted way more easily (e.g. `.first()` returning a Promise rather than an observable or `using` accepting promise return values and/or async functions and returning promises as a result).

I gave the Dart API originally as an example, as it has careful considerations for when to return Future<T> instead of another Stream<T> (https://api.dart.dev/stable/2.18.7/dart-async/Stream-class.h...) - examples include first(), last(), asyncMap etc.


Modern JavaScript has fine stream abstractions in WHATWG Streams and async iterables, and Dart has the same unhandled Future issue as JS.

Dart generally has fewer, but nicer, libraries for dealing with collections of Futures, but JS has a lot of npm libraries and is somewhat catching up with additions like Promise.{any,allSettled,race} and Array.fromAsync().


Async iterables are bare-bones, and WHATWG Streams are not much better.

I removed Dart from the comparison.


Yet another reason I try to avoid throwing/rejecting promises in TypeScript code, and just return `Thing | Error` everywhere. I'm sure there's something fancier I could get out of using a full Result type, but this gets me a compiler enforcing that I handle all errors somehow, and keeps `try catch` out of my business logic.


That's ok, and I get the purity of only using the response instead of an implicit exception, but it gets repetitive when nearly every function is potentially returning an error. You only have to handle exceptions at the outermost layer. I don't understand why this is seen as a "gotcha" in the article; it's the whole point of exceptions.


It's all tradeoffs. For me, not having to deal with passing state past a trailing try/catch clause and not worrying about errors potentially bubbling out of one layer into another are worth the extra "if (res instanceof Error) { return res }" lines


> not worrying about errors potentially bubbling out of one layer into another

I feel like that’s the entire point of errors and try/catch?


The trouble with throwing is that the compiler doesn't care if you catch. For any library that throws, I wrap that up in my adaptor layer with try catch and then return an Error upwards. That way in the rest of my domain I've got the compiler complaining if I'm not handling an error that is coming from below. I can be sure that when I'm handling an error it's not going to be something random from 2 layers down, because it was explicitly returned by the layer I'm calling.

As in most things coding, you can accomplish the same thing with either approach, but I believe by returning an `| Error` there's a better chance for the compiler to catch a bug before I run the code.


That’s true. It’d be nice to mark methods as throwing some error and requiring it to be caught at some point (I believe Java does this).

But I can imagine it might be a bit hard in Typescript due to the async nature.


It's like this in Java, but at that point I'd rather do the | Error return type. Same level of explicitness either way.


Using exceptions would be a lot easier, however, if TypeScript picked up the throws and warned about missing catch blocks, just as every other language with exceptions seems to get right…


If your function doesn't handle an exception, it throws it. If TS required you to say `throws` every time like in Java, that'd be repetitive like explicitly returning `| Exception`. Maybe it could detect if you're not catching in the outer event loop.

Idk though, I just use vanilla JS and don't find these safeguards necessary. Worst case, if I fail to handle something, my NodeJS express server will handle it by sending back a 500.


Yeah, but I would want to know which library function could possibly throw an error and not be caught off guard by that after deploying to prod :-/


It's almost accurate to assume that everything can throw an error in high-level code like this, especially in something complex enough to warrant a lib. The thing is, in a webserver, those are all unexpected errors and thus 500. There are only a few known ways the client can use it wrong, and I give 400 for those.


I do it the "go" way, with:

  type Result<K = any, T extends Error = Error> = [K, null] | [null, T];
And use it:

  const [response, error] = await tryFetch<Type>("");
  if (error) { handle(); }
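
Where tryFetch can be a small wrapper like this (a sketch, reusing the Result tuple above):

    async function tryFetch<K>(url: string): Promise<Result<K>> {
      try {
        const response = await fetch(url);
        return [(await response.json()) as K, null];
      } catch (err) {
        return [null, err instanceof Error ? err : new Error(String(err))];
      }
    }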


But this way you are losing the type checker yelling at you if you try and use response without having handled error. What I like about

    res = something(): Thing | Error
is I have to do

    if (res instanceof Error) {
      handle()
      return res
    }

    doSomethingWith(res)
or else the compiler will yell at me for trying to doSomethingWith(Error)


I see. Well, for what it's worth, in my tuple example you either have one or the other, i.e. the K result or an Error; so, if you don't check for error (or result), the compiler gives an error saying that "K is possibly `null`".

Example: https://tsplay.dev/w2p0Vm


that makes sense, and I like the simplicity of `if (err)`. I'm sure it's uncommon but you still lose the compiler check if the function can return null as a matter of course.


Huh, never thought to do this. Coming from Go, this would honestly be very ergonomic (not being snarky).


Well you know what they say, the better name for Go would've been Errlang.


It's honestly great. I also have done a fair amount of go (and before that a decent amount of objc) and it feels in the same vein as both of those but with some nice additional compiler helpers.


JS lets you throw any type as an error.


This is what we do; for me it came from ancient times when I was doing full-time C dev, and I've mostly kept it up (but improved) except when working on teams that want, or already have, exceptions all over in all layers.


It's similar to manual memory management: you have to remember to do the other side of the thing you're doing.

Structured concurrency is one approach to solving this problem. With structured concurrency, a promise would not go out of scope unhandled. Not sure how you would add APIs for it in JavaScript, though.

See Python's trio nurseries idea, which uses a Python context manager.

https://github.com/python-trio/trio
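
For what it's worth, here's a toy of what a nursery-like scope could look like for promises (taskScope is a made-up name; real structured concurrency would also propagate cancellation, and this simplification skips the case where body itself throws):

    async function taskScope(body) {
      const tasks = [];
      const spawn = task => { tasks.push(task); };
      const result = await body(spawn);
      // The scope doesn't exit until every spawned task settles, so no
      // rejection outlives it unhandled; failures resurface here instead.
      const settled = await Promise.allSettled(tasks);
      const failures = settled.filter(s => s.status === "rejected");
      if (failures.length) throw new AggregateError(failures.map(f => f.reason));
      return result;
    }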

I'm working on a syntax for state machines and it could be used as a DSL for promises. It looks similar to a bash pipeline but it matches predicates similar to prolog.

In theory you could wire up a tree (graph if you have joins) of structured concurrency with this DSL.

https://github.com/samsquire/ideas4#558-assign-location-mult...


Python 3.11 introduced asyncio.TaskGroup[0] to cover the same use-case as trio nurseries (scoped await on context manager exit). It's imperfect, sure, but it improves matters.

[0] https://docs.python.org/3/library/asyncio-task.html#asyncio....


> It's similar to manual memory management

Funny, I used this very analogy about concurrency the other day. I think we live in a manually managed concurrency era, because we haven’t entirely figured out the right mix of patterns to get to the point where it makes sense. In the future, I expect all non-low-level languages will simply agree on one or two models, just like RAII and GC solved the manual memory management problem. That, and we still have a lot of bugs from concurrency-related matters which we basically have no way to test.

Structured concurrency is most certainly part of the solution, but there are a few devils in the details that are a lot of work (or careful thought) to get right.


Thanks for your comment.

I would really enjoy talking to you more about this.

From what you wrote, I agree, we are indeed in an era where concurrency and its notation has yet to settle on an elegant approach.

With microservices and distributed systems (simultaneous independent execution, ordering, consistency), the problems of concurrency reveal themselves.

Do you have any ideas how you would want to define concurrency? What notation would you like to be capable of writing? What is your favourite representation of concurrency? (Such as bash pipelines)


I was strongly against JS engines treating unhandled rejections as errors because a rejected promise is only ever not handled yet. Basically, in order to prevent blow-ups from unhandled rejections, you have to synchronously decide how to handle rejections. I feel like this goes against the asynchronous nature of promises.


What would the alternative be? Handle them when the last reference to the promise disappears?

Seems that could hide more bugs when someone keeps a reference to a big set of promises for some other reason - keeping a reference to something shouldn't change that thing's behaviour.


The alternative is just to leave the promise as rejected, and expect the developer to .catch when they're ready.


That means you can have tons of promises that get silently rejected (and never handled).

I can see the appeal, but I can also see why they didn’t go that way.


I disagree that the fix is to catch and ignore the error. The fetch handling logic should be catching the errors and reporting them somewhere (ex. Sentry or the UI). In a production app where you report and log errors this "issue" doesn't manifest.


I usually catch rejections and turn them into a resolution: an object with an 'errno' property. That way your handler always gets called, rejections never propagate up to the top level, and you can move the error handling into your consumer very easily.

    const thing = await fn();
    if (thing?.errno) {
        //handle error
        return;
    }

    // handle result
Maybe I spent too much time writing C code. There are very few times when I actually want a rejection to take an entirely different path through my code, and this feels like the correct way to write it.
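
The producing side can be a tiny wrapper (withErrno and the errno value are made up):

    function withErrno(promise) {
      // A rejection becomes an ordinary resolved value carrying an errno,
      // so the consumer stays on a single code path.
      return promise.catch(error => ({ errno: 1, error }));
    }

    // const thing = await withErrno(fn());
    // if (thing?.errno) { /* handle error */ }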


The error isn't ignored. The parent function will still reject if it's unable to complete the operation (with the fetch error, if fetch was the cause of the error).


I didn't say it's globally ignored (I understand how promises work :)), I said what you did in the code sample adds a catch that ignores the error (it's an empty catch). In real code that wouldn't be empty though, it should have error reporting logic or something to handle that situation in the UI. Once you do that it's not a "hack" anymore, it's just reasonable error handling.


Do you object to Promise.all/race/any for turning potentially many rejections into one while marking all as handled?


The change in mine is moving await into the second for loop, so that it can be caught and handled.

    async function showChapters(chapterURLs) {
      const chapterPromises = chapterURLs.map(async (url) => {
          const response = await fetch(url);
          return response.json();
      });
    
      for (const i in chapterPromises) {
        try {
          appendChapter(await chapterPromises[i]);
        } catch (error) {
          handleChapterErrorGracefully(error, i);
        }
      }
    }

I used for..in so that the index can be passed to handleChapterErrorGracefully.


I don’t believe this will resolve the issue.

    - Promises are created: 1,2,3,4
    - The for loop awaits promise 1
    - Promise 3 is rejected
    - Promise 1 is resolved
    - The for loop awaits promise 2
    - …
Somewhere in this, promise 3’s rejection would be considered unhandled (I’m not sure exactly when).


Promise 3's rejection would be awaited and handled after promise 2.

Yes, it might finish over the network before promise 1, but it won't be realized by the program until its await occurs in the for loop.


The issue is that V8’s unhandled rejection logic doesn’t wait until the promise is awaited. If a promise is rejected, has no handler (.then/.catch/await), and persists in this state for some amount of time/event-loop turns (which the article covers), it will be treated as unhandled.

I tested this in Node 18. It treats it as unhandled. The only way to avoid the gotchas described in the article is to ensure all promises:

- cannot fail, and/or

- are immediately awaited (by anything; .catch is the easiest).
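
A quick repro of that timing (Node 18, default unhandled-rejections mode):

    const slow = new Promise(res => setTimeout(res, 100));
    const fast = Promise.reject(new Error("boom"));

    (async () => {
      await slow;                  // while we're parked here...
      try { await fast; } catch {} // ...this handler attaches too late
    })();
    // Node reports fast's rejection as unhandled (and exits by default),
    // because no handler existed when the microtask queue drained.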


Here is a way with a generator (the chapterGenerator). Only writing this because I've seen a lot of comments lamenting a lack of "streams":

    async function showChapters(chapterURLs) {
      const chapterGenerator = function*() {
          for(const url of chapterURLs) {
              yield fetch(url).then(response => response.json());
          }
      };
    
      for (const chapter of chapterGenerator()) {
        try {
          appendChapter(await chapter);
        } catch (error) {
          handleChapterErrorGracefully(error);
        }
      }
    }


This needs a trigger warning. I flashed back to upgrading a large project's build from node 12 to 16, which introduced a bunch of these in the scope of jest tests, and there was no indication of where the offending code was, which made the correct advice of "handle the rejections in some way" a little difficult.


This example is interesting because it is doing a fetch in a loop, which could fail for any number of reasons and leave you with partial data that you then need to build a UX to deal with ("Some data didn't load correctly, click here to try again"). The example is a fetch of each chapter in a book, and this opens up a larger question of how to build APIs.

If I were doing this, I'd have an API endpoint which returns all of the chapter data needed for the UX display in a single request. Then I don't need the loop. The response is all or nothing, and easier to surface in the UX.

The next request would presumably be just for an individual chapter, for all the details of that chapter. Again, no loop.

I know this doesn't answer the question of how to deal with the promises themselves; it's more about how to design the system from the bottom up to not need unnecessarily complicated code.


Eh, he's not really doing the fetch in a loop - he's generating a sequence of promises that are kicking off the fetch right away (during the call to map).

If the number of chapters is less than the sliding window of outstanding requests in the user's browser - the browser will make all the requests basically in parallel. If the number of chapters is very large, they will be batched by the browser automatically - chromium used to be about 6 connections per host, max of 10 total, not sure if those are the current limits.

He's just processing them in a loop as they resolve. The real issue with his code is that the loop isn't the right place to handle a rejection. The right place is right there at the top of the file, in the map call.

The loop doesn't need to care about a failed fetch or json parse, those should be handled in the map, where you can do something meaningful about it. The loop doesn't even have the chapterUrl of the failed call.


Each user is making 6-10 requests to the server in parallel, instead of 1. My point is that I'd design this so as not to DDoS the servers if I got a lot of concurrent users.


Most likely Jake encountered real world cases that really needed to use parallel promises and used fetch() as an example that could fit in a blog.

Maybe in the real world it's file or database reads, off-thread parsing, decompression, etc...


Yes, this is why I wrote:

> I know this doesn't answer the question of how to deal with the promises themselves

I'm trying to bring up a bigger picture here.


that's... not a particularly astute approach.

Generally speaking, you're almost always better off with more light requests than fewer heavy requests.

Assuming this is static content (chapters), there's absolutely zilch you're gaining by fetching the whole block at once instead of chapter by chapter, since you can shove it all behind a CDN anyway.

You're losing a lot of flexibility and responsiveness on slow connections, though.

Not to mention - it may make sense to be doing something like fetching say... 5 chapters at a time, and then fetching the next chapter when the user scrolls to the end of the current chapter (pagination - a sane approach used by anyone who's had to deal with data at scale)

In basically every case though, as long as each chapter is at least a few hundred characters, the extra overhead from the requests is basically negligible, and if they're static - easily cached.


You didn't read what I wrote closely enough. You just fetch the data you need for the list of chapters: in most cases, just something like chapterID, chapterTitle, and maybe chapterShortDescription. This would be easily stored in a db cache, since chapters don't change much.


So you catch it and pass undefined to your chapter handler function? It will surely blow up.

In production code I'm using a `.catch(log.rescueError(msg, returnValue))` helper (or .rescueWarn), which is very helpful. It logs an error of that severity and returns whatever fallback I provide in case of exception.
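
One way such a helper might look (a sketch; the real log object presumably routes to a proper logger):

    const log = {
      rescueError(msg, returnValue) {
        return err => {
          console.error(msg, err); // log at "error" severity
          return returnValue;      // the chain resolves with this fallback
        };
      },
      rescueWarn(msg, returnValue) {
        return err => {
          console.warn(msg, err);
          return returnValue;
        };
      },
    };

    // fetch(url).then(r => r.json()).catch(log.rescueError("chapter failed", null))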


> So you catch it and pass undefined to your chapter handler function?

No, the original rejection is preserved.


Python's asyncio.as_completed has a neat solution to this: it returns placeholder awaitables instead of the values they resolve to. Only when you await a placeholder does it resolve to the value of the earliest completed item remaining in the loop. With this design, you can choose for yourself what to do if it fails.

    for coro in asyncio.as_completed(awaitable_list):
        earliest_result = await coro  # Can be wrapped with try-block
I think the same thing can be achieved with smart use of Promise.race. It's a bit different from the article, which wants to iterate in the same order as the input; that could be added by collecting the results and iterating them again, reinventing Promise.allSettled. I didn't fully understand why OP couldn't use that.
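
A sketch of an as_completed-style helper in JS (asCompleted is a made-up name; note that abandoning the placeholders early would leave a rejected placeholder unhandled):

    function asCompleted(promises) {
      const resolvers = [];
      const rejecters = [];
      const placeholders = promises.map(
        () => new Promise((res, rej) => { resolvers.push(res); rejecters.push(rej); })
      );
      // The k-th placeholder settles like the k-th promise to settle overall.
      let settledCount = 0;
      for (const p of promises) {
        p.then(
          v => resolvers[settledCount++](v),
          e => rejecters[settledCount++](e)
        );
      }
      return placeholders;
    }

    // for (const placeholder of asCompleted(promises)) {
    //   try { const earliest = await placeholder; } // wrap in try, like Python
    //   catch (e) { /* handle this failure */ }
    // }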


This seems like a bug in the unhandled-rejection handler.

IMO it should only trigger for unhandled promises that are garbage collected. That way the given example wouldn’t cause a false positive.


It would. If chapterPromises[0] rejects, then chapterPromises[1] is never handled (because it's redundant), even once it's GC'd.


And this is why we shouldn't allow frontend engineers to get ahead of themselves.



