I think it's great. IELR is a straightforward optimization that just makes sense to use when building an LR-family parser.
You can think of LALR(1) as just taking an LR(1) state graph and merging together nodes that are compatible (in that they represent essentially the same parsing state but differ in which "lookahead" tokens are valid). A grammar is considered LALR(1) if combining states in this way still yields a correct parser.
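To make the merge concrete, here's a hand-worked sketch in Python using the classic textbook grammar that is LR(1) but not LALR(1). The item sets are written out by hand for illustration, not computed by a real generator:

```python
from collections import defaultdict

# Classic grammar that is LR(1) but not LALR(1):
#   S -> a E c | a F d | b F c | b E d
#   E -> e
#   F -> e
# After shifting 'e', a canonical LR(1) parser has two distinct states
# with identical core items but different lookaheads:
state_after_a_e = {("E -> e .", "c"), ("F -> e .", "d")}  # reached via 'a e'
state_after_b_e = {("E -> e .", "d"), ("F -> e .", "c")}  # reached via 'b e'

# LALR(1) merges states with identical cores, unioning the lookaheads:
merged = state_after_a_e | state_after_b_e

# Look for reduce/reduce conflicts: two reductions on the same lookahead.
reductions = defaultdict(set)
for item, lookahead in merged:
    reductions[lookahead].add(item)
conflicts = {la for la, items in reductions.items() if len(items) > 1}
print(sorted(conflicts))  # -> ['c', 'd']
```

Each original state was conflict-free on its own, but the merged state can no longer decide whether to reduce E or F on either lookahead, so this grammar is not LALR(1).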
IELR, which derives from David Pager's PGM and lane-tracing "minimal LR" techniques from the 1970s, applies a stricter compatibility test to nodes. This check usually allows the large majority of merges while rejecting the ones that would lead to parsing errors. In this way a more sophisticated grammar, all the way up to full LR(1), can still be processed correctly with a table size much closer to what you get with LALR.
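The stricter test is along the lines of Pager's weak-compatibility condition: merging is allowed only if it cannot introduce a reduce/reduce conflict that neither state had on its own. A rough sketch (the item-set representation is mine, not from the paper):

```python
from itertools import combinations

def weakly_compatible(la1, la2):
    """Sketch of Pager's weak-compatibility test.

    la1, la2: dicts mapping each core item (shared by both states)
    to its lookahead set in state 1 and state 2 respectively.
    """
    for i, j in combinations(la1, 2):
        # Lookaheads "crossing over" between the two states' items...
        crosses = (la1[i] & la2[j]) or (la1[j] & la2[i])
        # ...are only safe if one of the states already had that overlap.
        if crosses and not (la1[i] & la1[j]) and not (la2[i] & la2[j]):
            return False  # merging could create a new reduce/reduce conflict
    return True

# The two problem states from the classic non-LALR(1) grammar:
s1 = {"E -> e .": {"c"}, "F -> e .": {"d"}}
s2 = {"E -> e .": {"d"}, "F -> e .": {"c"}}
print(weakly_compatible(s1, s2))  # -> False: keep these states separate
```

States that fail the test are kept (or split back) apart, which is why the resulting tables stay close to LALR size: splitting only happens where a merge would actually break the parser.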
You essentially get the best of both worlds, and it's tragic that this fairly simple technique is so obscure. I believe the reason for the obscurity is that Pager's PGM and lane-tracing papers are extremely terse, somewhat confusing, and lacking in sufficient detail in certain areas. For example, the PGM paper on p. 256 crucially notes that successors may need to be "regenerated as a distinct state" without further explanation; his protégé Chen finally provided some pseudo-code 34 years later in his 2009 dissertation. (BTW, I have verified that Wisent, Menhir, and others implement PGM correctly, if anyone is looking for actual details.)
Note that some additional time and complexity is required to detect conflicts and regenerate/split nodes, so the benefits of IELR are not entirely free.
More obscurely, IELR suffers the same problem as LALR in that combining states can introduce conflicts between tokens when context-aware lexing [Nawrocki 1991] is being used. (BTW, Tree Sitter uses context-aware lexing, but it's not very common otherwise; yacc/bison don't, AFAIK.) Suddenly tokens become eligible for matching alongside other tokens they would normally never be matched with. Unless your tokens are globally conflict-free, or you have a priority scheme that always resolves conflicts properly (but then you wouldn't need context-aware lexing, would you?), you'll start matching tokens that shouldn't be matched in a particular parsing state. There are ways to avoid those conflicts as well (see PSLR, also from IELR author Joel Denny), but it is a lot more work. And even if you avoid conflicts, invalid input can match tokens that shouldn't be matched, which complicates error reporting: you get a confusing parse error instead of a correct tokenization error. So this is one case where full LR(1) may be preferred over IELR.
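For intuition, here's a toy sketch of that lexing problem (the state names and token names are invented for illustration; think of the JavaScript regex-vs-division ambiguity):

```python
# Context-aware lexing sketch: each parse state declares which tokens
# it will accept, and the lexer only ever tries those.
state_tokens = {
    "expects_regex":    {"REGEX", "IDENT"},   # a '/' here starts a regex
    "expects_operator": {"SLASH", "IDENT"},   # a '/' here means division
}

# If the table builder merges these two states (LALR- or IELR-style),
# the merged state must accept the union of both token sets:
merged_tokens = state_tokens["expects_regex"] | state_tokens["expects_operator"]

# Now REGEX and SLASH are both live candidates on input like "/x/",
# even though neither original state ever had to choose between them.
print(sorted(merged_tokens))  # -> ['IDENT', 'REGEX', 'SLASH']
```

In the separate LR(1) states the lexer never faces the REGEX/SLASH choice; after the merge it does, and no amount of parser-table correctness fixes that on its own.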
But I can't think of a convincing argument for preferring LALR over IELR.