> Our previous linting setup took 75 minutes to run, so we were fanning it out across 40+ workers in CI. By comparison, oxlint takes around 10 seconds to lint the same codebase on a single worker[…]
So it's in fact 18000 times faster on this embarrassingly parallel problem (but doing less for now).
Do you have some source files that are somehow exempt from bugs and would be a waste of the linter's time?
Probably not, but it's a trick question: if you try to look for exceptions to the rule, you have already wasted so much time that running a linter on all files would be faster.
What if the diff adds a new linter rule, should we only run it on the linter config file?
What if the linter uses more context than a single file: a type-checker, for example, or even just a check that the correct number of arguments (regardless of type) is passed to an imported function, or that the symbol is even callable? Should we run the linter only on the caller's file, or the callee's, when they haven't both changed?
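To make the cross-file case concrete, here's a hypothetical sketch (the `formatPrice` function and the file names are invented for illustration):

```typescript
// Suppose a commit changes a shared utility's signature.
// Before: export function formatPrice(amount: number): string
// After, it also requires a currency:
function formatPrice(amount: number, currency: string): string {
  return `${currency}${amount.toFixed(2)}`;
}

// A caller in a file the commit never touched still does:
//   formatPrice(9.99);   // now "Expected 2 arguments, but got 1"
// Re-linting only the changed file would miss that breakage.
const label = formatPrice(9.99, "$");
```

The caller's file and the callee's file each look fine in isolation; only checking them together surfaces the arity error.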
Run the linter on the whole code base when you change the rules, then, not on every check-in on the off chance a rule changed. Or add some logic so that CI runs it against the whole code base only when the rules changed, and otherwise just on the files relevant to the commit/PR.
Also, ESLint doesn't do type checking. That's TypeScript's job, and apparently TypeScript's runtime isn't an issue.
If a different (unchanged) file depends on the one you changed, you could have changed the API in a way that makes the unchanged file unacceptable to your linter.
Yes, because you lint everything in CI. Otherwise, linter warnings will start creeping into your codebase immediately, and the tool becomes much less useful.
I think if my CI was taking 75 mins to lint I'd look at linting only the files changed since the previous build instead of splitting it across 40+ workers. Or writing a new linter in Rust.
But I'm generally working in a (human & financially) resource-constrained environment.
TypeScript lints are type-aware, so you can't just lint changed files; you have to relint the entire codebase to check whether any type changes have impacted the unchanged code.
I'm not sure if ESLint has this, but there could be cross-file lints (e.g. unused variables). If some file changes, you may need to relint its dependencies and dependent files. This could trickle recursively.
I'm not sure if ESLint does this either, but indices or some form of incremental static analysis sounds like it could help the linter minimize rechecks or reuse previous state.
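The "recursive trickle" above could be sketched as a walk over a reverse-dependency map. This is purely illustrative, not anything ESLint actually ships; `filesToRelint` and the map shape are invented:

```typescript
// Given the changed files and a map from file -> files that import it,
// collect everything that transitively depends on the changes.
function filesToRelint(
  changed: string[],
  importers: Map<string, string[]>,
): Set<string> {
  const out = new Set<string>();
  const stack = [...changed];
  while (stack.length > 0) {
    const file = stack.pop()!;
    if (out.has(file)) continue;
    out.add(file); // relint the file itself...
    for (const dep of importers.get(file) ?? []) {
      stack.push(dep); // ...and everything that imports it
    }
  }
  return out;
}
```

In the worst case (a module imported everywhere changes) this degenerates to the whole repo, which is the scenario discussed below.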
You can tell ESLint about globals in its config. But if you're somehow using variables that aren't declared in the file, that might be an issue you want to look at in general. That's a potential footgun a linter should be balking at.
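For reference, declaring globals in ESLint's flat config looks roughly like this (the `myAnalytics` and `DEBUG_MODE` identifiers are made-up examples):

```js
// eslint.config.js (flat config)
export default [
  {
    languageOptions: {
      globals: {
        // Tell ESLint these identifiers exist at runtime,
        // so rules like no-undef don't flag them.
        myAnalytics: "readonly",
        DEBUG_MODE: "writable",
      },
    },
  },
];
```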
If you have one file that every single file across the repo imports in some way, and you make changes to that file, you might have to run the linter on the entire repo. But again, how likely is this scenario?
If the index or incremental static analysis object was designed well enough, I don't think you would need to lint every file, you would just need to look at files that consume that variable. Maybe you would look at every index?
I'm not sure how well this could scale across (600- 1000?) different lints though. I should look into static analysis a bit more.
As the sibling comment mentions, you may have lint rules that depend on checking for the existence of, or properties of, another file. A popular set of rules comes from https://www.npmjs.com/package/eslint-plugin-import which validates imports, prevents circular dependencies, etc
If a file has been linted and is unchanged since it was linted, there's literally no need to lint it again. Much like if you only need to process one record, you don't query the whole table to get the record.
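The skip-unchanged-files idea can be sketched as a cache keyed by content hash. This is a toy sketch (`makeCachedLinter` is invented), and as the sibling comments point out it's only sound for rules that look at a single file in isolation:

```typescript
import { createHash } from "node:crypto";

type LintFn = (source: string) => string[]; // returns warnings

// Wrap a linter so identical content is never linted twice.
function makeCachedLinter(lint: LintFn): LintFn {
  const cache = new Map<string, string[]>(); // content hash -> warnings
  return (source: string): string[] => {
    const key = createHash("sha256").update(source).digest("hex");
    const hit = cache.get(key);
    if (hit !== undefined) return hit; // unchanged: reuse prior result
    const warnings = lint(source);
    cache.set(key, warnings);
    return warnings;
  };
}
```

Cross-file rules break this assumption, which is the crux of the disagreement in this thread.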
Disagree on ESLint vs TypeScript. ESLint's and TypeScript's jobs should have minimal overlap.
ESLint's primary job is linting. It should be finding footguns and code style issues: things that are absolutely valid in the language but could lead to potential issues. Because of that, it's totally valid that you're not finding as much value in it. It depends on the rules you enable, etc. And yeah, it can feel super nitpicky when it's yelling at you for not having a radix in parseInt().
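The parseInt nitpick exists for a reason, though; the classic footgun:

```typescript
// parseInt takes (string, radix). Array.prototype.map passes
// (value, index), so the index silently becomes the radix:
const result = ["10", "10", "10"].map(parseInt);
// result[0] = parseInt("10", 0) -> 10  (radix 0 means "auto-detect")
// result[1] = parseInt("10", 1) -> NaN (radix 1 is invalid)
// result[2] = parseInt("10", 2) -> 2   (binary)

// Passing the radix explicitly avoids the surprise:
const fixed = ["10", "10", "10"].map((s) => parseInt(s, 10)); // [10, 10, 10]
```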
TypeScript's 'compile' step, or whatever, is doing type checking and making sure your code is valid. If you're using bare JS, your IDE should be doing this job, not ESLint.
(but yes, anything more than a few minutes to lint even a large code base is insane.)
I think the typescript-eslint plugin in particular has some high value eslint rules that complement TypeScript.
For example, the no-floating-promise[0] rule catches some easily-made mistakes involving promises in a way that TypeScript doesn't on its own.
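The kind of mistake the rule catches looks like this (a toy sketch; the function names are invented):

```typescript
const order: string[] = [];

async function saveToDb(): Promise<void> {
  await Promise.resolve(); // stand-in for real async I/O
  order.push("saved");
}

function handleRequest(): void {
  // Floating promise: no `await` and no `.catch`, so the handler
  // "finishes" before the save does, and any rejection is silently
  // lost. TypeScript compiles this happily; the lint rule flags it.
  saveToDb();
  order.push("responded");
}

handleRequest();
```

Here `"responded"` is recorded before the save completes, exactly the kind of ordering surprise that's hard to track down at runtime.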
Other rules can be used to increase type safety further. There are various rules relating to `any`, like no-unsafe-argument[1], which can be helpful to prevent such types sneaking into your code without realising it; TS has `noImplicitAny`, but it'll still let you run something like `JSON.parse()` and pass the resulting any-typed value around without checking it.
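For example (a minimal sketch; the `User` shape is invented):

```typescript
interface User {
  name: string;
  age: number;
}

// JSON.parse is typed as returning `any`, so this assignment
// type-checks even with noImplicitAny enabled; nothing verifies
// the shape at runtime.
const user: User = JSON.parse('{"name":"Ada"}');

// TypeScript believes `age` is a number; at runtime it's undefined.
const next = user.age + 1; // NaN
```

Rules like no-unsafe-argument are aimed at stopping that `any` from flowing silently into typed code.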
> For example, the no-floating-promise[0] rule catches some easily-made mistakes involving promises in a way that TypeScript doesn't on its own.
Is there a fast linter that checks for this? I find this error easy to make as well, and it usually causes weird runtime behaviour that's hard to track down.
I get a ton of value from ESLint with TypeScript, and in particular from @typescript-eslint. And yes, 75 minutes is absolutely bonkers. It would have me rethinking a lot of things well short of that time. But automated quality checks wouldn’t be anywhere near the top of that rethinking list. And partly, but not only, because of irrelevant nitpicks. Having humans do those nitpicks is vastly worse in time elapsed, and likely in compute time in many scenarios as well. The more human time is spent on the things linters help with, the more that time is not spent on reviewing and ensuring correctness, performance, design, maintainability, user- and business-implications, etc.
If you leave dependencies off a hook by mistake, you're creating bugs and unintended behaviour, and making your code a nightmare to modify. That's always going to be worth linting for. Can you over-lint a codebase? Sure, I guess so; if your lint stage is taking hours it probably needs optimization. But your assertion that type checking is enough is incorrect.
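A plain-closure analogue of the stale-deps bug (no React needed; `makeRender` is an invented stand-in for a component render capturing state):

```typescript
// Each "render" captures the state it saw at the time.
function makeRender(count: number): () => number {
  return () => count;
}

const firstRender = makeRender(0);  // like an effect capturing count = 0
const secondRender = makeRender(1); // state has since updated to 1

// An effect that never re-ran (because the dep was omitted)
// still holds the old closure and sees the stale value:
const stale = firstRender();  // 0, not the current 1
const fresh = secondRender(); // 1
```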
Having to specify every single closure in dependencies does nothing. Not every dependency has to be specified, and not specifying a dependency is not always a bug.
Honestly, if not exhaustively putting some closures in your useEffect deps list is your idea of a "maintenance nightmare", then maybe you should stay away from real production codebases? There are plenty of hairier mistakes and patterns out there than that.