How does it run faster than find? Can I manually implement that speedup using standard unix tools? I need to run find a lot on many machines where I can't install anything.
You can speed up a grep over many files by fanning the work out with 'xargs' or 'parallel', because searching file contents tends to be the bottleneck there.
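A minimal sketch of that fan-out, assuming GNU or BSD xargs (the `-P` and `-0` flags are not strictly POSIX) and a hypothetical search pattern "TODO":

```shell
# List files with find, then fan them out to up to 4 concurrent
# grep processes. -print0/-0 keeps filenames with spaces intact.
find . -type f -print0 | xargs -0 -P 4 grep -l "TODO"
```

Each grep process searches its own batch of files, so on a multi-core machine the content search runs in parallel even though the directory walk itself does not.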
But for 'find', the bottleneck tends to be the directory traversal itself, and that's hard to speed up from outside the tool: fd parallelizes the traversal internally.
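You can get a rough approximation of parallel traversal with standard tools by splitting the tree at the top level and running one find per subdirectory. This is only a sketch: it assumes GNU/BSD find and xargs (`-mindepth`, `-maxdepth`, `-P`, `-I` are not all POSIX), it misses files sitting directly in the root, and a single huge subdirectory still runs on one core:

```shell
# List top-level directories, then run one find process per
# directory, up to 4 at a time. '*.log' is a placeholder pattern.
find . -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -P 4 -I{} find {} -name '*.log'
```

It helps most on slow or networked filesystems where the traversal is I/O-bound; on a warm local cache the gains are smaller.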
The other reason 'fd' might be faster than a similar 'find' command is that 'fd' respects your gitignore rules and skips hidden files/directories by default, so it simply visits fewer entries. You can approximate that with 'find' by porting your gitignore rules to 'find' filters. You could object that this is comparing apples to oranges, and that's true if the yardstick is equivalent workloads; from the perspective of the user experience, though, it's absolutely a valid comparison.
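A minimal sketch of that porting, assuming two common ignore rules (`.git/` and `node_modules/` are just example directory names, not something fd hardcodes): each ignored directory becomes a `-name X` alternative inside a `-prune` clause, which stops find from descending into it at all.

```shell
# Skip .git and node_modules entirely (prune the subtree),
# otherwise print regular files.
find . \( -name .git -o -name node_modules \) -prune -o -type f -print
```

This captures the main win (not descending into ignored subtrees), but glob-style gitignore rules like `*.o` or negated patterns need additional `-path`/`!` clauses and get tedious fast, which is much of fd's convenience.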