
On my system it uses twice as much CPU as plain old ls in a directory with just 13k files. To recursively list a directory with 500k leaf files, lla needs > 10x as much CPU. Apparently it is both slower and scales worse.


On the latest release it can list a tree 100 directories deep with over 100k files in less than 100ms, and in about 40ms when cached.


Will definitely prioritize optimization in the next releases. Planning to benchmark against ls on various systems and file counts to get this properly sorted.
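For anyone wanting to reproduce the numbers being discussed, here is a rough sketch of such a comparison. It builds a small synthetic tree and times both tools; the "user" + "sys" lines of `time` output are the CPU figures at issue, not wall-clock. The `-R` flag for lla is an assumption — check `lla --help` for the actual recursive option.

```shell
# Build a throwaway tree to list (1000 files; scale up for real tests).
bench=$(mktemp -d)
mkdir -p "$bench/dir"
for i in $(seq 1 1000); do : > "$bench/dir/f$i"; done

# Sanity check: how many entries did we create?
n=$(ls "$bench/dir" | wc -l)
echo "files created: $n"

# CPU time of plain ls; redirect output so terminal rendering
# doesn't dominate the measurement.
time ls -R "$bench" > /dev/null

# Same for lla, if installed. Recursive flag assumed, not verified.
if command -v lla > /dev/null; then
  time lla -R "$bench" > /dev/null
fi

rm -rf "$bench"
```

For more rigorous numbers, a tool like hyperfine (with `--warmup` runs to control for page-cache effects) would separate the cached and uncached cases people are quoting above.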


Not trying to “gotcha” you, but I would imagine that 10x the CPU of ls is still very little, or am I wrong?


In the case of the 500k tree, `lla` needs 2.5 seconds, so it's pretty substantial.


Is listing a lot of files really CPU-limited? Isn’t the problem IO speed?


What exactly makes ls faster?


But it’s written in rust so it’s super fast. Did you take that into account when running your benchmarks? /s



