Yes. Nanite is very clever. What makes it go is a mesh representation that can be shown at a huge range of levels of detail. Much of the work is moved to asset preprocessing, so the actual rendering is simplified. Roughly as many triangles get drawn as there are pixels on screen, regardless of scene complexity. If a triangle covers much more than a pixel, that part of the mesh switches to a higher-detail LOD; if triangles are much smaller than a pixel, a coarser LOD is used instead. That reduces draw cost from O(scene triangles) to O(screen pixels), which is a constant for a fixed resolution. So draw time stays roughly constant regardless of scene complexity.
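Concretely, the LOD pick boils down to a screen-space error test. Here's a minimal sketch of that rule (my own illustration, not Epic's code; the names, error values, and LOD chain are made up): project each level's world-space geometric error into pixels and take the coarsest level whose error would be invisible at the current distance.

    // Sketch of screen-space-error LOD selection (hypothetical, not Epic's code).
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Lod {
        float geometricError;  // world-space error vs. the full-detail mesh
    };

    // Projected size in pixels of a world-space length at distance 'dist',
    // for a camera with vertical FOV 'fovY' and a viewport 'screenH' pixels tall.
    float ProjectToPixels(float worldSize, float dist, float fovY, float screenH) {
        return worldSize * screenH / (2.0f * dist * std::tan(fovY * 0.5f));
    }

    // 'lods' is ordered coarse (large error) to fine (small error).
    // Return the first (coarsest) level whose projected error is sub-pixel.
    int SelectLod(const std::vector<Lod>& lods, float dist, float fovY, float screenH) {
        for (int i = 0; i < (int)lods.size(); ++i) {
            float errPx = ProjectToPixels(lods[i].geometricError, dist, fovY, screenH);
            if (errPx < 1.0f) return i;  // error smaller than a pixel: good enough
        }
        return (int)lods.size() - 1;     // nothing is sub-pixel: use the finest LOD
    }

    int main() {
        // Made-up LOD chain for one patch of a mesh, coarse to fine.
        std::vector<Lod> lods = { {0.50f}, {0.10f}, {0.02f}, {0.004f} };
        const float fovY = 1.0472f;  // ~60 degree vertical FOV
        const float dists[] = { 2.0f, 10.0f, 50.0f, 250.0f };
        for (float dist : dists)
            std::printf("distance %6.1f -> LOD %d\n",
                        dist, SelectLod(lods, dist, fovY, 1080.0f));
        return 0;
    }

Nanite applies this kind of test per cluster over a hierarchy of clusters rather than per whole mesh, which is what lets different parts of the same object sit at different detail levels.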
Watch the SIGGRAPH video to experience the progression of reactions: that's impossible - oh, I kind of see how that works - one-pixel triangles? - how do they get that mesh representation set up right? - oh, graph theory - that data format has to be a pain to generate with all those constraints - they're rasterizing in software, in compute shaders? - that's all that needs to be done at render time? - GPUs need to be redesigned to be a better match for this - how do they stream this stuff? - that compression scheme is clever.
Nanite really blurs the line between geometry and texture - in a sense it’s a shader that uses triangle mesh data as if it were a texture source.
This SIGGRAPH session will expand your mind: https://m.youtube.com/watch?v=eviSykqSUUw