On the algorithm side (no, I don't mean Mamba etc.; you can still use decoder-only transformers with some special attention layers) and the engineering side, there's still at least a 10x improvement possible compared to what TensorRT-LLM can achieve now.
My concern is that this is only possible at scale, so local LLMs are going to be dead in the water.