Going just from published research, Lilian Weng's focus seems to be LLM agents, safety, and alignment: how models are used, guided, and evaluated rather than how they're built.
It seems Horace He focuses on deep learning systems and compiler optimization, improving the performance of frameworks like PyTorch.
While both are clearly highly capable, and may well be able to work in other areas, judging again only from published papers, neither appears to have published work on core LLM architectures or foundational model training, the kind of work that could drive a scientific advance in the performance of current models.
Their contributions seem to center on usability and efficiency, not on the underlying design or scaling of modern LLMs.
If that is the core team, I would worry whether they have researchers capable of producing a breakthrough worthy of the billions committed. But maybe that is why they are still hiring?
I was referring more concretely to the level of talent on the engineering team, for example Lilian Weng and Horace He.
Horace can probably personally produce $50M of revenue per year.