
I wonder how well it performs on Rust code, whose patterns are probably quite different from C++'s. They mention the Fuchsia project, but they seem to have focused on its C++ components, and there is actually more Rust inside Fuchsia now than C++ [0].

I also wonder what this would look like for mainlining. Should the LLVM project depend on TensorFlow now? IIRC TensorFlow itself depends on LLVM, so to avoid a circular dependency, would there have to be an ML-free version of LLVM that TensorFlow depends on, which is then used to build the proper LLVM? Or can inference be converted into plain C, as was done for LPCNet? Lastly, there is the general question of integrating ML models into open source projects. Say the model is merged and the old manual heuristic is deleted. What if Google one day decides they no longer want to maintain the component? Can LLVM maintainers refactor the code around it? Unless Google also shares their training infrastructure, LLVM maintainers can't retrain the model on post-refactor data.

[0]: https://old.reddit.com/r/rust/comments/k9r3s4/fuchsia_lines_...



>Should the LLVM project depend on tensorflow now?

This bit from the article seems relevant:

"The TensorFlow model is embedded with XLA AOT, which converts the model into executable code. This avoids TensorFlow runtime dependency"
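For context, TensorFlow ships a tool for exactly this kind of AOT embedding: `saved_model_cli aot_compile_cpu`, which wraps XLA's tfcompile and emits a standalone object file plus a C++ header. A rough sketch of how that looks (the model path and C++ class name here are made up for illustration; I don't know what the MLGO authors actually used):

```shell
# Compile a SavedModel into a standalone .o and C++ header via XLA AOT,
# so the result can be linked without the TensorFlow runtime.
# (Paths and class name are hypothetical.)
saved_model_cli aot_compile_cpu \
  --dir /path/to/inlining_model \
  --tag_set serve \
  --signature_def_key serving_default \
  --output_prefix inlining_model \
  --cpp_class "mlgo::InliningModel"
```

The generated object file and header can then be linked into the compiler; only a small XLA support runtime is needed rather than all of TensorFlow, which is presumably how they sidestep the circular-dependency problem at build time.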



