
> I think of it more like self driving cars.

The way I think of self-driving cars is the way I think of fusion: perpetually a few years away from a 'real' breakthrough.

There is currently no reason to believe that LLMs cannot acquire the ability to write secure code in the most prevalent use cases. However, this is contingent on the availability of appropriate tooling, likely a Rust-like compiler. Conversely, there is little reason to think that LLMs will become useful tools for validating the security of applications at either the model or implementation level, though they can be useful for detecting quick wins.



Have you ever taken a Waymo? I wish fusion was as far along!


My car can drive itself today.


I, too, own a Tesla. And granted, it can drive itself today in roughly the same sense that we can achieve fusion today.

Edit: Don’t get me wrong btw. I love autopilot. It’s just completely incapable of handling a large number of very common scenarios.


Yeah, there's a massive difference between a system that can handle a limited set of well-defined situations and a system that can handle everything.

I don't know what the current state of self-driving cars is. Do they already understand the difference between a plastic bag blowing onto the street, and a football rolling onto the street? Because that's a massive difference, and understanding that is surprisingly hard. And even if you program them to recognize the ball, what if it's a different toy?



