
The main problems with software supply chain security are that developers using libraries don't read their code and that that code can be changed later by the author. Neither of those is a problem with "AI"-generated code - it lives right in your source files, and you'd have to be actively avoiding reading it to miss the kinds of critical issues you're already familiar with.


Play this out to the end: instead of having dependencies, you have a giant blob of machine-written code that nobody understands. This is the same problem, just with different attack vectors. Instead of trying to get a vulnerability into a popular package, attackers will try to get one into the output of common prompts.

In both cases the problem is the same, and hearkens back to Ken Thompson's Reflections on Trusting Trust. The total amount of code necessary to implement a useful system is far too large for anyone to fully understand and audit.


This is unfortunate, though. It means that repetition/reimplementation (without reuse) is really the only guaranteed way of dealing with supply chain security. Other techniques like sandboxing can help, but they are not a panacea here.


I think it's just like the physical supply chain. Everyone will pick some point on the continuum between vulnerability and reimplementation based on their individual needs.

But I think it should be clear that "well, we had a black-box AI make them" is not going to be a satisfying answer for militaries trying to remove hostile powers from their electronics supply chains. It's no different with software.


Yeah - I think there needs to be substantially more effort on AI safety/comprehensibility before we progress much further. But knowing history, it seems likely we'll first reach a point where blind use of AI results in significant financial and/or human losses, and only then will we start applying real care in its application.


Yeah this is basically my perspective as well. I think the long future of this could be pretty great, but I think the period between now and that future may be pretty choppy.


> have to be actively avoiding reading it

A question of volume, surely? It might be OK for snippets, but once you've added a million lines of AI code to your codebase, when are you going to get around to reading it?

As with libraries, you hit a button and add thousands or millions of LOC. If it appears to work, are you going to read it?


Reading the AI code and understanding it would be a harder task than writing it yourself, in my opinion. And that will only be more true over time if it becomes more common to have prompt writing skills than code writing skills.


I’d even argue the next logical step would be FPGA-type programming where there isn’t really a human-readable program anymore, but just a model loaded into hardware.


Yeah. I think AI generated code that humans are supposedly reviewing seems like an unstable equilibrium, which will end up swinging one direction or the other.


Algebraic effects could help a lot. They let you quickly set aside giant swaths of pure code, or even impure code that's clearly harmless because its effects are limited to, say, graphics or audio output, and focus your review on the operations that touch sensitive things like the file system, the network, etc.
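
As a rough illustration (the effect names, the `audited` wrapper, and its deny-the-network policy are all made up for this sketch, not any real API), here's what that can look like with OCaml 5's effect handlers:

    (* Sensitive operations are declared as effects. Ordinary code can only
       request them with `perform`; it cannot do them directly. *)
    open Effect
    open Effect.Deep

    type _ Effect.t +=
      | Read_file : string -> string Effect.t   (* sensitive: file system *)
      | Http_get  : string -> string Effect.t   (* sensitive: network *)

    (* "Business logic": can ask for a file read, but can't perform I/O itself. *)
    let load_config () : string =
      perform (Read_file "config.json")

    (* The audit surface: one handler decides what each sensitive request may do. *)
    let audited (f : unit -> 'a) : 'a =
      try_with f ()
        { effc = fun (type c) (eff : c Effect.t) ->
            match eff with
            | Read_file path ->
                Some (fun (k : (c, _) continuation) ->
                  Printf.printf "AUDIT: file read %s\n" path;
                  continue k (In_channel.with_open_text path In_channel.input_all))
            | Http_get url ->
                Some (fun (k : (c, _) continuation) ->
                  (* This sketch simply denies network access. *)
                  Printf.printf "AUDIT: blocked HTTP GET %s\n" url;
                  discontinue k (Failure ("network access denied: " ^ url)))
            | _ -> None }

    let () = print_endline (audited load_config)

The nice property for review is that `load_config` can't touch the file system or the network on its own; every sensitive action has to flow through a handler like `audited`, so the effect declarations and the handlers are the small surface you actually need to read carefully.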


Yes! I'm looking forward to trying these generative code tools with algebraic effects and similar.



