I scanned the code and understood what it was doing, but I didn't spend much time on it once I'd seen that it worked.
If I'm writing code for production systems using LLMs, I still review every single line - my personal rule is that I need to be able to explain how it works to someone else before I'm willing to commit it.
This is why I love using DeepSeek's chain-of-reasoning output ... I can actually go through and read what it's 'thinking' to validate whether it's basing its solution on valid facts and assumptions. Either way, thanks for all of your valuable write-ups on these models - I really appreciate them, Simon!
Nota bene - there is a fair amount of research indicating that a model's actual outputs and 'thoughts' do not necessarily align with its chain-of-reasoning output.
You can validate this pretty easily by asking some logic or coding questions: you will likely notice that the final answer is not necessarily the logical conclusion of the end of the thinking; it is sometimes significantly orthogonal to it, or the model returns to reasoning in the middle of the answer.
All that to say - good idea to read it, but stay vigilant on outputs.
That's a good note. I use DeepSeek for early planning of a project because of how valuable its reasoning output can be. It's common that I'll describe my problem and first draft architecture and see something in the output like "Since this has to be mobile optimized..." Then I'll stop generation, edit the original prompt to specify that I don't have to worry about mobile, and run it again.
I think this is the right way to do it. Produce with the LLM, then debug and read every line. Delete lots of it.
Many people fear this approach for production, but it is reasonable compared to someone with a single Coursera course writing production JS code.
Yet we tend to say "the LLM wrote this and that", which implies the model did all the work. In reality it should be understood as a complex heavy-lifting machine that is expected to be operated by a very well-prepared operator.
The fact that I got a Kango and drilled some holes does not make me an engineer, right? And it takes an engineer to sign off on the building, even though it was ArchiCAD doing the math.
This sounds like a recipe for destructive bugs and security vulnerabilities to slip into production.
Reviewing is really hard to do well. Like, on a psychological level. Your brain just starts nodding and humming along, pretending to understand. Humans have to consciously "perform review" to actually review - see, for example, https://en.wikipedia.org/wiki/Pointing_and_calling, checklists in aviation and health care, and Tom Gilb's JPL-inspired "Inspection" spec-review processes.
Even HN gets a steady drip of "look at my vibecoded project" -- "umm, you just leaked your API keys".
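Leaked API keys of that sort are easy to catch mechanically before pushing. As a hypothetical minimal sketch (the pattern names and key formats here are illustrative, not exhaustive - dedicated scanners cover far more cases), a pre-commit check might look like:

```python
import re

# Illustrative patterns only: one well-known key prefix and one generic
# "api_key = '...'" assignment. Real secret scanners use much larger rule sets.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan(text: str) -> list[str]:
    """Return the names of any suspicious patterns found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(scan('API_KEY = "abc123def456ghi789jkl"'))  # flags generic_api_key
```

Running something like this over staged files (or just using an off-the-shelf scanner) catches the embarrassing cases cheaply, without requiring the author to review anything by hand.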
It's just that reviewing doesn't matter for a Space Invaders clone.
Reading other people's (or an LLM's) code is one of the best ways to improve your own coding abilities. Lazy people using LLMs to avoid reading any code is called "vibe coding", and their abilities atrophy no matter who or what wrote the code they refuse to read.
>We hate reading code and will avoid the hassle every time, but that doesn't mean it is easy.
Speak for yourself. I love reading code! It's hard and it takes a lot of energy, but if you hate it, maybe you should find something else to do.
Being a programmer who hates reading code is like being a bus driver who hates looking at the road: dangerous and menacing to the public and your customers.
That's abusive, unacceptable, and not even a complete list!
You can't go after another user like this on HN, regardless of how right you are or feel you are or who you have a problem with. If you keep doing this, we're going to end up banning you, so please stop now.
They said "production systems", not "critical production applications".
Also, the 'if' doesn't negate anything, since they say "I still", meaning the behavior is actively happening or ongoing; they don't use a hypothetical or conditional after "still", as in "I still would".
Off-topic, but Django is really bad and a huge pile of code smell. (Not a Django programmer. I manage them and can compare Django-infected projects to normal projects.)
I wrote a whole lot more about my approach to using LLMs to help write "real" code here: https://simonwillison.net/2025/Mar/11/using-llms-for-code/