
It was easy to understand what the code was instructing the computer to do. It was harder to understand what that meant, why it was happening, and how to change it.

A program to calculate payroll might be easy to understand, but unless you know enough about finance and tax law, you can't successfully modify it. Same with an audio processing pipeline: you know it's doing something with Fourier transforms, because that's what the variable names say, but tweak those numbers and you'll probably destroy the sound quality. Or a pseudo-random number generator: modify it without understanding how it works and, even if the change looks perfectly reasonable, you might completely break it. (See https://roadrunnerwmc.github.io/blog/2020/05/08/nsmb-rng.htm..., or https://redirect.invidious.io/watch?v=NUPpvoFdiUQ if you want a few more clips.)
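
To make the PRNG example concrete, here's a minimal sketch of a linear congruential generator in Python (the constants are the well-known Numerical Recipes ones, not anything from the linked posts): the code is trivially readable, yet it only works because the constants satisfy the Hull-Dobell conditions, and "nicer-looking" replacements quietly ruin the output.

    # Linear congruential generator: x -> (a*x + c) mod m.
    # These constants (from Numerical Recipes) give the full period of 2**32
    # because they satisfy the Hull-Dobell conditions: c is odd, a % 4 == 1.
    A, C, M = 1664525, 1013904223, 2**32

    def lcg(seed, a=A, c=C, m=M):
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    rng = lcg(12345)
    print([next(rng) % 6 + 1 for _ in range(5)])  # five pseudo-random die rolls

    # Swap in a "rounder" multiplier like a=1000000 (which is even) and the
    # low bit of every output gets stuck, even though the code still reads fine.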

I've worked with codebases written by people with varying skillsets, and the only occasions where I've been confused by the subtext have been when the code was plagiarised.



Marcus Müller gives a good explanation in a comment (CC BY-SA 4.0) on Stack Exchange: https://dsp.stackexchange.com/posts/comments/204371

> [The] problem is that the sole medium of transport here for the intent of what the user wanted the language model to write and what we see is the output of the language model. And that in itself is a bit of a problem: had we hand-written code, we could look at what it tries to do; it would have suggestive names, maybe even comments, stemming from the user's original intent, and not from an LLM's interpretation of what the user told it was their intent. Basically, LLMs are intent-obfuscating machines for engineering problems :)
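
A toy contrast (hypothetical names, not from the linked thread) of what that loss of intent looks like in practice: both functions below compute the same value, but only the first preserves the "why".

    # Hand-written: the names and the comment carry the author's intent.
    def net_pay(gross_pay, tax_rate):
        # Withhold income tax before anything else touches the payslip.
        return gross_pay * (1 - tax_rate)

    # The same arithmetic with the intent stripped out: correct, but the
    # "why" now has to be reverse-engineered by the reader.
    def process(a, b):
        return a * (1 - b)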



