Hacker News

Keep believing it is augmentation.

The end game is outsourcing: instead of teammates doing the actual programming from the other side of the planet, it will be done from inside the computer.

Sure, the LLMs and agents are rather limited today, just like optimizing compilers were still a distant dream in the 1960s.



And just like optimizing compilers, LLMs emit code that is difficult to verify and that no one really understands, so when the shit hits the fan you have no idea what's going on.


Is it though? Most code that LLMs emit is easier to understand than equivalent code by humans, in my experience, helped by the abundant comments added at every step.

That's not to say the output is correct; there are usually bugs and unnecessary code if the generated logic isn't trivial, but reading it isn't the biggest hurdle.

I think you are referring to the situation where people don't read the generated code at all; in that case it's not really the LLM's fault.


> Most code that LLMs emit is easier to understand than equivalent code by humans, in my experience

Even if this were true, which I strongly disagree with, it doesn't actually matter whether the code is easier to understand if no one bothers to read it.

> I think you are referring to the situation where people don't read the generated code at all; in that case it's not really the LLM's fault

It may not be the LLM's "fault", but the LLM enabled this behavior, and so it is the root cause of the problem.


It's fairly obvious that's their goal, especially given most AI labs' focus on coding in their announcements: it may be augmentation now, but that isn't the end game. Everything else these labs do, while fun, seems like at most a "meme" to most people in relative terms.

Most Copilot-style setups (not just in this domain) are designed to gather data and collect training feedback before full automation or downsizing. If the labs said this outright, they might not have gotten the initial usage from developers that they needed. Even if it is augmentation, it feels to me like other IT roles (BAs, solution engineers, maybe?) are safer than SWEs going forward. Maybe it's because devs have skin in the game, and the fact that the job isn't easy without AI makes it harder for them to see. Respect for SWE as a job has fallen, at least in my anecdotal conversations, mainly due to AI; after all, long-term career prospects are a major factor in career value, social status, and personal goals for most people.

Their end goal is to democratize/commoditize programming with AI as the low-hanging fruit, which by definition reduces its value per unit of output. The sheer amount of discussion on this IMO shows that many people, even if they don't want to admit it, think there is a decent chance they will succeed at this goal.


> Their end goal is to democratize

Stop repeating their bullshit. It was never about democratizing. If it were, they would start teaching everyone how to program, the same way we started teaching everyone to read and write not that long ago.


Many tech companies and training outfits did try that, though, didn't they? There have been boot camps, coding classes in schools, and a whole bunch of other initiatives to get people into the industry. Teaching kids and adults to code has been attempted; the issue, IMO, is that not everyone has the interest and/or aptitude to continue with it. Parts of the industry/job aren't actually easy to teach (note: not all of it), can be quite stressful, and require constantly keeping up; IMO, if you don't love it, you won't stick with it. As demand for software grew, despite the high salaries (particularly in the US) and the training efforts, supply didn't keep up until recently.

In any case, I'm not saying I think they will achieve it, or achieve it soon; I don't have that foresight. I'm just elaborating on their implied goals: they don't state them directly, but reading their announcements about their models, code tools, etc., that's IMO their implied end game. Anthropic recently announced statistics showing that most of their model usage is for coding. Mere augmentation doesn't justify the money VCs, funds, etc. have put into these companies; they are looking for bigger payoffs than that, remembering that many of these AI companies aren't breaking even yet.

I was replying to the parent comment: augmentation and/or copilots don't seem to be their end game/goal. Whether they will actually be successful is another story.


I can say that I am aware of some in-house cases where AI has already taken over the jobs of the translation team, for example.

But let's keep cheering until it does become good enough to come for our developer jobs.



