Hacker News | jmull's comments

Hate is too strong a word, but it’s junk.

> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence.

I'm sure it will be possible, but it may well be very expensive. If it is, why would anyone spend the resources?

AI evolution will certainly follow the money, which is not necessarily the same as the path to AGI.


> or the boilerplate, libraries, build-tools, and refactoring

If your dev group is spending 90% of their time on these... well, you'd probably be right to fire someone. Not most of the developers but whoever put in place a system where so much time is spent on overhead/retrograde activities.

Something that's getting lost in the new, low cost of generating code is that code is a burden, not an asset. There's an ongoing maintenance and complexity cost. LLMs lower the maintenance cost per line, but if you're generating 10x the code you aren't getting ahead. Meanwhile, the cost of unmanaged complexity grows exponentially. LLMs or no, you hit a wall if you don't manage it well.


>There's an ongoing maintenance and complexity cost.

My company has 20 years of accumulated tech debt, and the LLMs have been pretty amazing at helping us dig out from under a lot of that.

You make valid points, but I'm just going to chime in with adding code is not the only thing that these tools are good at.


> LLMs lower maintenance cost

Even the most enthusiastic AI boosters I've read don't seem to agree with this. From what I can tell, LLMs are still mostly useful for building greenfield projects, and they are weak at maintaining brownfield projects.


In my experience, greenfield/brownfield is not the best dichotomy here. I've watched the same tooling generate meaningless slop on a greenfield project and a 10kLoC change (leading to an outage) on an existing project in one person's hands, while in another's it built a fairly complex new project and fixed a long-standing (years) bug with a two-line patch.

And I have more examples where I personally was on both sides of that fence: the difference was my level of understanding of the problem, not the tooling.


> Not most of the developers but whoever put in place a system where so much time is spent on overhead/retrograde activities.

Dude that's everybody in charge. You're young, you build a system, you mold it to shifting customer needs, you strike gold, you assume greater responsibility.

You hire lots of people. A few years go by. Why can't these people get shit done? When I was in their shoes, I was flying, I did what was needed.

Maybe we hired second rate talent. Maybe they're slacking. Maybe we should lay them off, or squeeze them harder, or pray AI will magically improve the outcome.

Come on, look in the mirror.


> its going to take Microsoft a long time to row back

They won't actually move back to a user-focused OS at all. It's nice for them to declare they will, but their culture and business pressures will prevent any kind of sustained effort. (Their users aren't their customers.)


“improve customer tech support”

That’s corporate-speak. They say improve, but it’s perfectly well understood internally to mean drive costs down.

There’s no problem with doing that at the expense of the customer as long as you can get away with it. (Seems like here they were going for a boiling-the-frog approach but moved too quickly.)


They should’ve gone with “Matrix”.

That way they could transition from the VR world (the old hype) to an AI-controlled dystopia (the new hype) with just a cool reveal, no name change needed.


"for many years political manipulation of economic data has screwed things up"

That's quite a claim. A "whopper" one might say.


It seems like some companies may be unaware that not only are they interviewing prospective employees, but candidates are interviewing prospective employers.

I guess if your goal is just to hire desperate people who currently have no better choice (and who will leave as soon as they do), then you can flaunt how little you care about the candidates or the process. But if you're hoping for something better than that, I wouldn't run off as many candidates as possible.

I mean, this is probably a time-saving way to filter out a flood of poor candidates, but you're also going to be filtering out good candidates at a very high rate.


If the goal is to read what actual humans think, it's hard to see how an LLM filter can do anything but obscure and degrade the content.

LLMs, as we know them, express things using the patterns they've been developed to prefer. There's a flattening, genericizing effect built in.

If there are people who find an LLM filter to be an enhancement, they can run everything through their favorite LLM themselves.


I think it's a spectrum:

1. I enter "Describe the C++ language" into an LLM and post the response on HN. This is obviously useless--I might as well just talk to an LLM directly.

2. I enter "Why did Stroustrup allow diamond inheritance? What scenario was he trying to solve?" and then distill the response into my own words so that it's relevant to the specific post. This may or may not be insightful, but it's hardly worse than consulting Google before posting.

3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN. This could be a genuinely novel idea, and the fact that it is summarized by an LLM does not diminish the novelty.

My point is that human+LLM can sometimes be better than human alone, just as human+hammer, human+calculator, human+Wikipedia can be better than human alone. Using a tool doesn't guarantee better results, but claiming that LLMs never help seems silly at this point.


> 3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN

I think where you are getting hung up is the idea of "better results". We as a community don't need to strive for "better results"; we can simply say we want HN to be between people. If you had an LLM generate this hypothetical test language, just tell people about it in your own words. Maybe forcing yourself to go through that exercise is better in the long run for your own understanding.


My example was not great.

But my point is that I read HN partly because people here are insightful in a way I can't get in other places. If LLMs turn out to ultimately be just as insightful, then my incentive to read HN is reduced to just, "read what other people like me are thinking." That's not nothing, but I can get that by just talking with my friends.

Unless, of course, we could get human+LLM insightfulness in HN and then I'd get the best of both worlds.


If someone can't explain something in their own words, then they don't _really_ understand it. The process of taking time to think through a topic and check one's understanding, even if only for oneself and the rubber duck, will reveal mistakes or points of confusion.


Which gets to the core of the issue nicely, I want to go on to HN and talk to people who know things or have thought about things to the degree that they don't need a cheat sheet off to the side to discuss them.


How is it not better, in your third scenario, if you described what you think are the important and interesting aspects of your idea/demo?

And what motivated you to make it -- probably the most interesting thing to readers, and not something an LLM would know.

Believe me, I don't care what an LLM has to say about your thing. I care about what you have to say about your thing.


Never mind that the previous poster’s insight about caches is correct.

Zig has had caching bugs/issues/limitations that could be worked around by clearing the cache. (Has had, and more than likely still has, and will have.)
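For what it's worth, the usual workaround looks something like this. A sketch only: the cache directory names are assumptions that have varied across Zig versions, so check the docs for your version.

```shell
# Remove Zig's project-local build cache so the next build starts fresh.
# Directory name is version-dependent: older Zig used zig-cache/,
# newer versions use .zig-cache/. rm -rf is a no-op if neither exists.
rm -rf .zig-cache zig-cache

# Optionally clear the user-global cache too (assumed default Linux location).
rm -rf "${HOME}/.cache/zig"
```

After clearing, rerun `zig build`; a bug that disappears after this points at the cache rather than at your code.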

