
zsh is still crazy to me. I use bash for everything. I have no idea what I’m missing out on.

FWIW, I still use bash as well. Nothing against zsh per se; it's just that I know bash, bash works, and there's no particular pain I experience using bash that would obviously be solved by switching. And when you factor in anticipated switching costs, I haven't found any compelling reason to spend significant time on zsh so far.

Maybe one day though.


Yes, I'll throw my hat into this group too. Bash is fine.

YMMV, but I have found using zsh too high-friction to be helpful. Sure, theoretically zsh living in a bash world (let's face it, all scripts are bash) is completely fine, but reality seems to differ. Copied a one-liner from shell history into your script? Crash. Use arrays? Weird bugs. Use shell builtins? Whoa, unexpected interactivity! Etc...

Bash is absolutely fine as a default shell. As an added benefit, you don't feel helpless once logged in to a container or server.


I've been using it for the last 6 or 7 years and I can only remember one specific feature I use a lot: "unset HISTFILE" to disable history when I need to run commands with passwords.
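A minimal sketch of that workflow, for anyone curious (the mysql line is a made-up stand-in for any password-bearing command; this assumes HISTFILE gets set in your .zshrc as usual):

  unset HISTFILE                 # zsh stops writing this session's history to disk
  mysql -u root -p's3cret' mydb  # hypothetical command with a secret in it
  exec zsh                       # fresh shell, history saving restored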

Other than that, oh-my-zsh with git, systemd, and fzf plugins. Saves a lot of typing.

The main selling point for me is how easy it is to setup.


A space before the command will have the same effect.
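Worth noting that isn't on by default in a stock shell; you need the relevant history option enabled (oh-my-zsh turns it on for you, if I remember right):

  # zsh, in ~/.zshrc
  setopt HIST_IGNORE_SPACE

  # bash, in ~/.bashrc
  HISTCONTROL=ignorespace   # or ignoreboth, to also skip duplicates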

Just out of curiosity, what sort of typing do those plugins save in comparison to doing it in bash? Can you give some examples?

The git and systemd ones create several aliases for frequent commands:

  git commit                 -> gc
  git status --short -b      -> gsb
  git checkout               -> gco
  systemctl --user restart   -> scu-restart

Nothing that you couldn't come up with yourself, but I've been using them for so long that they've become a standard for me.

The fzf plugin enables a fuzzy finder when you hit ctrl+r or ctrl+t. You need fzf installed.
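If anyone wants roughly the same setup in plain bash, it's just aliases plus fzf's stock keybindings. A sketch (the fzf script path varies by distro, so adjust accordingly):

  # ~/.bashrc
  alias gc='git commit'
  alias gsb='git status --short -b'
  alias gco='git checkout'
  alias scu-restart='systemctl --user restart'

  # enables ctrl+r / ctrl+t fuzzy search; on newer fzf
  # versions you can instead use: eval "$(fzf --bash)"
  source /usr/share/fzf/key-bindings.bash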


I remember when I switched to zsh solely for SUNKEYBOARDHACK.

Prefixing a space on that command will keep it out of history.

I don't think it is crazy, but I know and love the bash quirks. I've got permanent history set up thanks to Eli Bendersky [1], which I know zsh already has a solution for. But what annoys me with zsh is some of the ways it tab-completes when navigating the filesystem, and that it doesn't by default allow comments on the command line, e.g. '# github api key blahblahblah', which I can then pull up later using phgrep.
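(I gather both are fixable with a setopt or two, if I ever give zsh another go; a sketch, assuming a stock zsh:)

  # ~/.zshrc
  setopt interactive_comments          # allow '# github api key ...' at the prompt
  setopt no_auto_menu bash_auto_list   # tab completion closer to bash's behavior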

Slight pain on a Mac to get the latest version and use it as the terminal shell, but it gets easier every time I work on a fresh Mac.

[1]: https://eli.thegreenplace.net/2013/06/11/keeping-persistent-...


If you want the vi-style history editing you're used to from 20 years of bash, it's subtly different in a manner that makes it insanity-inducing. If you use the traditional emacs-like editing, it's much the same.

Oh that’s good. I’m an emacs guy. I don’t like change.

I’m a hobbyist math guy (with a math degree) and LLMs can at least talk a little talk or entertain random attempts at proofs I make. In general they rebuke my more wild attempts, and will lead me to well-trodden answers for solved problems. I generally enjoy (as a hobby) finding fun or surprising solutions to basic problems more than solving novel maths, so LLMs are fun for me.

I’m strictly talking about “Agentic” coding here:

They are not a silver bullet or truly “you don’t need to know how to code anymore” tools. I’ve done a ton of work with Claude Code this year. I’ve gone from a “maybe one ticket a week” tier React developer to someone who’s shipped entire new frontend feature sets, while also managing a team. I’ve used LLMs to prototype these features rapidly, tear down the barrier to entry on a lot of simple problems that are historically too big to be a single-dev item, and clear out the backlog of “nice to haves” that compete with the real bread and butter of my business. This prototyping and “good enough” development has been massively impactful in my small org, where the hard problems come from complex interactions between distributed systems, monitoring across services, and lots of low-level machine traffic. LLMs let me solve easy problems and spend my most productive hours working with people to break down the hard problems into easy problems that I can solve later or pass off to someone on my team.

I’ve also used LLMs to get into other people’s codebases, refactor ancient tech debt, and shore up test suites from years ago that are filled with garbage and copy/paste. On testing alone, LLMs are super valuable for throwing edge cases at your code and seeing what you assumed vs. what an entropy machine would throw at it.

LLMs absolutely are not a 10x improvement in productivity on their own. They 100% cannot solve some problems in a sensible, tractable way, and they frequently do stupid things that waste time and would ruin a poor developer’s attempts at software engineering. However, they absolutely also lower the barrier to entry and dethrone “pure single tech” (i.e. backend only, frontend only, “I don’t know Kubernetes”, or other limited-scope) software engineers who’ve previously benefited from super-specialized knowledge guarding their place in the business.

Software as a discipline has shifted so far from “build functional, safe systems that solve problems” to “I make 200k bike-shedding JIRA tickets that require an army of product people to come up with and manage” that LLMs can be valuable if only for their ability to compress roles and give people with a sense of ownership the tools they need to operate like a whole team would have 10 years ago.


> However, they absolutely also lower the barrier to entry and dethrone “pure single tech” (i.e. backend only, frontend only, “I don’t know Kubernetes”, or other limited-scope) software engineers who’ve previously benefited from super-specialized knowledge guarding their place in the business.

This argument gets repeated frequently, but to me it seems to be missing a final, actionable conclusion.

If one "doesn't know Kubernetes", what exactly are they supposed to do now, having LLM at hand, in a professional setting? They still "can't" asses the quality of the output, after all. They can't just ask the model, as they can't know if the answer is not misleading.

Assuming we are not expecting people to operate with implicit delegation of responsibility to the LLM (something that is ultimately not possible anyway - taking blame is a privilege humans will keep for the foreseeable future), I guess the argument in the form above collapses to "it's easier to learn new things now"?

But this does not eliminate (or reduce) the need for specialization of knowledge on the employee side, and there is only so much you can specialize in.

The bottleneck may have shifted right somewhat (from the time/effort of the learning stage to the cognition and memory limits of an individual), but one could argue the output on the other side of the funnel (of learn -> understand -> operate -> take-responsibility-for) hasn't widened that much.


> If one "doesn't know Kubernetes", what exactly are they supposed to do now, having LLM at hand, in a professional setting? They still "can't" asses the quality of the output, after all. They can't just ask the model, as they can't know if the answer is not misleading.

This is the fundamental problem that all these cowboy devs do not even consider. They talk about churning out huge amounts of code as if it were an intrinsically good thing. Reminds me of those awful VB6 desktop apps people kept churning out. VB6 sure made tons of people N× more productive, but it also led to loads of legacy systems that no one wanted to touch because they were built by people who didn't know what they were doing. LLMs-for-code are another tool in the same category.


>They still "can't" assess the quality of the output, after all. They can't just ask the model, as they can't know whether the answer is misleading.

Wasn't this a problem before AI? If I took a book or online tutorial and followed it, could I be sure it was teaching me the right thing? I would need to make sure I understood it, that it made sense, that it worked when I changed things around, and I would need to combine multiple sources. That still needs to be done. You can ask the model, and you'll have to judge the answer, same as if you asked another human. You have to make sure you are in a realm where you are learning, but aren't so far out that you can easily be misled. You do need to test out explanations and seek multiple sources, of which AI is only one.

An AI can hallucinate and just make things up, but the chance that different sessions with different AIs produce the same hallucinations, consistently building on each other, is small enough not to be worth worrying about.


If you don’t know k8s, or any tech really, you can RTFM, you can generate or apply some premade manifests, you can feed the errors into the LLM and ask about them, you can google the error message; you can do a lot of things. Often, in the “real world” of software engineering, you learn by having zero idea of how to do something to start with, and you gradually come up with ideas from screwing around with a particular tool or prototyping a solution and seeing how well it works.

I agree that some of the above basically amounts to: it’s easier to learn new things. Which itself might sound ho-hum, but it really is a fundamental responsibility of software engineers to learn new things, understand new and complex problems, and learn how to do things correctly and repeatably. LLMs unquestionably help with this, even with their tendency to hallucinate: usually proof by contradiction (or the failure of an over-confident chaos machine) is even better than just having a thing that spits out perfect solutions without the operator needing to understand them.

However, I will say that there is a very large gulf between learning how to reason about complex systems or code and learning how to use the entropy machine to produce nominally acceptable work. Pure reliance and delegation of responsibility to the AI will torpedo a lot of projects that a good engineer could solve, and no amount of lines of code makes up for a poorly conceived product or a brittle implementation that the LLM later stumbles over. Good engineering principles are more important than ever, and the developer has to force the LLM to conform to those.

There are many things to question about agentic coding: whether it’s truly cost/effort effective, whether it saves time, whether it makes you worse at problem solving by handing you facile half-solutions that wither in the face of the chaos of the real world, etc. But they clearly aren’t a technology which “doesn’t do ANYTHING useful”, as some HN posters claim.


I don’t think the conclusion is right. Your org might still require enough React knowledge to keep you gainfully employed as a pure React dev, but if all you did was change some forms, that is now something pretty much anyone can do. The value of good FE architecture has increased, if anything, since you will be adding code quicker. Making sure the LLM doesn’t stupidly couple stuff together is quite important for long-term success.

It really depends on whether coding agents are closer to a "compiler" or not. Very few amongst us verify assembly code. If the program runs and does the thing, we just assume it did the right thing.

> someone who’s shipped entire new frontend feature sets, while also managing a team. I’ve used LLM to prototype these features rapidly and tear down the barrier to entry on a lot of simple problems that are historically too big to be a single-dev item, and clear out the backlog of “nice to haves” that compete with the real meat and bread of my business. This prototyping and “good enough” development has been massively impactful in my small org

Has any senior React dev code-reviewed your work? I would be very interested to see what they have to say about the quality of your code. It's a bit like using LLMs to medically self-diagnose and claiming it works because you are healthy.

Ironically enough, it does seem that the only workforce AIs will be shrinking is devs themselves. I guess in 2025, everyone can finally code.


That's a solid answer, I like it, thanks!

You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working.


Studying gibberish doesn't teach you anything. If you were cargo culting shit before AI you weren't learning anything then either.


Necessarily, LLM output that works isn't gibberish.

The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT, even though back then you might have had to repeatedly say "continue" because it would stop in the middle of writing a function.


  Necessarily, LLM output that works isn't gibberish.
Hardly. Poorly conjured up code can still work.


"Gibberish" code is necessary code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish

Especially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from it; cf. how paper was inspired by watching wasps at work.

Even the abject failures can be interesting, though I find them more helpful for forcing my writing to be easier to understand.


It's not gibberish. More than that, LLMs frequently write comments (some are fluff but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, stuff like that, plus looking at the reasoning while it's happening provides more clues.

Also, it's actually fun watching LLMs debug. They're reasonably similar to devs while investigating, but they have a data bank the size of the internet, so they can pull hints that sometimes surprise even experienced devs.

I think hard-earned knowledge coming from actual coding is still useful to stay sharp, but it might turn out the balance is something like 25% handmade / 75% LLM-made.


  they have a data bank the size of the internet so they can
  pull hints that sometimes surprise even experienced devs.
That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed the resources that humans would otherwise use to find answers." I just discovered another victim: the Renesas forums. Cloudflare is blocking me from accessing the site completely, the only site where this has ever happened to me. But I'm glad you're able to have your fun.

  it might turn out the balance is something like 25% handmade - 75% LLM made.
Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software.


> they've stolen a mountain of information

In law, training is not itself theft. Pirating books for any reason, including training, is still a copyright violation, but the judges ruled specifically that training on lawfully obtained data was not itself an offence.

Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone internet is legally not theft. (And indeed it would struggle to be, given that all search engines have long been doing just that.)

> As the arms race continues AI DDoS bots will have less and less recent "training" material

My experience as a human is that humans keep re-inventing the wheel, and if they instead re-read the solutions from even just 5 years earlier (or 10, or 15, or 20…) we'd have simpler code and tools that did all we wanted already.

For example, "making a UI" peaked sometime between the late 90s and mid 2010s with WYSIWYG tools like Visual Basic (and the mac equivalent now known as Xojo) and Dreamweaver, and then in the final part of that a few good years where Interface Builder finally wasn't sucking on Xcode. And then everyone on the web went for React and Apple made SwiftUI with a preview mode that kept crashing.

If LLMs had come before reactive UI, we'd have non-reactive alternatives that would probably suck less than all the weird things I keep seeing from reactive UIs.


> Cloudflare has to block so many more bots now precisely because crawling the public, free-to-everyone internet is legally not theft.

That is simply not true. Freely available on the web doesn't mean it's in the Public Domain. The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well. Otherwise, the recent Spotify dump by Anna's Archive would be legal as well.

It all depends on the license the thing is released under, chosen by the person who made it freely accessible on the web. This license is still very emphatically a legally binding document that restricts what someone can do with it.

For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period.


> Freely available on the web doesn't mean it's in the Public Domain.

Doesn't need to be.

> The "lawfully obtained" part of your argument is patently untrue. You can legally obtain something, but that doesn't mean any use of it is automatically legal as well.

I didn't say "any" use, I said this specific use. Here's the quote from the judge who decided this:

  5. OVERALL ANALYSIS.
  After the four factors and any others deemed relevant are “explored, [ ] the results [are] weighed together, in light of the purposes of copyright.” Campbell, 510 U.S. at 578. The copies used to train specific LLMs were justified as a fair use. Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.
- https://storage.courtlistener.com/recap/gov.uscourts.cand.43...

> Otherwise, the recent Spotify dump by Anna's Archive would be legal as well.

I specifically said copyright infringement was separate. Because, guess what, so did the judge, in the next paragraph but one after the quote I just gave you.

> For instance, since the advent of LLM crawling, I've added the "No Derivatives" clause to the CC license of anything new I publish to the web. It's still freely accessible, can be shared on, etc., but it explicitly prohibits using it for training ML models. I even add an additional clause to that effect, should the legal interpretation of CC-ND ever change. In short, anyone training an LLM on my content is infringing my rights, period.

It will be interesting to see if that holds up in future court cases. I wouldn't bank on it if I was you.


> That's a polite way of phrasing "they've stolen a mountain of information and overwhelmed resources that humans would use to other find answers."

Yes, but I can't stop them, can you?

> But I'm glad you're able to have your fun.

Unfortunately I have to be practical.

> Doubtful. As the arms race continues AI DDoS bots will have less and less recent "training" material. Not a day goes by that I don't discover another site employing anti-AI bot software.

Almost all these BigCos are using their internal code bases as material for their own LLMs. They're also increasingly instructing their devs to code primarily using LLMs.

The hope that they'll run out of relevant material is slim.

Oh, and at this point it's less about the core/kernel/LLMs than about building ol'-fashioned procedural tooling, aka code, around the LLM, so that it can just REPL like a human. Turns out a lot of regular coding and debugging is what a machine would do: READ-EVAL-PRINT.

I have no idea how far they're going to go, but the current iteration of Claude Code can generate average or better code, which is an improvement in many places.


  The hope that they'll run out of relevant material is slim.
If big corps are training their LLMs on their LLM-written code…

You're almost there:

> If big corps are training their LLMs on their LLM-written code <<and human-reviewed code>>…

The last part is important.


Until the humans are required to (or just plain want to) use LLMs to review the code.

It’s crazy how fast the tables turned, from SWEs being barely required to do anything to SWEs being required to do everything. I quite like the 2026 culture of SWE, but it’s so much more demanding and competitive than it was 5 or 10 years ago.


You could argue (it certainly has been argued) that the ability for technology to dissolve the usually more coherent identities that we take on daily by granting unlimited role play, trolling, and exploration is simply too much for a lot of people, and makes it hard to maintain a coherent sense of self. This is especially true of people who are “internet addicts” - not that the designation means a whole lot as I’m here at the gym talking to you on the phone.

Don’t get me wrong, I mostly agree with your comment. I think even more dastardly is the tendency for the internet to market new personalities to you, based on what’s profitable


There's also the inconvenient truth that a very specific part of the world was online in the 1990s.

Primarily more educated, more liberal, more wealthy.

Turns out, when you hook the rest of the planet online, you get mass persuasion campaigns, fake genocide "reporting", and enough of an increase in ambient noise that coherent anonymous discourse becomes impossible.

I mean, look at the comments on Fox News or political YouTube videos. That's the real average level of discussion.


It's still possible in smaller, constructive communities, not in large general-purpose social networks.


As an HN poster, I agree with this.


The 1990s internet was definitely not more liberal! 4chan-style forums were probably the rule. I can’t believe someone would say that; clearly you didn’t use the same internet that I did.


He didn't say the internet was more liberal, he said the people on it were.

Before you start forming your reply, think about the actual culture back then. If you take Slashdot as somewhat representative of 90s internet culture, it was basically anti-corporate, meritocratic, non-judgmental, irreligious, educated, and non-discriminatory, and once 2000 came around it tended to be highly critical of the Bush agenda.

4chan at that time, and places like it, represented more of an edgelord culture, where showing vulnerability or sensitivity was shunned, everything revered by the larger populace was ruthlessly mocked, and distrust of society and government in general was taken as natural. Calling them conservative would have been nonsensical.


Exactly. If I had to characterize the general internet (read: what would and wouldn't raise an eyebrow in an average forum) in terms of political alignment, it'd probably be:

   - anarchist 60s/70s
   - libertarian-meritocracy 80s/90s
   - capitalist-meritocracy-liberal 00s
   - polarized liberal-globalist vs conservative-reactionary 10s
   - polarized liberal-individualist vs conservative-statist 20s
That SA / 4chan (both of which were really post-90s) existed was in no way proof of an anti-liberal bent. Their very edgelordness was an implicit reveling in absolute freedom of expression (even if their later liberal-pro-censoring and alt-right splinter movements subsequently forgot that).


4chan was very much left-wing to liberal until Stormfront invaded them back. After Caturday came Soviet Sunday.


The response (usually) is “OK, but what about the $X billion we spent on the military?”

Which isn’t necessarily wrong, but it doesn’t answer why, or whether, we should be spending so much money on everything else.


I actually agree here too. America (and Americans) spend waaaay too much, and especially on niche things that profit very specific subgroups. We need to get back to the basics. Johnny can't read [1], or do math. That should be funded long before we worry about today's PhDs; those kids are the pipeline of future PhDs.


[1]: https://www.forbes.com/sites/ryancraig/2024/11/15/kids-cant-...


State and local governments tend to be responsible for public education funding; in the US, the federal government provides less than 10% of it.

US public education spending is also top 5 in the world, so I don't think a lack of money is why "Johnny can't read or do math"; something else is going on.


Same. I find that if I piecemeal explain the desired functionality and work as I would when pairing with another engineer, it’s totally possible to go from “make me a simple wheel with spokes” to “okay, now let’s add a better frame and brakes” with relatively little planning, other than what I’d already do when researching the codebase to implement a new feature.


It's quite interesting, because it makes me wonder how we make this efficient and predictable. Human language is just too verbose. There must be some DSL, some more refined way to get to the output we need. I don't know whether that means you actually just need to provide examples, or something else. But you know, code is very binary: do this, do that. LLMs are really just too verbose, even in this format, right now. That higher layer really needs a language. I mean, I get it: it's understanding human language and converting it to code. Very clever. But I think we can do better.


Source? Source? Got any source about me? Yeah well those statistics only deal with other people who aren’t me, so I guess you’re not really trusting the science :/


This is just plain wrong; I vehemently disagree. What happens if a payment fails on my API, and today that means I need to go through a 20-step process with this payment provider, my database, etc. to correct it? What’s worse is if this error happens 11,000 times and I run a script to do my 20-step process 11,000 times, but it turns out the error was raised in error. Additionally, because the error was so explicit about how to fix it, I didn’t talk to anyone. And of course, the suggested fix was out of date, because docs lag production software. Now I have 11,000 pissed-off customers because I was trying to be helpful.

