Hacker News: qudat's comments

There is zero reason not to include a picker like helix does. I’m gonna guess 90% of everyone running neovim has a picker

I believe we are thinking about different time horizons, and your language and comparison to <modern editor> reveal a lot that's left unsaid in your reasoning.

I don't think comparison to other editors is a good basis for deciding what should be pulled in. The vi ecosystem was and remains weird to outsiders, but in a way that is internally consistent with the usage patterns of its own users over decades.

Also, the percentage of users using feature X is a bad selection criterion for pulling in a plugin-provided feature, unless that number is 100% and there is little deviation in how it's configured. There is very little friction in pulling in a plugin as a user.

So what are some good criteria for absorbing plugin functionality?

- extensions that provide an API for entire ecosystems (plenary, lazy.nvim)

- plugins that add missing features or concepts that are too useful and timeless to ignore

- plugins that would benefit themselves or neovim by moving to native code

Honestly, the bar for absorbing plugins should be pretty high. There should be a real benefit that outweighs the added maintenance and coupling.

The cost of installing plugins is pretty low, and they are great at keeping the core software simple.



Does ex not count?

Their primary goal in the last year was to move to Azure. Any massive infra migration is going to cause issues.

> Any massive infra migration is going to cause issues.

What? No, no it's not. Entire disciplines, infrastructure and systems engineering, are dedicated to doing these sorts of things. There are well-worn paths to making stable changes. I've done a dozen massive infrastructure migrations, some at companies bigger than GitHub, and I've never once come close to this sort of instability.

This is a botched infrastructure migration, onto a frankly inferior platform, not something that just happens to everyone.


Because we are constantly writing variables that are lowercase. Coming up with a name that is both short and immediately understandable is what we live for. Variables are our shrine; we stare at them every day and are used to their beauty and simplicity.

Waiting for the zig port

If you want VC money you need to put an AI spin on it.

Product is free, open source and largely written and maintained by one person. Doesn't seem like a startup payday scheme.

Interesting you mention tmux because it itself resembles a terminal emulator. It has its own terminal feature matrix that controls what your parent emulator can render. It sounds like you aren’t using tabs and splits in tmux but it does include them.

It sounds like you could get away with using a tool like https://zmx.sh which only handles session persistence (attach/detach). It also uses libghostty but only for state restoration on reattach.


A poor man's version of session persistence can even be managed with tmux plugins like tmux-resurrect and tmux-continuum.
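For reference, that setup is just a few lines of tmux configuration. This is a sketch assuming the TPM plugin manager is installed at its default path (`~/.tmux/plugins/tpm`); the plugin names are the real tmux-plugins repositories, and the save/restore keybindings noted are their defaults:

```shell
# ~/.tmux.conf -- session-persistence sketch via TPM-managed plugins
set -g @plugin 'tmux-plugins/tmux-resurrect'   # manual save/restore (prefix + Ctrl-s / Ctrl-r)
set -g @plugin 'tmux-plugins/tmux-continuum'   # autosaves sessions on an interval
set -g @continuum-restore 'on'                 # restore the last saved state when tmux starts

run '~/.tmux/plugins/tpm/tpm'                  # initialize TPM (keep this line last)
```

After reloading the config, `prefix + I` installs the plugins; from then on, panes and window layouts survive a full server restart.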

I built https://prose.sh as part of my journey into Gemini and back out. Ya, it's just a simple blog, but you can manage it completely over ssh, and it's compatible with hugo when people want to eject.

We also recently released support for plain-text-lists, which is a gemini-inspired spec that uses lists as its foundational structure.

https://pico.sh/plain-text-lists

example: https://blog.pico.sh/ann-034-plain-text-lists
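The ssh-only workflow is roughly this. A minimal sketch, where the filename and contents are made up for illustration, and it assumes your SSH public key is registered with pico.sh (the `scp` target follows the pattern in their docs):

```shell
# Write a post as a plain markdown file with hugo-style front matter,
# so a hugo site can import it later if you want to eject.
cat > first-post.md <<'EOF'
---
title: Hello, prose.sh
---

Posts are plain markdown files managed entirely over ssh.
EOF

# Publishing is a single copy over ssh -- no web UI involved:
# scp first-post.md prose.sh:/
```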


> The velocity is up AND the quality is up.

This is not my experience on a team of experienced SWEs working on a product worth $100M/year.

Agents are a great search engine for a codebase and really nice for debugging, but anytime we have them write feature code they make too many mistakes. We end up spending more time tuning the process than it would take to just write the code, AND you are trading human context for agent context that gets wiped.


I can't speak to your experience. I can only speak to mine.

We've spent years reducing old debt and modernizing our application and processes. The places where we've made that investment are where we are currently seeing the additional acceleration. The places where we haven't are still stuck in the mud, but per your "search engine for a codebase" comment our engineers are starting to engage with systems they would not have previously touched.

There are areas for sure where LLMs would fall down. That's where we need the experts to guide them and restructure the project so that it is LLM friendly (which also just happens to be the same things that make the app better for human engineers).

And I'm serious about the quality comment. Maybe there's a difference in how your team is using the tools, but I have individuals on my team who are learning to leverage the tools to create better outputs, not just pump out features faster.

I'm not saying LLMs solve everything, FAR from it. But it's giving a master weapon to an experienced warrior.


Your experience matches mine too. Experienced devs are increasing their output while maintaining quality. I'm personally writing better-quality code than before because it's trivial to tell AI to refactor or rename something. I care about good code, but I'm also lazy, so I have my Claude skills set up to have AI do it for me. (Of course, I always keep the human in the loop and review the outputs.)

You said that you're restructuring the project to be LLM friendly, which also makes the app better for humans. I 100% agree with this. Code that is unreadable and unmaintainable for humans is much more difficult for AI to understand. I think companies that practiced or prioritized code hygiene will be ahead of the game when it comes to getting good results with agentic AI.


I also agree. In fact, I was hitting a limit on my ability to ship a really difficult feature, and after I got good at using Claude, I was finally able to get it done. The last mile was really hard, but I had documented things very well, so the LLM was able to fly through the bugs and write tests that I dare say are too difficult for humans to design, since they require keeping a large amount of context in your head (distributed computing is really hard), which is where I was hitting my limit. I now think I can only do the easy stuff by hand; anything serious requires me to at least have an LLM verify, but mostly I just let it do things while I explain the high-level vision and the sorts of tests I expect it to have.


Whenever actual studies are done on LLM coding, they always show that it is a net loss in quality and delivery speed.

(They are good as coder psychotherapy tho.)


One of the "actual studies" was pretty much grabbing developers off the street and telling them to "use AI to make things go fast".

And to no-one's surprise there was no increase in velocity.

I'd really like to see a study using people of similar skill level, where one group doesn't use AI and never has, and the other has been working with AI tooling for a year or more.


> you just need AI certification and over five years experience in AI coding stacks to make use of this

The trillions of dollars of capitalization weren't based on a promise of "a new IDE that lets you write code slightly faster once you get over the learning curve".


That's just marketing.

AI is a force multiplier, or even an exponent if used correctly.

For regular folk with a skill of 1.01 it gives them the slight bump in speed and knowledge they need to get their ideas into some kind of MVP form.

But if you know what you're doing and work with it, you can get crazy amounts of stuff done - mostly the things you wouldn't have bothered to do at all because of the time investment required.

I've got so many tiny apps running that do specifically what I need, before I would have had to use someone else's "product" for it with a massive amount of useless (to me) features when I just need the one.

Just a few weeks ago I simplified a whole massive open source webapp into a single dockerized ~300-line Python script that does the specific bit I want, and it's been working flawlessly ever since. I never need to worry about enshittification, pivoting, or the original creator adding AI-slopped features (and security holes).


Well, things are changing so fast those studies are going to be out of date. And I have no doubt some people are experiencing a net loss while others are not. We need to pry apart why some people are having success with it and others aren't, and build on top of what's working.


Sounds a lot like "software engineering".

Wasn't AI supposed to free us from all that and let us fire all those useless coders?


It’s easier to write code than read it.


I'd argue the read and write procedures happen simultaneously as one goes along, writing code by hand.


It's important to enforce the rules that make the code easier to read.


That’s going to give you all a ton of job security in a year when we realize that prompt first yields terrible results for maintainability.


Or they fire the existing staff who prompted this mess and bring in McKinsey to glue the mess together.


