A bit off-topic, but I wonder how much Copilot/GPT cuts into the market share of editors like vim and emacs. I used emacs for the past few years, but I recently switched (back) to vscode because the Copilot integration is really good. There are a lot of GPT integration packages for other editors, but they don't come close to the deeper integration that vscode has. And of course, vscode will probably get priority treatment from Microsoft.
At this point, this is (somewhat unfortunately, I really like vim and emacs) a big selling point for vscode.
Regular autocompletion also feels unpredictable rather than magical to me.
This is because I need a tighter visual feedback loop to confirm that the IDE has chosen the right identifiers and inserted extra parentheses and such, so I can't get a few keystrokes ahead of what I see. It's nice for debugging, where I only need to replace a few things in a few lines, and I suppose it might be nice if I were trying a new language and wasn't typing very fast anyway. It's not something I use regularly for development.
> A bit off-topic, but I wonder how much Copilot/GPT cuts into the market share of editors like vim and emacs.
So you think that the two editors that have been around for nearly 100 years combined, are extremely customizable, and have survived every coding fad, language, and technology trend are in danger from AI features?
Short answer: no.
Both Emacs and Neovim have first-class support for Copilot and GPT plugins.
I've been using a text editor and a repl forever now and don't see myself adding to the workflow. I don't optimize things that are so far off the critical path of my productivity.
I use Copilot, but it's not a game changer in my opinion. I can live without it. I often turn it off when I'm writing SQL or the like, where it hinders more than it helps. It's honestly best when I'm writing JavaScript, because I'm not very good with the language. But when I'm writing a language like Python, which I've been using for a long time, it tends to get in the way.
Copilot works wonderfully inside Neovim; I use it daily. You are right that some of the more advanced integrations (chat, direct analysis/explanation) aren't ported, though. I use shellgpt in tmux for a similar (not as good, but close) experience.
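For anyone curious about the shellgpt-in-tmux setup, a minimal sketch might look like the following (assuming the shell_gpt package is installed and an OpenAI API key is configured; the keybinding choice is just an example):

```shell
# ~/.tmux.conf — open a shell_gpt REPL in a side pane with prefix + g.
# "temp" is shell_gpt's throwaway chat session name.
bind-key g split-window -h "sgpt --repl temp"
```

This keeps the chat-style workflow one keystroke away from the editor pane, which is roughly what the VSCode chat sidebar gives you.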
I gave it a try the other day using copilot.lua, and it made the whole editor unusably slow. It somehow caused syntax highlighting to "lag" for tens of seconds every time I made any sort of edit.
Maybe I need to be using the official plugin, and maybe I need to disable LSP based syntax highlighting, but that makes me wonder what it's doing.
What languages and LSPs are you using? I've never experienced this with Python (pyright), Go (gopls) or Rust (rust-analyzer). I'm using tree-sitter for syntax highlighting. The only special thing is that I'm using lazy.nvim (the plugin manager) to load copilot.lua on the InsertEnter event.
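For reference, the lazy-loading setup mentioned above is roughly this plugin spec (a sketch, assuming the zbirenbaum/copilot.lua plugin and lazy.nvim; the `suggestion` options shown are illustrative defaults):

```lua
-- lazy.nvim plugin spec: defer loading copilot.lua until insert mode
-- is first entered, so it can't affect startup or normal-mode editing.
{
  "zbirenbaum/copilot.lua",
  event = "InsertEnter",  -- lazy-load trigger
  config = function()
    require("copilot").setup({
      suggestion = { auto_trigger = true },
    })
  end,
}
```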
I am using typescript with typescript-language-server. I'll give the lazy stuff a try, but I am not sure why this would make the editor more responsive outside of the initial load.
Does it really work wonderfully for you? I'm often shocked by how it just can't guess the most obvious completions. Then I jump into VSCode to sanity-check, and it works fine there.
Copilot really does work wonderfully for me, or as well as can be expected of any AI code completion. To be noted, I run Copilot and coc.nvim simultaneously: when I just want to autocomplete a word, it's coc.nvim, but when I want a whole line/method/etc. built, I let Copilot be my copilot.
It's not perfect, but it's still generally valuable. I do still bounce between VSCode and Neovim (depending on how often my current task needs a stable debugger versus Neovim's better code search), so I see how the VSCode implementation is better; but I still prefer to live in Neovim as much as possible.
It's the only thing that tempts me away from Vim. Vim does have an official Copilot plugin, but it seems limited compared to the VSCode integration. I wonder if that's intentional.
I don't know what kind of GPT you'd want in your editor. A colleague of mine hooked a LLaMA model up to his Vim, but that doesn't seem too useful to me.
But it has to use your code for inference, doesn't it? So it is sending it to someone.
I see this being a blocker at a lot of places. I know in my workplace the infosec people are apoplectic about it and we supposedly have a 'partnership' with OpenAI.