Yeah, I also don't want my editor to have syntax highlighting. I refuse to use any editor that has it. Don't even get me started on native vcs support. My rule of thumb is: "If it auto-completes then it gets the auto-yeet."
>> That seems like a lot of hoops to jump through considering that rust allows arbitrary code execution during compile time anyway.
If you mean build.rs build scripts: yes, those do run, but it's not unauditable code. You can view and inspect them before building. If you need more security, you can download all the dependencies and build inside an isolated container.
You can't post like this here, so I've banned the account.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
It forces on you whatever the foundation wants. It takes away control from you; that's unethical.
Not only that, it opens a can of worms. A compiler should be a compiler.
Not a self-updating application, nor a dependency fetcher for GitHub. How can I trust it when it does all these things?
Call me old fashioned at 35.
If you want the latest, then go download the latest. Is that too hard for the user now?
Just because the latest is out doesn't mean it's any better than the previous version. What happens in a CrowdStrike scenario? What happens when Go gets retired in 50 years?
I don't want to work with the latest. Should I? Tcl 9 is getting there, but Tcl 8.7 is still perfectly serviceable. Should I be using 9 just because it exists? My work only has 8.6 in production.
So your toolchain updates and they've removed a thing. Now you have to hunt down the previous version, never mind figuring out why it worked yesterday and not today. Unnecessary overhead.
And what if you use a dependency that hasn't been updated for the new version?
What stops someone from crafting a malicious binary? Malware hijacking the download path?
Auto-updating takes away your ability to verify integrity. You're placing blind trust that everything is what it claims to be.
How can I be sure that the updated compiler is the real compiler and not a maliciously crafted version? If you can't trust the compiler, how can you trust your code?
Yes, I could turn it off, but why couldn't I turn it on instead?
I shouldn't need to turn it off; I'll update when I want to update, tyvm.
I’ve seen copilot spit out garbage dozens of lines long for something I swore must be one or two stdlib functions. Yep, it was, after reading some real documentation and using my brain. It was some NodeJS stuff, which I never work with. Don’t get me wrong, I still find it a helpful tool, but it is not at all a good, seasoned programmer— it is an algorithm predicting the next token based on the current tokens.
> I still find it a helpful tool, but it is not at all a good, seasoned programmer
How quickly the goalposts move! Two years ago it was "AI will never be able to write code", now we're complaining that "AI is not a seasoned programmer". What are we going to be complaining about, two years from now?
I’m not really complaining, I’m saying it’s useful, but don’t pretend it’s better than what it is. It’s cruise control on a car— makes long drives better, but doesn’t literally drive for you.
Reminds me of The Primeagen quote:
“If copilot made you 10x better, then you were only a 0.1x programmer to begin with”.
As someone who uses ChatGPT and Claude daily, but cancelled my Copilot subscription after a year of use because it ultimately just wasn't that helpful to me and didn't provide enough benefit over doing it by hand, I kind of sort of agree. Maybe not entirely, but I can't shake the feeling that there might be some truth in it.
The code that AI generates for me is rarely good. It's possible to get good code out of it, but that requires many iterations of careful review and prompting, and in most cases I can write it quicker by hand. Where it really shines for me in programming, and what I still use ChatGPT and Claude for, is rubber-ducking and as an alternative to documentation (e.g. "how do I do x in CSS").
Besides the code quality being mediocre at best and outright rubbish at worst, it’s too much of a “yes man”, it’s lazy (choose between A and B: why not a hybrid approach? That’s… not what I asked for), and it doesn’t know how to say “I don’t know”.
I also feel it makes you, the human programmer, lazy. We need to exercise our brains, not delegate too much to a dumb computer.
> I also feel it makes you, the human programmer, lazy. We need to exercise our brains, not delegate too much to a dumb computer.
I kinda feel like this isn't talked about enough. My main concern right from the beginning was that new programmers would rely on it too much and never improve their own abilities.
I am a competent coder. I have been a coder since I was in middle school. I know at least 10 languages, and I could write my own from scratch.
I know C++, Dart, Go, Java, HTML, CSS, JavaScript, TypeScript, Lua, React, Vue, Angular, AngularJS, C#, Swift, and SQL in various dialects including MySQL and Postgres, and I have worked professionally with all of them. I love to challenge myself. In fact, if I've done something before, I find it boring.
So copilot helps me because I always find something new to do, something I don't understand, something I'm not good at.
So yes, I'm confident I'm competent. But I always do things I'm not good at for fun. So it helps me become well rounded.
So your assertion that it only helps me because I'm incompetent is both true and false. I'm competent; I just like to do new stuff.
That's all very nice but it contains a fatal logical flaw: it assumes CoPilot actually gives you good code. :D
I mean it does, sometimes, but usually it's either boilerplate or something you don't care about. Boilerplate is mostly handled very well by most well-known IDEs. And neither they nor CoPilot offer good algorithmic code... OK, I'll grant you the "most of the time, not never" thing.
This statement would have been a huge red flag for me if I had interviewed you. Don't get me wrong, you could probably use and program in 10 languages. Maybe you can even be proficient in many of them at different times of your life. But know them all at once? No.
That’s a different discussion. I’m disputing the claim that AI can make you a 2x dev when it seems like it’s mostly beneficial when you don’t know what you’re doing.
I use multiple daily and have definitely seen a productivity boost. If nothing else, it saves typing. But I'd argue they are in essence a better search engine - it answers "you don't know what you don't know" questions very well, providing a jumping off point when my conception of how to achieve something with code or tooling is vague.
Typing is, or at least it should be, the least of your time spent during the day doing programming. I don't find optimizing the 5-10% of my workday spent typing impressive, or even worth mentioning.
Granted there are languages where typing takes much more time, like Java and C# but... eh. They are quite overdue for finding better syntax anyway! :)
I didn't mean typing in the sense of autocomplete, I meant typing in the sense of stubbing out an entire class or series of test cases. It gives me scaffolding to work with which I can take and run with.
I think for more boilerplate-esque code-monkey type code it can be a boon.
I think the unfortunate reality is that this makes up a shockingly large amount of software engineering. Take this object and put it into this other object, map this data to that data, create these records and move them into this object when then goes into that other object.
You shouldn't be. For codebases where context is mostly local, they destroy human throughput by comparison. They only fall down hard when used in spaghetti dumpster-fire codebases where you have to paste the contents of 6+ files into context, or where the code crosses multiple service boundaries to do anything.
A competent engineer architects their systems to make their tools as effective as possible, so maybe your idea of competent is "first order" and you need a higher-order conception of a good software engineer.
Could you provide some examples, code or repos and questions, where it does a good job for you? (The question can also just be a completion.) Obviously you're having really good experiences with these tools that others aren't having. I'd definitely appreciate that over a lot of assurances that I'm doing it wrong with my spaghetti code.
Take a SQL schema, and ask AI to generate crud endpoints for the schema, then sit down and code it by yourself. Then try generating client side actions and state management for those endpoints. Time yourself, and compare how long it takes you. Even if you're fast and you cut and paste from template work and quickly hand edit, the AI will be done and on a smoke break before you're even a quarter of the way through.
Ask the AI to generate correctly typed seed data for your database, using realistic values. Again, the AI will be done long before you.
Try porting a library of helper functions from one language to another. This is another task where AI will win handily.
Also, ask AI to write unit tests with mocks for your existing code. It's not amazing at integration tests but with mocks in play it slays.
Most of the things you list can be done deterministically, without the risk of AI errors. The first one in particular is just scaffolding that Visual Studio has had for Entity Framework and ASP.NET MVC for a decade now. And if you were using, e.g., LISP, you'd just write a DEFCRUD macro for it once, and re-use it for every similar project.
Thank you. That was a lot more informative than the argument you were having. And it's now obvious to me the value you're getting and a few areas that would work for me that I'll try. I'm not slapping out crud apps day to day, but can see how I could accelerate myself.
Your experience mirrors mine. I use ChatGPT or Meta's AI for boilerplate code like this. I write Go at my day job, and there's a lot of boilerplate in that language; it saves a lot of time, but most importantly, it does the tedious, boring things I hate doing.
Why does no one use snippets or create scaffolds for projects? My main requirement for a tool is that it be deterministic, so I don't spend time monitoring it, since failure is clearly distinct from success.
Those things are a tiny part of the work, though, and are all about generating boilerplate code. Tons of boilerplate isn't the hallmark of a great codebase, and I don't think many programmers spend more than 10% of their time writing boilerplate, unless they work at a very dysfunctional org.
It's true that it's faster than humans at some tasks, but the remaining tasks take most of the time: you can't gain more than an ~11% speedup by speeding up 10% of the work, no matter how much you speed it up.
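The arithmetic behind that 11% figure is just Amdahl's law; a quick sketch in Go (the 10% fraction is the assumption from the comment above, not a measured number):

```go
package main

import "fmt"

// speedup applies Amdahl's law: overall speedup when a fraction p of
// the work is accelerated by a factor s, the rest left unchanged.
func speedup(p, s float64) float64 {
	return 1 / ((1 - p) + p/s)
}

func main() {
	// Even if the 10% of boilerplate work were done essentially
	// instantly (s huge), the whole job gets at most ~11% faster:
	// 1 / 0.9 = 1.111...
	fmt.Printf("%.3f\n", speedup(0.10, 1e9)) // 1.111
}
```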
What's your point? The other person is speeding themselves up, and it works for them. What's the appropriate bar for speedups? What would be enough to satisfy you? What problems do you have that AI isn't speeding up, and that you still feel aren't worth spending your brain on? Maybe list them and see if the other person has thoughts on how to go about it.
Things don't move forward by saying it can't be done or by belittling others' accomplishments.
> They only fall down hard when used in spaghetti dumpster fire codebases where you have to paste the contents of 6+ files into context or the code crosses a multiple service boundaries to do anything.
So humans do better than them in at least 80% of all code everywhere, if not 95% even? Cool, good to know.
Care to provide some examples to back your otherwise extraordinary claim btw?
I don't "hate" AI because (1) AI does not exist so how can I hate something that doesn't exist? And (2) I don't "hate" a machine, I cringe at people who make grand claims with zero proof. Yes. Zero. Not small, not infinitesimal -- zero.
I just can't make peace with the fact that I inhabit the same planet as people who can't make elementary distinctions.
You failed to mention the word "can", which is a theoretical.
We can all Google stuff, because the internet is big and supports anyone's views, which means it's more important than ever to be informed and able to infer well. That's something you seem to want to defer to sources that support your beliefs. Not nice to find that on a hacker forum, but statistical outliers exist.
Live long and prosper. And be insulted, I suppose.
In case you haven't realized yet, a hash table that maintains insertion order can still do O(1) deletes, as long as the key order doesn't change arbitrarily after the initial insertion.
An array can be used to efficiently simulate a linked list and other data structures, however. (Or an intrusive linked list may be embedded into the bucket structure, as in PHP, but this is less efficient with an open-addressing scheme, which is nowadays better for cache locality.)
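A minimal Go sketch of that array-plus-index idea (type and method names are my own; a real implementation, like PHP's, compacts the array when tombstones pile up):

```go
package main

import "fmt"

// entry is one slot in the insertion-ordered backing array.
type entry struct {
	key     string
	val     int
	deleted bool // tombstone: slot is skipped on iteration
}

// OrderedMap pairs a map (O(1) key -> slot lookup) with a slice of
// entries (preserves insertion order). Delete is O(1): mark the slot
// as a tombstone instead of shifting the slice.
type OrderedMap struct {
	slots []entry
	index map[string]int // key -> position in slots
}

func NewOrderedMap() *OrderedMap {
	return &OrderedMap{index: make(map[string]int)}
}

func (m *OrderedMap) Set(k string, v int) {
	if i, ok := m.index[k]; ok {
		m.slots[i].val = v // existing key keeps its position
		return
	}
	m.index[k] = len(m.slots)
	m.slots = append(m.slots, entry{key: k, val: v})
}

func (m *OrderedMap) Get(k string) (int, bool) {
	i, ok := m.index[k]
	if !ok {
		return 0, false
	}
	return m.slots[i].val, true
}

func (m *OrderedMap) Delete(k string) {
	if i, ok := m.index[k]; ok {
		m.slots[i].deleted = true // O(1) tombstone, no shifting
		delete(m.index, k)
	}
}

// Keys iterates the array, skipping tombstones, so order is preserved.
func (m *OrderedMap) Keys() []string {
	var ks []string
	for _, e := range m.slots {
		if !e.deleted {
			ks = append(ks, e.key)
		}
	}
	return ks
}

func main() {
	m := NewOrderedMap()
	m.Set("a", 1)
	m.Set("b", 2)
	m.Set("c", 3)
	m.Delete("b")
	fmt.Println(m.Keys()) // [a c]
}
```

Iteration degrades as tombstones accumulate, which is exactly the compaction trade-off discussed below.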
That depends on what someone is willing to compromise. Extra space to point back at exactly that key (but that also needs to be updated on each compaction?). Personally, I'd normally rather pay the lookup fee, or the key-sort-on-iterator-snapshot fee. An 'insertion, or re-sorted order' side index which allows nodes to be marked as 'deleted' (maybe nil/null, maybe a sentinel value meaning skip?) is something I might propose, to see if it fits the requirements well enough.
Go grew too big (in popularity) to be small. Popular languages tend to grow as the userbase grows. This is natural — the language gets used across a wider range of industries, and the programs being written in the language become larger, more mature and more diverse.
The language maintainers need to balance a small specification or a small implementation versus the users' demand, but in most cases at least some user demands win routinely. This is especially true when the language has just one major implementation. There is a significant cost for making that implementation more complex, but there is a more significant cost for making millions of programs that rely on a single implementation more complex just to make that implementation simpler.