As others have mentioned, unless you have a strong conscience and really know what you're doing, it's far too tempting to just accept AI-generated suggestions without really thinking, and IMHO losing that understanding is a dangerous path to go down. AI can only increase quantity, not quality. The industry desperately needs far more of the latter.
I felt this myself when I tried to use Copilot, especially later in the day. I still use it for boilerplate code and for building visualizations that feel like boilerplate, but I turned it off for any really important code. At the moment I find the most value in using AI to discuss design decisions and to evaluate alternatives to my own approach. There it has had a hugely positive impact on my workflow.
It's a concern I have too. When I get tired, I start to just accept Copilot suggestions out of desperation; if I didn't have Copilot, I'd probably just log off for the day.
I actually don't really use Copilot, as I didn't find it that helpful, so I don't have the problem anymore, but I could see the danger. It's a bit like driving while tired.
I look at it as asking an intern to do some work that I don't have time for. Do I have to check their work? Yes. Might I have to correct and guide the outcome? Again, yes. Am I going to ask them to implement something novel and groundbreaking? Not really; that'd be a disaster unless they're a prodigy. None of that diminishes my own capacity, nor poses any kind of danger.
I find this absolutely nothing like delegating to an intern personally. Interns usually do their best because they will be held accountable if they don't.
I don't know how to articulate it, but whenever I hear financial analysts talking about how much work AI is going to do for us, I just get this spidey-sense that they're severely underestimating the social aspect of why anyone tries to achieve a good outcome.
They think they can just spend $100,000 on GPUs and get 10x the output of someone on a six-figure salary who is buying a house and raising kids.
At work I already had teammates who would grant an LGTM to code they had barely read, because it looked fine after reading the description and (hopefully) skimming the code.
I feel that this path of least resistance/effort will also apply to whatever an LLM spits out, since that is highly likely to look correct at the surface level.
I disagree. I don't want to go into the docs to understand the specific syntax or options of a library. Just let me write what I want in natural language and give me the result.
If I can't tell based on the code what it's supposed to do, then it's a shitty library or api.
Not to be rude but I have a hard time relating. Code is its own meaning, and to read it is to understand it. The only way you can use code you don't "understand" is to lack understanding of the language you are using.
In my almost 30 years in the industry I've run into plenty of people who claimed they knew what code did by reading it, but in practice every single one of them turned out to be only another human flummoxed by unexpected runtime behavior.
I never found my experience that simple, even at times when I was paying attention, which is always in short supply when you need it most.
Libraries are the biggest pain point. You don't know what a function is really doing unless you've used it yourself before. Docs aren't always helpful even when you read them.
You make lots of assumptions that may not be totally wrong, but aren't right either. In C++, using [] on a map to access an element is really dangerous if you haven't read the docs carefully and assume it does what you believe it should.
> using [] on a map to access an element is really dangerous if you haven't read the docs carefully and assume it does what you believe it should
It's not that bad: it just inserts a default-constructed element if the key isn't present. What else would you expect it to do, given that it has to return a reference of the appropriate type (so that you can write `map[key] = value;`)? Throw an exception? That's what .at() is for. I totally agree that the C++ standard library is full of weird, unintuitive behavior, and it's hard to know which methods throw exceptions vs. have side effects vs. result in undefined behavior without reading documentation, but map::operator[] is fairly tame.
Meanwhile, operator[] to access an element of a vector results in undefined behavior (e.g., a silent out-of-bounds read) if the index isn't appropriately bounds-checked.
Something akin to `std::optional` would have been great. These days, folks don't come to C++ from C and are not used to such under-the-hood tricks.
The committee decided that since the returned reference can't be null, the map should insert something whenever someone queries a missing key. Perhaps two wrongs can make a right! It's hard to make peace with that when you're used to a better alternative.
> These days, folks don't come to C++ from C and are not used to such under-the-hood tricks.
It's funny to see that, since I'm used to folks coming to C++ from C and bemoaning that C is a much more straightforward language which doesn't engage in such under-the-hood tricks.
> Something akin to `std::optional` would have been great.
So, map.find(), which returns an iterator, which you can use to check for "presence" by comparing against map.end()? (Or if you actually want to insert an element in your map, map.insert() or, as of C++17, map.try_emplace().)
Again, the standard library is weird and confusing unless you've spent years staring into the abyss, and there are many ways of doing subtly different things.
C++ does not have distinct operators for reading map[key] vs. writing map[key] = val. Languages like JS and Python do, which is why they can offer different behavior in those scenarios (on a missing key, JS returns undefined; Python raises a KeyError). But that's only really relevant in the context of single-dimensional maps.
Autovivification is rather rare today, but back in the early 90s when the STL first came about (prior to C++'s design-by-committee process, fwiw), what language was around for inspiration? Perl. If you don't have autovivification (or something like it), then map[i][j][k] = val becomes much more painful to write. (Maybe this is intended.) Workarounds in, say, Python are either to use a defaultdict (which has this exact same behavior of inserting elements upon referencing a key that isn't present!) or to use a tuple as your key (which doesn't allow you to readily extract slices of your dict).
I have seen, with my own two eyes, programmers push code they didn't read because it produced the expected output, whether from ChatGPT or Stack Overflow: copy, paste, run, tests passed.
Related: https://navendu.me/posts/ai-generated-spam-prs/