When you've lived with AI boosting your productivity for a year or more, it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
> it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
In my youth, I would have argued this was bad. Now, I tend to agree. Not that studies are worthless, but they are just part of the accumulation of evidence; when they contradict a clear result you are seeing directly, you need to weight the evidence appropriately.
(Obviously, replicated studies showing clear effects should be more heavily weighted.)
When you've lived with -AI- stimulants boosting your productivity for a year or more, it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
We don't demand every developer pop Adderall, though.
> When you've lived with AI boosting your productivity for a year or more, it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
Me: The person above literally pitches an unsupported belief against a study.
You: it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
Really? Really?!!
As for "boosting your productivity", it's also what I'm talking about in the article I linked:
--- start quote ---
For every description of how LLMs work or don't work, we know only some, but not all, of the following:
- Do we know which projects people work on? No
- Do we know which codebases (greenfield, mature, proprietary etc.) people work on? No
- Do we know the level of expertise the people have? No. Is the expertise in the same domain, codebase, language that they apply LLMs to? We don't know.
- How much additional work did they have reviewing, fixing, deploying, finishing etc.? We don't know.
Even if you have one person describing all of the above, you will not be able to compare their experience to anyone else's because you have no idea what others answer for any of those bullet points.
--- end quote ---
So what happens when we actually control and measure those variables?
Wait, don't answer: "no, it's easier to believe yourself over a study".
See? Skeptics don't even have to "jump on anything that supports their skepticism." Even you supply them with material.
What I'm willing to assert as fact, based not just on my own experiences (though they play a major role) but on observing this space for several years and talking to literally hundreds of people, is that LLMs can provide you a very real productivity boost in coding if you take the time to learn how to use them - or if you get lucky and chance upon the most productive patterns.
You remind me of the Haskell hype of ~2016-2018, when the community wrote tons of blog posts and passionate comments on HN about the theory of types and the "productivity boost" of their language while simultaneously producing a paltry output of actual useful software in the hands of actual users.
I'm sure they were completely genuine in how they felt, just as I am sure you are too.