IMO this is a bad take. I use LLMs for things I don’t know how to do myself all the time. Now, I wouldn’t use one to write new crypto functions, because the risk of getting that wrong is huge, but if I need to write something like a wrapper around some cloud provider SDK that I’m unfamiliar with, an LLM gets me 90% of the way there. It’s also way more likely to know at least _some_ of the best practices, whereas I’ll likely know none. Even for more complex things, getting a few working hello-world examples from an LLM gives me way more threads to pull on and research than web searching ever has.
>It also is way more likely to know at least _some_ of the best practices
What's way more likely to know the best practices is the documentation. A few months ago a post made the rounds about how the Arc browser introduced a really severe security flaw by misconfiguring their Firebase ACLs, even though the correct configuration is spelled out in the docs.
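For context, that failure mode is the classic one: access rules that trust a client-supplied field instead of the authenticated user. As a minimal sketch (the collection and field names here are hypothetical, not Arc's actual schema), the difference in Firestore security rules looks roughly like this:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /boosts/{boostId} {
      // Misconfigured: any signed-in user may write any document,
      // including one that claims someone else's creatorID.
      //   allow write: if request.auth != null;

      // What the docs describe: tie the write to the caller's identity.
      allow write: if request.auth != null
                   && request.resource.data.creatorID == request.auth.uid;
    }
  }
}
```

One line of difference, and it's exactly the kind of line a model will happily hand you without the caveat the docs attach to it.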
This, to me, is the sort of thing that comes out of LLM programming (although maybe not in this particular case). 90% isn't good enough; it's the same as Stack Overflow pasting. If you're a serious engineer and you're unsure about something, it's your task to go to the reference material, or at some point you're introducing bugs like this.
In our profession it's not just crypto libraries: one misconfigured line in a YAML file can cause millions of dollars of damage or leak people's most private information. That can't be tackled with a black-box chatbot that may or may not be accurate.
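To make "one misconfigured line" concrete, here's a hedged Kubernetes sketch (service and app names invented): on most cloud providers, the only thing standing between an internal database and one exposed to the internet is the `type` field.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: customer-db          # hypothetical internal database service
spec:
  selector:
    app: customer-db
  ports:
    - port: 5432
  # ClusterIP keeps the service reachable only inside the cluster.
  # Changing this single line to LoadBalancer provisions a public
  # load balancer on most cloud providers and puts the DB online.
  type: ClusterIP
```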
Writing a wrapper is easier to verify because the API or SDK you're wrapping gives you ground truth. Seems wrong? Check the docs. Doesn't work? Curl it yourself.
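As an illustration (the endpoint, auth scheme, and response shape here are all invented), a thin wrapper is exactly the kind of code you can cross-check line by line against the raw API:

```typescript
// Hypothetical sketch of a thin SDK-style wrapper. Every assumption it
// makes (URL, header, JSON shape) can be verified directly with curl.
async function listBuckets(apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.example-cloud.com/v1/buckets", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`listBuckets failed: ${res.status}`);
  const body = (await res.json()) as { buckets: { name: string }[] };
  return body.buckets.map((b) => b.name);
}

// The "curl it yourself" check for the same call:
//   curl -H "Authorization: Bearer $KEY" https://api.example-cloud.com/v1/buckets
```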
> write something like a wrapper around some cloud provider SDK that I’m unfamiliar with
You're equating "unfamiliar" with "don't know how to do," but I'd claim you do know how to do it; you'd just be slow, because you have to reference the documentation and learn which functions do what.