Hacker News

> so I kept giving the model the output, and it provided updated formulas back and forth for 4.5hrs

I read this as: "I have already ceded my expertise to an LLM, so I am happy that it is getting faster because now I can pay more money to be even more stuck using an LLM"

Maybe the alternative to going back and forth with an AI for 4.5 hours is working smarter and using tools you're an expert in. Or building expertise in the tool you are using. Or, if you're not an expert or can't become an expert in these tools, then it's hard to claim your time is worth $100/hr for this task.



I agree going back and forth with an AI for 4.5 hours is usually a sign something has gone wrong somewhere, but this is incredibly narrow thinking. Being an open-ended problem solver is the most valuable skill you can have. AI is a huge force multiplier for this. Instead of needing to tap a bunch of experts to help with all the sub-problems you encounter along the way, you can just do it yourself with AI assistance.

That is to say, past a certain salary band people are rarely paid for being hyper-proficient with tools. They are paid to resolve ambiguity and identify the correct problems to solve. If the correct problem needs a tool that I'm unfamiliar with, using AI to just get it done is in many cases preferable to locating an expert, getting their time, etc.


If somebody claims that something can be done with an LLM in 10 minutes which takes them 4.5 hours, then they are definitely not experts. They probably have some surface knowledge, but that’s all. There is a reason why the better LLM demos are about learning something new, like a new programming language. So far, all the other kinds of demos I’ve seen (e.g. generating new endpoints based on older ones) were clearly slower than experts, and they were slower for me to use in my respective field.


No true Scotsman


There was no counterexample, and I didn’t use any definition, so it cannot be that. I have no idea what you mean.


> If somebody claims that something can be done with LLM in 10 minutes which takes 4.5 hours for them, then they are definitely not experts.

Looks like a no true Scotsman definition to me.

I don't fully agree or disagree with your point, but perhaps it was made more strongly than it should have been?


For no true Scotsman, you need to throw out a counterexample by using a misrepresented or wrong definition, or simply by applying a definition wrongly. In any case, I would need a counterexample for that specific fallacy. I didn’t have one, and I still don’t.

I understand that some people may think themselves experts and could achieve a similar reduction (outside the cases where I said it’s clearly possible), but then show me, because I still haven’t seen a single one. The ones which were publicly shown were not quicker than average seniors, and definitely worse than the better ones. Even at larger scale in my company, we haven’t seen any improvement in any single coding metric since we introduced it more than half a year ago.


Here's your counterexample: “Copilot has dramatically accelerated my coding. It’s hard to imagine going back to ‘manual coding,’” Karpathy said. “Still learning to use it, but it already writes ~80% of my code, ~80% accuracy. I don’t even really code, I prompt & edit.” -- https://siliconangle.com/2023/05/26/as-generative-ai-acceler...


It's not a counterexample. There is zero concrete information in it. It's just a statement from somebody who profits from such statements. Even me simply saying it's not true would have more value, because I would actually benefit from what Karpathy said, if it were true.

So, just to be specific, and specifically for ChatGPT (I think it was 4), these are very problematic, because all of these are clear lies:

https://chatgpt.com/share/675f6308-aa8c-800b-9d83-83f14b64cb...

https://chatgpt.com/share/675f63c7-cbc4-800b-853c-91f2d4a7d7...

https://chatgpt.com/share/675f65de-6a48-800b-a2c4-02f768aee7...

Or this one, which somebody sent here: https://www.loom.com/share/20d967be827141578c64074735eb84a8

In this case, the guy is clearly slower than simple copy-paste and modification.

I have had very similar experiences. Sometimes it just used a different method which does almost the same thing, just worse. I even had to check what the method it used actually was, because it isn't normally used, for obvious reasons: it was an "internal" one (like apt vs. apt-get).


I learn stuff when using these tools just like I learn stuff when reading manuals and StackOverflow. It’s basically a more convenient manual.


A more convenient manual that frequently spouts falsehoods, sure.

My favorite part is when it includes parameters in its output that are not and have never been a part of the API I'm trying to get it to build against.


> My favorite part is when it includes parameters in its output that are not and have never been a part of the API I'm trying to get it to build against.

The thing is, when it hallucinates API functions and parameters, they aren't random garbage. Usually, those functions and parameters should have been there.

Things that should make you go "Hmm."


More than that, one of the standard practices in development is writing code with imaginary APIs that are convenient at the point of use, and then reconciling the ideal with the real - which often does involve adding the imaginary missing functions or parameters to the real API.
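A minimal sketch of that practice, in Python (all names here are hypothetical, invented for illustration): first write the call site against the convenient API you wish existed, then reconcile the ideal with the real by actually implementing the missing piece.

```python
import json

# Step 1: write the call site against the "imaginary" API that is most
# convenient here -- a parse_config() that accepts a `defaults` parameter,
# which does not exist yet at the time this is written.
def load_settings(path):
    return parse_config(path, defaults={"timeout": 30})

# Step 2: reconcile the ideal with the real by adding the imagined
# function (and its `defaults` parameter) to the actual codebase.
def parse_config(path, defaults=None):
    settings = dict(defaults or {})
    with open(path) as f:
        settings.update(json.load(f))
    return settings
```

The point is that the hallucinated parameter plays the same role the imaginary `defaults` does above: it marks where the API arguably should have been more convenient.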


> Usually, those functions and parameters should have been there.

There is a huge leap here. What is your argument for it?


Professional judgement.





