This only makes sense if you have an all-or-nothing view of the value of AI output.

Every prompt and answer contributes value toward the final solution, even if that value is just narrowing the latent space of potential outputs: failed paths tracked in the context window can be avoided in a future answer once you provide follow-up feedback.

The vast majority of slot machine pulls produce no value for the player. Every single prompt into an LLM tool, by contrast, produces some form of value. I have never once had an entirely wasted prompt, unless you count the AI service literally crashing and returning a "Service Unavailable" type error.

One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.

> Every prompt and answer is contributing value toward your progress toward the final solution

This has not been my experience. Maybe sometimes, but certainly not always.

As an example: asking ChatGPT/Gemini how to accomplish a SQL data transformation set me back in finding the right answer, because the answer it gave was plausible but, in the end, super duper not correct. I would've been better off not using it in that case.
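For illustration, here's one classic shape of that failure (hypothetical tables, not the actual query from above): a NOT IN anti-join that an assistant will happily suggest, that runs without error, and that silently returns nothing once the subquery contains a NULL. A minimal sketch in Python with sqlite3:

    # Hypothetical example of a plausible-but-wrong SQL answer.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER, name TEXT);
        CREATE TABLE orders (id INTEGER, customer_id INTEGER);
        INSERT INTO customers VALUES (1, 'ada'), (2, 'bob');
        INSERT INTO orders VALUES (10, 1), (11, NULL);  -- orphaned order
    """)

    # The plausible answer: "find customers with no orders" via NOT IN.
    # Because the subquery yields a NULL, the predicate is never true,
    # so this silently returns zero rows.
    wrong = conn.execute(
        "SELECT name FROM customers "
        "WHERE id NOT IN (SELECT customer_id FROM orders)"
    ).fetchall()

    # A correct formulation: NOT EXISTS is NULL-safe.
    right = conn.execute(
        "SELECT name FROM customers c WHERE NOT EXISTS "
        "(SELECT 1 FROM orders o WHERE o.customer_id = c.id)"
    ).fetchall()

    print(wrong)  # []         -- looks like "no such customers"
    print(right)  # [('bob',)] -- the actual answer

The point isn't this particular bug; it's that the wrong query is indistinguishable from the right one until you run it against data that exercises the edge case.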

Brings to mind "You can't build a ladder to the moon"


In your anecdote I still see this as producing value. If I was lacking in knowledge about the problem space, and therefore fell into the trap of pursuing a "plausible but also super duper not correct" answer from an LLM, then I could have easily fallen into that trap solo as well.

But with an LLM, I was able to eliminate this bad path faster and earlier. I also learned more about my own lack of knowledge and improved myself.

I truly mean it when I say that I have never had an unproductive experience with modern AI. Even when it hallucinates or gives me a bad answer, that is honing my own ability to think, detect inconsistencies, examine solutions for potential blind spots, etc.


> One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.

That assumes the value of a solution is linear in the amount completed. If the Pareto principle holds (80% of effects come from 20% of causes), then missing that critical 10+% likely has an outsized effect on the value of the solution. If I have to do the hard, important 20% of the work after taking what the LLM did for the remainder, I haven't gained as much, because I still have to build the state machine in my head: I have to understand the problem space well enough to do that coding anyway.
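To make that nonlinearity concrete, a back-of-the-envelope with assumed Pareto numbers (an illustration, not a measurement):

    # Assumed split: the LLM finishes the easy 80% of the work,
    # but that easy work carries only 20% of the value.
    easy_work, easy_value = 0.80, 0.20
    hard_work, hard_value = 0.20, 0.80

    print(easy_value / easy_work)  # 0.25 -> value per unit of work the LLM did
    print(hard_value / hard_work)  # 4.0  -> remaining work is 16x denser in value

Under those numbers, "90% done" by volume can still mean most of the value, and most of the thinking, is left.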


This isn't a bad thing at all. It just means that AI utilization doesn't have quite the exponential impact that many marketers are trying to sell. And that's okay.

I personally think of AI tools as an incremental aid that lets me focus more of my effort on the really hard 10-20% of the problem, and get paid more to excel at what I already do best.


This assumes you can easily and reliably identify the 10% you need to fix.

Why wouldn't you be able to identify the 10% that you need to fix?

AI is not an excuse to turn off your brain. I find it ironic that many people complain that they have a hard time identifying the hallucinations in LLM-generated content, and then also complain that LLMs are making their users dumber.

The problem here is also the solution. LLMs make smart people even smarter, because they get even better at thinking about the hard parts, while not wasting time thinking about the easy parts.

But people who don't want to think at all about what they are doing... well, they do get dumber.


It is extremely well known in the world of programming that reading code is substantially harder than writing it. Just because you have the code in front of you does not mean that determining that it is correct is a trivial (or even moderately easy) task.

That's right. I don't think that AI makes coding easy or trivial. What it does do is accelerate your ability to get past the easy and trivial stuff to the hard parts.

When you get deep into engineering with AI you will find yourself spending a dramatically larger percentage of your time thinking about the hardest things you have ever thought about, and dramatically less time thinking about basic things that you've already done hundreds of times before.

You will find the limits of your abilities, then push past those limits like a marathon runner gaining extra endurance from training.

I think the biggest lie in the AI industry is that AI makes things easier. No, if anything you will find yourself working on harder and harder things because the easy parts are done so quickly that all that is left is the hard stuff.
