Clone the dependency you want to use into the directory of your code.
Instruct the agent to go into that directory and look at the code in order to complete task X: "I've got a new directory xyz; it contains a library to do feature abc. I'll need to include it here to do A to function B, and so on."
The weird version-mixing bug will disappear. If the dependency is closed source, do the same with its documentation instead.
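To make #1 concrete, here is a minimal sketch of the vendoring step in Python; the directory name xyz and the module name abclib are invented for illustration, not from the thread:

    # Assumes the library was cloned into ./xyz next to this file.
    import sys
    from pathlib import Path

    # Point imports at the freshly cloned source so the agent resolves
    # symbols against the exact code it just read, not whatever version
    # it memorized during training.
    sys.path.insert(0, str(Path(__file__).resolve().parent / "xyz"))

    import abclib  # hypothetical: the library that provides feature abc

The clone matters more for the agent than for the import: once the real source sits in the working tree, the model reads the actual API instead of guessing at an old one.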
You need to line up the breadcrumbs right.
#2 is "create a patch file that does X. Do not apply it". Followed by "apply the previous patch file". Manually splitting the task fixes the attention.
Another method is to modify the code yourself. Don't use "to-do"; it will get confused. Instead use something meaningless like 1gwvDn, then, at the appropriate place, write:
[1gwvDn: insert blah here]
Then go to the agent and say
"I've changed the file and given you instructions in the form [1gwvDn:<instructions>]. Go through the code and do each individually.
Then the breadcrumbs are right, and it doesn't start deleting giant blocks of code and breaking things.
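As a sketch of what the annotated file might look like before handing it back to the agent (the functions here are hypothetical, not from the thread):

    def load_config(path: str) -> str:
        # [1gwvDn: raise FileNotFoundError with a helpful message
        #  when `path` does not exist]
        with open(path) as f:
            return f.read()

    def parse_port(config: str) -> int:
        # [1gwvDn: fall back to port 8080 when no "port=" line is present]
        for line in config.splitlines():
            if line.startswith("port="):
                return int(line.split("=", 1)[1])

Each marker is a self-contained instruction sitting at the exact spot where the edit belongs, so the model never has to guess the scope of the change.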
#3 You will never start anything unless you convince yourself it's going to be easy. I know some people will disagree with this. They're wrong. You need to tell yourself it's doable before you attempt it.
#4 is because we lose ownership of the code and end up playing manager. So we do the human thing of asking the computer to do increasingly trivial things because it's "the computer's" code. Realize you're doing that and don't be dumb about it.
This is a very typical reply when we see someone pointing out the flaws of AI coding tools.
"You are using it wrong, AI can do everything if you properly prompt it"
Yes, it can write everything if I provide enough context, but it ain't 'Intelligence' if context ~= output.
The point here is that providing enough context is itself challenging and requires expertise, which makes AI IDEs unusable for many scenarios.
We already have a term for prompting a computer in a way that causes it to predictably output useful software; we called that programming, and people on this website used to think that knowing how to do that was a worthwhile field of study.
This is a very typical reply when we see someone excited about AI: a luddite needs to come along and tell them why they should not be so excited or so helpful.
I mostly jest but your comment comes off quite unhelpful and negative. The person you replied to wasn’t blaming the parent comment, just offering helpful tips. I agree that today’s AI tools aren’t perfect, but I also think it’s important for developers to invest time refining their toolset. It’s no different from customizing IDE shortcuts. These tools will improve, but if devs aren’t willing to tinker with what they use, what’s the point?
I think your reply is a bit unfair. The first problem, the AI using outdated libraries, sure, it can be fixed if you are a competent developer who keeps up with industry standards. But if you need someone like that to guide an AI, then "agentic" coding loses a lot of its "agentic" ability, which is what the article is talking about. If you need enthusiasts and experts to steer the AIs, they are a bit like "full self driving" where you are still asked to touch the steering wheel if the car thinks you're not paying attention to the road.
I don't see how it's unfair at all. This is novel technology with rough edges. Sharing novel techniques for working around those rough edges is normal. Eventually we can expect them to be built-in to the tools themselves (see MCP for how you might inject this kind of information automatically.) A lot of development is iterative and LLMs are no exception.
Sure, it is normal, but the reply was also a normal one because, as you said, it's a product with rough edges advertised as if it's completely functional. And the comment I replied to, criticizing that reply, was not fair, because he was only giving a normal reply, imo.
I don't believe it is a normal reply on HN, or at least it shouldn't be.
We all know that this technology isn't magic. In tech spaces there are more people telling you it isn't magic than people telling you it is. The reminder does nothing. The contextual advice on how to tackle those issues does. Why even bother with that conversation? You can just take the advice or ignore it until the technology improves, since you've already made up your mind about how far you or others should be willing to go.
If it doesn't meet the standard of what you believe is advertised, then say that. Not that "workarounds" are problematic because they obfuscate how someone should feel about how the product is advertised. Maybe you are an advertising purist and it bothers you, but why invalidate the person providing context on how to better utilize those tools in their current state?
I didn't say it's magic. I said what it is advertised as.
> The reminder does nothing. The contextual advice on how to tackle those issues does.
No, the contextual advice doesn't help, because the issue it would need to tackle is "It doesn't work as advertised". We are in a thread about an article whose main thesis is "We’re still far away from AI writing code autonomously for non-trivial tasks." Giving advice that doesn't achieve autonomous code writing for non-trivial tasks doesn't help achieve that goal.
And if you want to talk about replies that do nothing: calling the guy a Luddite for saying that the tip doesn't help him use the agent as an autonomous coder is a huge nothing.
> since you've already made up your mind about how far you or others should be willing to go.
Please read the article and understand what the conversation is about. We are talking about the limits that the article outlined, and the poster is saying how he also hit those limits.
> If it doesn't meet the standard of what you believe is advertised, then say that.
The article says this. The commenter must have assumed people here read the articles.
> why invalidate the person providing the context into how to utilize those tools in their current state better?
Because that context is a deflection from the main point of the comment and conversation. It's like in a thread of mechanics talking about how an automatic tire balancer doesn't work well, and someone comes in saying "Well you could balance the tires manually!" How helpful is that?
It is definitely not unfair. #2 is a great strategy; I'm gonna try it in my agentic tool. We obviously need experts to steer the AIs in this current phase, who said we don't?
> We obviously need experts to steer the AIs in this current phase, who said we don't?
I don't know if it's explicitly said, but if you call it agentic, it sounds like it can do stuff independently (like an agent). If I still need to hand-feed it everything, I wouldn't really call it agentic.
There are two different roles here, the dev that creates the agent and the user that uses the agent. The user should not need to adapt/edit prompts but the dev definitely should for it to evolve. Agents aren't AGI, after all.
> We obviously need experts to steer the AIs in this current phase, who said we don't?
Much of the marketing around agents is about not needing them. Zuck said Meta's internal agent can produce the code of an average junior engineer in the company. An average junior engineer doesn't need this level of steering to know not to include a 4-year-old outdated library in a web project.
And in the time it takes me to line up the breadcrumbs to help this thing emulate an actual thought process, I would probably already have finished doing it myself, especially since I speed up the typing-it-all-out part using a much less "clever" and "agentic" AI system.
Finding a way to convince yourself something is easy is the best trick to unblocking yourself and stopping distractions. This is the closest thing I know to the "1 simple trick" thing.
This feels like when I explain a cool dev tool like Git to a layperson.
They just roll their eyes and keep making copies of the files they work on.
I just roll my eyes at that explanation because it feels exactly like additional work I don't want to do. Doing my stuff the old way works right away, without explanation tricks and setting up context for a tool I expect to do the correct thing on the first go.
It's clearly a preference I feel strongly about. I've been programming for over 30 years, btw; I can do this manually. It's a new tool I'm trying to learn.
I was personally clocking in at about 1% of the openrouter token count every day last year. openrouter has grown quite a bit, but I realize I'm certainly in the minority/on the edge here.