
> auto run your code, compile it, feed errors back to the LLM,

Can't wait for companies to juice profits by having the LLM run excessive cycles or get stuck in a loop and run up my bill
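For what it's worth, the safeguard I'd want before letting anything loop on my dime is just a hard cap. Here's a minimal Python sketch of the kind of compile-then-feed-errors-back loop being described, with a ceiling on paid calls; ask_llm_for_fix is a hypothetical stand-in, not aider's actual API or what any hosted service really does internally:

    import subprocess

    MAX_ATTEMPTS = 5  # hard ceiling on billable LLM calls

    def ask_llm_for_fix(source: str, errors: str) -> str:
        # Hypothetical stand-in for the real (paid) LLM call; not aider's API.
        raise NotImplementedError("plug in your LLM client here")

    def compile_ok(path: str) -> tuple[bool, str]:
        # Example check: byte-compile a Python file and capture compiler errors.
        proc = subprocess.run(["python", "-m", "py_compile", path],
                              capture_output=True, text=True)
        return proc.returncode == 0, proc.stderr

    def fix_until_it_compiles(path: str) -> None:
        for attempt in range(MAX_ATTEMPTS):
            ok, errors = compile_ok(path)
            if ok:
                print(f"clean compile after {attempt} fix(es)")
                return
            fixed = ask_llm_for_fix(open(path).read(), errors)
            with open(path, "w") as f:
                f.write(fixed)
        raise RuntimeError(f"still failing after {MAX_ATTEMPTS} attempts, stopping here")

Whatever the tools do internally, I'd want that MAX_ATTEMPTS knob exposed to me, not buried behind someone's billing incentives.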




Aider jams the backend on my PC from time to time; I have to kill the TCP connection or the Python process to stop it from running the GPU. I can't imagine paying for tokens and not knowing whether it's working or wasting money.


The loops and constant useless changes drive me nuts haha


Aider successfully made, 1-shot, a 2048 clone in architect mode: serverless, local HTML+JS+CSS. I pushed the git repo it made to my GitHub as aider2048clone. I used a deepseek-r1-llama-70b distill, and it took ~3 hours. After the first 10 minutes I didn't want to interrupt it, because who cares how long it takes if it works?

I haven't been able to get it to do anything but waste my tokens with deepseek itself as the backend (aider --model deepseek[/deepseek-reasoner|/deepseek-chat], I think, but am not certain).


I think architect mode might be worth looking at, but I'm going to attempt aider.exe $(*.txt), then switch to /ask mode and see if it can be used as a 0-shot document query.

Because even a rudimentary, garbage implementation would be fun to have, I think.



