
Is the point of this to actually assign tasks to an AI to complete end to end? Every task I do with AI requires at least some hand-holding, sometimes reprompting, etc. So I don't see why I would want to run tasks in parallel; I don't think it would increase throughput. Curious whether others have had better experiences with this.


The example use cases in the videos are pretty compelling, and much smaller in scope.

“Here’s an error reported to the oncall. Take a stab at fixing it.” (Could be useful even if it fails.)

“Refactor this small piece I noticed while doing something else.” Small-scoped stuff that likely wouldn’t get done otherwise.

I wouldn’t ask LLMs for full features in a real codebase, but these examples seem within the scope of what they might be able to accomplish end to end.


I am working with a third-party API (Exa.ai), and I hacked together a Python script. I ran remote agents to do these tasks simultaneously (augment.new; I’m not affiliated, I have early access):

Agent 1: write tests, make sure all the tests pass.

Agent 2: convert the Python script to FastAPI.

Agent 3: create a frontend based on the FastAPI endpoints.

Each agent opens a PR; I review the code, check that it works, and merge to main. All three PRs worked flawlessly (the frontend wasn’t pretty).
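The fan-out-and-review workflow described above can be sketched with stdlib concurrency. This is a minimal illustration, not the actual tool: run_agent is a hypothetical stand-in for dispatching one task to a remote agent, and a real agent would return a pull request to review rather than a string.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for handing a task to a remote coding agent.
# In the real workflow each call would end in a PR to review by hand.
def run_agent(task: str) -> str:
    return f"PR for: {task}"

tasks = [
    "write tests, make sure all the tests pass",
    "convert the Python script to FastAPI",
    "create a frontend based on the FastAPI endpoints",
]

# The tasks are independent, so they can run simultaneously;
# results come back in task order for review before merging.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    prs = list(pool.map(run_agent, tasks))

for pr in prs:
    print(pr)
```

The point of the sketch is only that the tasks don't block one another, which is where running agents in parallel could plausibly increase throughput.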


With a bad AI it is pointless; with a good AI it is powerful.

codex-1 has been quite good in my experience.



