Hacker News

From my testing, the robots seem to 'understand' the code rather than just learn how to do thing X from reading code that does X. I've thrown research papers at them and they just 'get' what needs to be done to take the idea and implement it as a library or whatever. Or, what has become my favorite activity of late: give them some code and ask them how they would make it better -- then take that and split it up into simpler tasks, because they get confused if you ask them to do too much at one time.

As for debugging, they're not so good at that. Some debugging they can figure out, but if they need to do something simple, like counting how far away item A is from item B, I've found you pretty much have to do that for them. Don't get me wrong, they've found some pretty deep bugs I would have spent a bunch of time tracking down in gdb, so they aren't completely worthless, but I have definitely given up on the idea that I can just tell them the problem and they'll get to work fixing it.

And, yeah, they're good at writing tests. I usually work on Python C modules, and my typical testing is playing with the module in the REPL, but my current project is getting fully tested at the C level before I've even gotten around to the Python wrapper code.

Overall it's been pretty productive using the robots: code is being written that I wouldn't have spent the time on otherwise, unit tests make sure they don't break anything as the project progresses, and the codebase is being kept pretty sound because I know enough to see when they're going off the rails, as they often do.



