Interesting about the auto-completion. That was really the only Copilot feature I found to be useful. The idea of writing out an English prompt and telling Copilot what to write sounded (and still sounds) so slow and clunky. By the time I've articulated what I want it to do, I might as well have written the code myself. The auto-completion was at least a major time-saver.

"The card game state is a structure that contains a Deck of cards, represented by a list of type Card, and a list of Players, each containing a Hand which is also a list of type Card, dealt randomly, round-robin from the Deck object." I could have input the data structure and logic myself in the amount of time it took to describe that.



I think you should embrace a bit of ambiguity. Don't treat this like a stupid computer where you have to specify everything in minute detail. Certainly, the more detail you give, the better, up to a point. But really: treat it like you're talking to a colleague and give it a shot. You don't have to get it right on the first prompt. You see what it did and you give it further instructions. Autocomplete is the least compelling feature of all of this.

Also, I don't remember what model Copilot uses by default, especially the free version, but the model absolutely makes a difference. That's why I say to spend the $20. That gives you access to Sonnet 4, which is where, IMO, these models took a giant leap forward in quality of output.


Is Opus as big a leap as Sonnet 4 was?


Thanks, I shall give it a try.


One analogy I have been thinking about lately is GPUs. You might say: "In the amount of time it takes me to fill memory with the data I want, copy it from RAM to the GPU, let the GPU do its thing, and copy the result back to RAM, I might as well have just done the task on the CPU!"

I hope when I state it that way you start to realize the error in your thinking process. You don't send trivial tasks to the GPU because the overhead is too high.

You have to experiment and gain experience with agent coding. Just imagine that there are tasks where the overhead of explaining what to do and reviewing the output is dwarfed by the actual implementation. You have to calibrate yourself so you can recognize those tasks and offload them to the agent.
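
A toy way to put that calibration rule (the numbers are whatever estimates you plug in, and those only get accurate with experience):

    // Toy break-even rule: handing a task off only pays when the fixed
    // costs (explaining it + reviewing the output) come in under the cost
    // of just doing it yourself. Same logic as deciding what to send to a GPU.
    static boolean worthOffloading(double explainMinutes,
                                   double reviewMinutes,
                                   double doItMyselfMinutes) {
        return explainMinutes + reviewMinutes < doItMyselfMinutes;
    }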


There's a sweet spot in terms of generalization. Yes, painstakingly writing out an object definition in English just so that the LLM can write it out in Java is a poor use of time. You want to give it more general tasks.

But not too general, because then it can get lost in the sauce and do something profoundly wrong.

IMO it's worth the effort to get to know these tools, because once you have a more intuitive sense for the right level of abstraction, it really does help.

So not "make this very basic data structure for me based on my specs", and more like "rewrite this sequential logic into parallel batches", which might take some actual effort but also doesn't require the model to make too many decisions by itself.

It's also pretty good at tests, which tend to be very boilerplate-y, and by default that means you skip some cases, do a lot of brain-melting typing, or copy and paste liberally (and suffer the consequences when you miss that one search-and-replace). The model doesn't tire, and it's a simple enough task that the reliability is high. "Generate test cases for this object, making sure to cover edge cases A, B, and C" is a pretty good ROI in terms of your-time-spent vs. results.
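
The output of that kind of prompt tends to look something like this (the Hand class and its rules are invented here just so the tests have something concrete to exercise; assumes JUnit 5 on the classpath):

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Invented class under test, kept minimal so the example stands alone.
    class Hand {
        private final List<Integer> cards = new ArrayList<>();

        void add(int faceValue) {
            if (cards.contains(faceValue))
                throw new IllegalArgumentException("duplicate card");
            cards.add(faceValue);
        }

        int score() {
            return cards.stream().mapToInt(Integer::intValue).sum();
        }
    }

    class HandTest {
        @Test
        void emptyHandScoresZero() {            // edge case A: empty input
            assertEquals(0, new Hand().score());
        }

        @Test
        void singleCardScoresItsFaceValue() {   // edge case B: single element
            Hand hand = new Hand();
            hand.add(7);
            assertEquals(7, hand.score());
        }

        @Test
        void duplicateCardIsRejected() {        // edge case C: duplicates
            Hand hand = new Hand();
            hand.add(7);
            assertThrows(IllegalArgumentException.class, () -> hand.add(7));
        }
    }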



