
All I really care about is the end result and, so far, LLMs are nice for code completion, but basically useless for anything else.

They write as much code as you want, and it often sorta works, but it’s a bug-filled mess. It’s painstaking work to fix everything, on par with writing it yourself. Now, you can just leave it as-is, but what’s the use of releasing software that crappy?

I suppose it’s a revolution for that in-house crapware company IT groups create and foist on everyone who works there. But the software isn’t better, it just takes a day rather than 6 months (or 2 years, or 5 years) to create. Come to think of it, it may not be useful for that either… I think the end purpose is probably some kind of brag for the IT manager/exec, and once people realize how little effort is involved it won’t serve that purpose.



I love the subtle mistakes that get introduced in strings for example that then take me all the time I saved to fix.


Do you have an example of this?


Can’t remember account login so created a new account to respond.

I recently used Claude with something along the lines of “Ruby on rails 8, Hotwire, stimulus, turbo, show me how to do client side validations that don’t require a page refresh”

I am new to prompt engineering, so feel free to critique. Anyway, it generated a Stimulus controller called validations_controller.js and then proceeded to print out all of the remaining connected files, but in all of them it referred to the string “validation”, not “validations”. The solution it provided worked great and did exactly what I wanted (though I expected a Turbo Frame-based solution rather than a Stimulus one; still, it did what I asked), with the exception that I had to change every place where it put the string “validation” to “validations” to match the name it used in the Stimulus controller it provided.
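To make the failure mode concrete, here is a minimal sketch of the shape of what it produced (the field names are made up, not the actual output). Stimulus derives the identifier from the filename, so validations_controller.js registers as "validations", and markup that says "validation" just silently never connects:

    // app/javascript/controllers/validations_controller.js  -> identifier "validations"
    import { Controller } from "@hotwired/stimulus"

    export default class extends Controller {
      static targets = ["field", "error"]

      validate() {
        // simple client-side check, no page refresh
        const ok = this.fieldTarget.value.trim() !== ""
        this.errorTarget.textContent = ok ? "" : "Can't be blank"
      }
    }

    <!-- what it printed in the views (singular, never connects): -->
    <div data-controller="validation">

    <!-- what it needs to be to match the controller above: -->
    <div data-controller="validations">
      <input data-validations-target="field" data-action="input->validations#validate">
      <span data-validations-target="error"></span>
    </div>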


Say you hire a developer and ask him to debug an issue by simply skimming through the codebase. Do you think he can complete that task in, say, 5-10 minutes? No, right? In Claude Code (CC), do the following:
1. Run /init, which acts as a project guide.
2. Ask it to summarize the project and save it as summary.md.
3. Make the prompt clear and detailed. Here’s an example: https://imgur.com/a/RJyp3f9
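For anyone who doesn’t want to click through, the session looks roughly like this (the bug and file names below are a made-up illustration, not the prompt in the screenshot):

    $ claude                     # start Claude Code in the repo root
    > /init                      # analyzes the repo and writes a CLAUDE.md project guide
    > Summarize this project's architecture and conventions and save it as summary.md
    > Bug: submitting the signup form twice creates duplicate users (hypothetical example).
      Read summary.md first, find the code path that handles signup, explain the root
      cause, and propose a minimal fix plus a test. Don't touch unrelated files.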


I remember reading the original article for that prompt example and laughing at how long it likely took to write that essay when typing "hikes near San Francisco" into your favoured search engine will do the same thing, minus the hallucinations.


You can ask AI to help with your prompt


are you in the habit of saving bad LLM output to later reference in future Internet disputes?


??? A zillion LLM tools maintain history for you automatically. As long as you remember what the chat was about, it's only a search away.


Have you tried using Cursor rules? [1]

Creating a standard library (“stdlib”) with many (potentially thousands) of rules, and then iteratively adding to and amending the rules as you go, is one of the best practices for successful AI coding.

[1] https://docs.cursor.com/context/rules-for-ai


> …many (potentially thousands) of rules, and then iteratively adding to and amending the rules as you go…

Is this an especially better (easier, more efficient) route to a working, quality app/system than conventional programming?

I’m skeptical if the way to achieve 10x results is 10x more effort.


It's such a fast moving space, perhaps the need for 'rules' is just a temporary thing, but right now the rules will help you to achieve more predictable results and higher quality code.

You could easily end up with a lot of rules if you are working with a reasonably large codebase.

And as you work on your code, every time you hit a code-generation issue you ask Cursor to create a new rule so that next time it does it correctly.
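For example, the singular/plural Stimulus mix-up mentioned upthread could become a rule. A rough sketch, assuming Cursor’s project-rules format (.cursor/rules/*.mdc with a small frontmatter block; check the docs linked above for the exact keys):

    ---
    description: Stimulus controller naming
    globs: app/javascript/controllers/**/*.js
    alwaysApply: false
    ---
    - The identifier comes from the filename: foo_bar_controller.js registers as "foo-bar".
    - Use exactly that identifier in data-controller, data-action and data-*-target attributes.
    - Before finishing, grep the views for near-miss identifiers (singular vs. plural, typos).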

In terms of AI programming vs conventional programming, the writing's on the wall: AI assistance is only getting better and now is a good time to jump on the train. Knowing how to program and configure your AI assistants and tools is now a key software engineering skill.


Or it's just a bubble and after diminishing returns they'll go in the bin with all the blockchain startups lol


10x more effort once, 10x faster programming forever. Also, once you have examples of the rules files, an LLM can write most of them for your next projects.


I think specifying rules could be very useful in the same way as types, documentation, coding styles, advanced linting, semgrep etc.

We could use them for LLM-driven coding-style linting, generating PRs for refactoring, or a business-logic bug detector.

Also, you can just tell Copilot to write rules for you.


at that point aren't you just replacing regular programming with creating the thousands of rules? I suppose the rules are reusable so it might be a form of meta-programming or advanced codegen


People endlessly creating "rules" for their proompts is the new version of constantly tweaking Vim or Emacs configurations


Born too late to explore the world.

Born too early to explore the galaxy.

Born at the right time to write endless proompts for LLMs.



