
> LLMs open up the door to performing radical updates that we'd never really consider in the past. We can port our libraries from one language to another. We can change our APIs to fix issues, and give downstream users an LLM prompt to migrate to the new version automatically, instead of rewriting their code themselves. We can make massive internal refactorings. These are the types of tasks that, in the past, a senior engineer would rightly reject in a project until it's the last possible option. Breaking customers almost never pays off, and it's hard to justify refactoring on a "maintenance mode" project.

> But if it’s more about finding the right prompt and letting an LLM do the work, maybe that changes our decision process.

I don’t see much difference between documenting any breaking changes in sufficient detail for your library consumers to understand them vs “writing an LLM prompt for migrating automatically”, but if that’s what it takes for maintainers to communicate the changes, okay!

Just as long as it doesn’t become “use this LLM which we’ve already trained on the changes to the library, and you just need to feed us your codebase and we’ll fix it. PS: sorry, no documentation.”



There's a huge difference between documentation and prompts [1]. Let me give you a concrete example.

I get requests to "make your research code available on Hugging Face for inference" with a link to their integration guide. That guide is 80% marketing copy about Git-based repositories, collaboration features, and TensorBoard integration. The actual implementation details are scattered throughout.

A prompt would be much more compact.
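
For illustration, a compact prompt for that request might look something like this (a hypothetical sketch, not the actual guide; PyTorchModelHubMixin is huggingface_hub's mixin that adds from_pretrained/push_to_hub to a PyTorch model):

    Add Hugging Face Hub inference support to this repo:
    1. Make the model class also inherit from
       huggingface_hub.PyTorchModelHubMixin so it gets
       from_pretrained() and push_to_hub().
    2. Keep all constructor arguments JSON-serializable so the
       mixin can write them to config.json.
    3. Add an "Inference" section to the README showing
       MyModel.from_pretrained("<org>/<repo>").
    Don't touch the training code or change any public signatures.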

The difference: I can read a prompt in 30 seconds and decide "yes, this is reasonable" or "no, I don't want this change." With documentation, I have to reverse-engineer the narrow bucket that applies to my specific scenario from a one-size-drowns-all ocean.

The person making the request has the clearest picture of what they want to happen. They're closest to the problem and most likely to understand the nuances. They should pack that knowledge densely instead of making me extract it from documentation links and rounds of back-and-forth.

Documentation says "here's everything now possible, you can do it all!" A prompt says "here are the specific facts you need."

Prompts are a shared social convention now. We all have a rough feel for what information you need to provide: you have to be matter-of-fact and specific; you can't be vague. When I ask someone to "write me a prompt," that puts them in a completely different mindset than when they just ask me to "support X".

Everyone has experience writing prompts now. I want to leverage that experience to get cooperative dividends. It's division of labor - you write the initial draft, I edit it with special knowledge about my codebase, then apply it. Now we're sharing the work instead of dumping it entirely on the maintainer.

[1] https://peoplesgrocers.com/en/writing/write-prompts-not-guid...


I was pretty hand-wavy when I made the original comment. I was implicitly thinking of things like the Python sub-interpreter proposal, which had strong pushback from the NumPy engineers at the time (I don't know the current status, whether it's a good idea, etc., just something that came to mind).

https://lwn.net/Articles/820424/

The objections are of course reasonable, but I kept thinking this shouldn't be as big a problem in the future. A lot of the time we want to make changes that aren't _quite_ mechanical, and if they touch a large part of the code base, they're hard to justify. But if we're able to defer these types of cleanups to LLMs, that calculus could change.

I don't want a world with no API stability of course, and you still have to design for compatibility windows, but it seems like we should be able to do better in the future. (More so in mono-repos, where you can hit everything at once).

Exactly as you write, the idea with prompts is that they're directly actionable. If I want to make a change to API X, I can test the prompt against some sample projects to validate that agents handle it well, even do direct prompt optimization, and then share it with end users.
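
A minimal sketch of what that validation loop could look like, assuming pytest-based sample projects; run_agent is a hypothetical hook for whatever coding agent you use:

    import shutil, subprocess, tempfile
    from pathlib import Path

    def tests_pass(repo: Path) -> bool:
        # the downstream project's own test suite is the ground truth
        return subprocess.run(["pytest", "-q"], cwd=repo).returncode == 0

    def validate(prompt: str, sample_repos: list[Path], run_agent) -> float:
        passed = 0
        for repo in sample_repos:
            with tempfile.TemporaryDirectory() as tmp:
                work = Path(tmp) / repo.name
                shutil.copytree(repo, work)
                run_agent(prompt, work)  # hypothetical: agent applies the migration prompt
                passed += tests_pass(work)
        return passed / len(sample_repos)  # fraction of projects migrated cleanly

If the pass rate is low, you iterate on the prompt wording before shipping it in the release notes.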


Yes, there's a difference between "all documentation for a project" and "prompt for specific task".

I don't think there should be a big difference between "documentation of specific breaking changes in a library and how consumers should handle them" and "LLM prompt to change a code base for those changes".

You might call it a migration guide. Or it might be in the release notes, in a special section for Breaking Changes. It might show up in log messages ("you're using this API wrong, or it's deprecated").
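
For instance (a minimal sketch; old_api, new_api, and the URL are made up), the warning itself can carry the migration pointer:

    import warnings

    def new_api(x, strict=False):
        ...  # the replacement API

    def old_api(x):
        warnings.warn(
            "old_api() is deprecated; call new_api(x, strict=True) instead. "
            "Migration prompt: https://example.com/v2-migration-prompt",
            DeprecationWarning,
            stacklevel=2,  # attribute the warning to the caller's line
        )
        return new_api(x, strict=True)

Whether a human or an agent acts on that message, the content is the same.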

Why would describing the changes to an LLM be easier than explaining them to the engineer on the other end of your API change?



