Having read parts of e.g. the "Refactoring" and "Patterns of Enterprise Application Architecture" books, the ThoughtWorks and Fowler web pages and blog posts, "The Clean Coder", and material on distributed computing algorithms, I've been working with a limited set of refactoring terms in my prompts, like "factor out", "factor up", and "extract an interface/superclass from".
TIL according to Wikipedia, the more correct terms are "pull up" and "push down".
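For concreteness, here's a minimal Python sketch of "pull up" (the class names are invented for illustration; "push down" is just the inverse move, relocating a method back into the one subclass that actually needs it):

```python
import json


# Before: each subclass defines its own identical validate().
class CsvExporterBefore:
    def validate(self, rows):
        if not rows:
            raise ValueError("nothing to export")

    def export(self, rows):
        self.validate(rows)
        return ",".join(map(str, rows))


# After "pull up": validate() moves into a shared superclass.
class Exporter:
    def validate(self, rows):
        if not rows:
            raise ValueError("nothing to export")


class CsvExporter(Exporter):
    def export(self, rows):
        self.validate(rows)  # now inherited
        return ",".join(map(str, rows))


class JsonExporter(Exporter):
    def export(self, rows):
        self.validate(rows)  # shared, not duplicated
        return json.dumps(rows)
```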
How should people learn refactoring terms today? Should they, too, train to code, refactor, and track customer expectations without LLMs? There's probably an opportunity to create a good refactoring exercise: with and without LLMs, IDEs, and git diff.
System Prompt, System Message, User, User Prompt, Agent, Subagent, Prompt Template, Preamble, Instructions, Prompt Prefix, Few-Shot examples: to which of these do we add the following?
First, summarize Code Refactoring terms in a glossary.
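As a sketch of where such an instruction could land, using the common OpenAI-style message roles (a widespread convention, not a claim about any single vendor's exact API):

```python
# Where each candidate maps to in an OpenAI-style messages list.
messages = [
    # System prompt / system message / preamble / instructions:
    {"role": "system", "content": "You are a refactoring assistant. "
     "First, summarize Code Refactoring terms in a glossary."},
    # Few-shot examples: prior user/assistant turns shown as context.
    {"role": "user", "content": "Rename variable x to total."},
    {"role": "assistant", "content": "Rename Variable: x -> total"},
    # User prompt: the actual request, optionally with a prompt prefix.
    {"role": "user", "content": "Pull up the duplicate validate() method."},
]
```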
Would the methods of software quality teams, like documentation and tests, prevent this cognitive catch-up: so much code to absorb at once, with how much explanation?
"Generate comprehensive unit tests for this." "Generate docstrings and add comments to this."
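Hand-sketched output of roughly the kind those two prompts tend to produce, for a hypothetical slugify() helper:

```python
import unittest


def slugify(text: str) -> str:
    """Lowercase text and replace runs of whitespace with hyphens.

    (A docstring of the kind a "generate docstrings" prompt produces.)
    """
    return "-".join(text.lower().split())


class TestSlugify(unittest.TestCase):
    """Tests of the kind a "generate comprehensive unit tests" prompt produces."""

    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("a  \t b"), "a-b")

    def test_empty(self):
        self.assertEqual(slugify(""), "")


if __name__ == "__main__":
    unittest.main()
```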
If you build software with genai from just a short prompt, the output will likely be inadequate with regard to the unstated customer specifications, and revisions will be needed. Eventually, a rewrite, or a clone of the by-then-legacy version of the project, will likely be more efficient and maintainable. Will we be attached to the idea of refactoring the code, or to refactoring the prompts and running them again with the latest model?
Retyping is an opportunity to rewrite! ("Punch the keys" -- Finding Forrester)
Are the prompts worth more than the generated code now?
simonw/llm by default saves all prompt inputs and outputs in a SQLite database. Copilot has /save and gemini-cli has /export, but they don't yet autosave or flush the transcript before attempting to modify code based on the prompt output?
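For simonw/llm, the log can be read back with plain sqlite3. The table and column names below follow its documented schema, but verify against your installed version; `llm logs path` prints the database location:

```python
import sqlite3
import subprocess

# simonw/llm prints its log database location via `llm logs path`.
db_path = subprocess.check_output(["llm", "logs", "path"], text=True).strip()

conn = sqlite3.connect(db_path)
# Table/column names here are assumed from llm's documented schema;
# check with: sqlite3 "$(llm logs path)" .schema
for prompt, response in conn.execute(
    "SELECT prompt, response FROM responses ORDER BY datetime_utc DESC LIMIT 5"
):
    print(prompt, "->", (response or "")[:80])
conn.close()
```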
Catch up as a human coder, or catch up the next LLM chat context with the prior chat prompt sequences (and with the manual modifications, which aren't auto-committed distinctly from the LLM response's modifications, but probably should be).
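A rough sketch of that auto-commit idea with plain git (the commit-message convention and the apply_llm_patch() hook are hypothetical, invented here for illustration):

```python
import subprocess


def commit_all(message: str) -> None:
    """Stage and commit everything in the working tree, if anything changed."""
    subprocess.run(["git", "add", "-A"], check=True)
    status = subprocess.run(
        ["git", "status", "--porcelain"], capture_output=True, text=True, check=True
    )
    if status.stdout.strip():  # git commit fails on an empty change set
        subprocess.run(["git", "commit", "-m", message], check=True)


# Before letting the LLM touch the tree, snapshot the human's edits...
commit_all("human: manual edits since last LLM response")
# ...then apply the model's changes and commit them separately.
# apply_llm_patch()  # hypothetical: however your tool writes the edits
commit_all("llm: apply model response")
```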
Code refactoring: https://en.wikipedia.org/wiki/Code_refactoring
"Ask HN: CS papers for software architecture and design?" (2017) https://news.ycombinator.com/item?id=15778396
"Ask HN: Learning about distributed systems?" (2020) https://news.ycombinator.com/item?id=23932271