When one side of the discussion makes their ignorance a point of pride, defining themselves entirely based on what they now need not know, I believe you've inadvertently insulted yourself.
I don't see many on one "side" with prideful ignorance. There are a few loud ones, sure. And I love to see the many ideas I've not had the time to implement come to fruition, even if I don't get the same satisfaction; it doesn't seem as much "mine" as if I did it all by hand. However, once the tool is built, I use it to build more things. More tools.
Not having to know the lower levels means you can free your mind for things at higher levels of abstraction. It doesn't leave a void in your brain; you fill the space with other things.
I've noticed that too and it's not too different from political discussions. At the end of the day, I think the split is really about different values people have, their identity, and justice.
A lot of developers' identities are tied to their ability to create quality solutions, as well as to having control over the means of production (for lack of a better term). An employer mandating that they start using AI more and change their quality standards is naturally going to lead to a sense of injustice about it all.
I work with/am friends with many junior-ish developers who are in the same place as you (got into programming in their late 20s around the 2020 hiring cycle). I'm very sorry for the stress you're dealing with.
I don't know if this describes your situation, but I know many people who are dealing with positions where they have no technical mentorship, no real engineering culture to grow in, and a lot of deadlines and work pressure. Coupled with this, they often don't have a large social group within programming/tech, because they've only been in it for a few years and have been heads down grinding to get a good job the whole time. They're experiencing a weird mixture of isolation, directionlessness, and intense pressure. The work is joyless for them, and they don't see a future.
If I can offer any advice, be selfish for a bit. Outsource as much as you want to LLMs, but use whatever time savings you get out of this to spend time on programming-related things you enjoy. Maybe work the tickets you find mildly interesting without LLMs, even if they aren't mission critical. Find something interesting to tinker with. Learn a niche language. Or slack off in a discord group/make friends in programming circles that aren't strictly about career advancement and networking.
I think it's basically impossible to get better past a certain level if you can't enjoy programming, LLM-assisted or otherwise. There's such a focus on "up-skilling" and grinding through study materials in the culture right now, and that's all well and good if you're trying to pass an interview in 6 weeks, but all of that stuff is pretty useless when you're burned out and overwhelmed.
Yea, I never had real mentorship and I'm responsible for 6 projects as a solo developer. I'm heavily against using LLMs for my tasks, as that's just passionless, mind-numbing back and forth with a machine, trying to get it to spit out stuff I actually understand.
I also learned that I absolutely hate most programmers. No offense. But most I've been talking to have a complete lack of ethics. I really love programming, but I have a massive issue with how industry-scale programming is performed (just offloading infra to AWS, just pulling in random JS libs for everything, buying design templates instead of actually building components yourself, 99% of apps being simple CRUD, and I am so incredibly tired of building HTTP-based apps, web forms, and whatnot...)
I love tech, but the industry does not have a soul. The whole joy of learning new things is diminishing the more I learn about the industry.
I think it's fair to question the use of the term "engineering" throughout a lot of the software industry. But to be fair to the author, his focus in the piece is on design patterns that require what we'd commonly call software engineering to implement.
For example, his first listed design pattern is RAG. To implement such a system from scratch, you'd need to construct a data layer (commonly a vector database), retrieval logic, etc.
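To make that concrete, here's a minimal sketch of the retrieval half of such a system. The bag-of-words `embed()` is a hypothetical stand-in for a real embedding model, and a plain NumPy array stands in for the vector database:

```python
import numpy as np

# A toy corpus; in a real system these would be chunks of your documents.
docs = [
    "The capital of France is Paris.",
    "Transformers use attention mechanisms.",
    "Photosynthesis converts light into energy.",
]
vocab = sorted({w for d in docs for w in d.lower().replace(".", "").split()})

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for an embedding model: normalized word counts.
    words = text.lower().replace(".", "").split()
    v = np.array([words.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

index = np.stack([embed(d) for d in docs])  # the "vector database"

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = index @ embed(query)  # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passage is then stuffed into the LLM prompt as context.
top = retrieve("attention in transformers")[0]
prompt = f"Context: {top}\n\nQuestion: how do transformers work?"
```

The engineering work in real RAG systems lives in each of these stand-ins: chunking strategy, the embedding model, the index structure, and how retrieved context is assembled into the prompt.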
In fact I think the author largely agrees with you re: crafting prompts. He has a whole section admonishing "prompt engineering" as magical incantations, which he differentiates from his focus here (software which needs to be built around an LLM).
I understand the general uneasiness around using "engineering" when discussing a stochastic model, but I think it's worth pointing out that there is a lot of engineering work required to build the software systems around these models. Writing software to parse context-free grammars into masks to be applied at inference, for example, is as much "engineering" as any other common software engineering project.
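A toy sketch of just the masking step, with a made-up six-token vocabulary and hand-written logits (no real tokenizer or grammar compiler; a real implementation derives the allowed set from the grammar state at each decoding step):

```python
import numpy as np

vocab = ["{", "}", '"key"', ":", "42", "hello"]
logits = np.array([0.1, 2.0, 1.5, 0.3, 0.7, 3.0])

# Suppose the grammar says a JSON object must open with "{".
# Constrained decoding sets the logit of every disallowed token to -inf.
allowed = {"{"}
mask = np.array([0.0 if tok in allowed else -np.inf for tok in vocab])

masked = logits + mask
probs = np.exp(masked - masked.max())
probs /= probs.sum()

# All probability mass now sits on grammar-legal tokens.
print(vocab[int(np.argmax(probs))])  # prints {
```

Compiling a full context-free grammar into these per-step masks efficiently (rather than re-checking every token against the grammar at every step) is exactly the kind of non-trivial engineering problem the comment above is pointing at.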
Most of the inference techniques (what the author calls context engineering design patterns) listed here originally came from the research community, and there are tons of benchmarks measuring their effectiveness, as well as a great deal of research behind what is happening mechanistically with each.
As the author points out, many of the patterns are fundamentally about in-context learning, and this in particular has been subject to a ton of research from the mechanistic interpretability crew. If you're curious, I think this line of research is fascinating: https://transformer-circuits.pub/2022/in-context-learning-an...
Any of the "design patterns" listed in the article will have a ton of popular open source implementations. For structured generation, I think outlines is a particularly cool library, especially if you want to poke around at how constrained decoding works under the hood: https://github.com/dottxt-ai/outlines
Based on the comments, I expected this to be slop listing a bunch of random prompt snippets from the author's personal collection.
I'm honestly a bit confused at the negativity here. The article is incredibly benign and reasonable. Maybe a bit surface-level, but at a glance it gives fair and generally accurate summaries of the actual mechanisms behind inference. The examples it gives for "context engineering patterns" are actual systems you'd need to implement (RAG, structured output, tool calling, etc.), not just random prompts, and they're all subject to pretty thorough investigation from the research community.
The article even echoes your sentiments about "prompt engineering," down to the use of the word "incantation". From the piece:
> This was the birth of so-called "prompt engineering", though in practice there was often far less "engineering" than trial-and-error guesswork. This could often feel closer to uttering mystical incantations and hoping for magic to happen, rather than the deliberate construction and rigorous application of systems thinking that epitomises true engineering.
There’s nothing particularly wrong with the article - it’s a superficial summary of stuff that has historically happened in the world of LLM context windows.
The problem is - and it’s a problem common to AI right now - you can’t generalize anything from it. The next thing that drives LLMs forward could be an extension of what you read about here, or it could be a totally random other thing. There are a million monkeys tapping on keyboards, and the hope is that someone taps out Shakespeare’s brain.
I don't really understand this line of criticism, in this context.
What would "generalizing" the information in this article mean? I think the author does a good job of contextualizing most of the techniques under the general umbrella of in-context learning. What would it mean to generalize further beyond that?
There's been a decent chunk of research in this direction over the years. Michael O'Boyle is pretty active as a researcher in the space, if you're looking for stuff to read: https://www.dcs.ed.ac.uk/home/mob/
LoRA is built on low-rank factorization, and singular value decomposition is the standard tool for finding the best low-rank approximation of a matrix (the original LoRA trains the low-rank factors directly, with SVD showing up in the paper's analysis). In different optimizers, you'll also see eigendecomposition or some approximation of it used (I think Shampoo does something like this, but it's been a while).
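As a plain-NumPy illustration of the low-rank idea (a toy, not any particular optimizer or fine-tuning library): truncated SVD gives the best rank-r approximation of a matrix, which is the shape of factorization LoRA-style adapters exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))  # stand-in for a weight matrix

# Truncated SVD: keep only the r largest singular values/vectors.
U, S, Vt = np.linalg.svd(W, full_matrices=False)

r = 8                      # target rank
A = U[:, :r] * S[:r]       # shape (64, r)
B = Vt[:r, :]              # shape (r, 64)
W_approx = A @ B           # best rank-r reconstruction (Eckart–Young)

# The approximation error is exactly the energy in the discarded
# singular values.
err = np.linalg.norm(W - W_approx)
print(err, np.sqrt((S[r:] ** 2).sum()))
```

Storing A and B instead of a full-rank update is the whole trick: 2 × 64 × 8 numbers instead of 64 × 64.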