> I have bad news for everyone. Nothing in computing is synchronous.
I think you need to sit this one out. This sort of vacuous pedantry does no one any good, and it ignores that it's perfectly fine to model and treat some calls as synchronous, including plain old HTTP ones. Just because everything is a state machine does not mean you buy yourself anything of value by modeling everything as a state machine.
I think you should have sat it out, actually. The “vacuous pedantry” is responsible for a huge class of bugs in computing. It's juniors misunderstanding how these processes work, or developers in general not understanding where the wait is, that causes all kinds of race conditions. So yes, you absolutely buy yourself something by understanding and accounting for how the processes you're wielding actually work.
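A tiny TypeScript sketch of the kind of bug I mean (the names and the delay are made up; the missing await is the point):

    // saveUser() returns a Promise, but the caller below treats it as if
    // it completed synchronously.
    async function saveUser(db: Map<string, string>, id: string, name: string): Promise<void> {
      await new Promise((resolve) => setTimeout(resolve, 10)); // simulated network latency
      db.set(id, name);
    }

    const db = new Map<string, string>();
    saveUser(db, "42", "Ada"); // missing await: the "synchronous-looking" call hasn't finished
    console.log(db.get("42")); // prints undefined -- the read races the write

Knowing where the wait actually is makes the fix obvious; not knowing is how these bugs ship.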
> That sounds like a lot of over engineering and a good way to never complete the project. Perfect is the enemy of good.
Strong disagree. Addressing expectable failure modes is not over engineering. It's engineering. How do you put together a system without actually thinking through how it fails, and how to prevent those failure scenarios from taking down your whole operation?
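To make it concrete, here's a minimal sketch of what "thinking through failure" looks like for even a plain HTTP call: a timeout plus bounded retries with backoff. The helper name, URL handling, and limits are illustrative, and it assumes a runtime with fetch and AbortSignal.timeout (modern Node or browsers):

    async function fetchWithRetry(url: string, attempts = 3): Promise<Response> {
      let delayMs = 100;
      for (let i = 0; i < attempts; i++) {
        try {
          // AbortSignal.timeout turns a hung connection into a thrown error
          return await fetch(url, { signal: AbortSignal.timeout(2000) });
        } catch (err) {
          if (i === attempts - 1) throw err; // out of retries: surface the failure
          await new Promise((resolve) => setTimeout(resolve, delayMs));
          delayMs *= 2; // back off so we don't hammer a struggling dependency
        }
      }
      throw new Error("unreachable");
    }

None of this is exotic. It's the baseline cost of admitting the call can fail at all.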
> Didn't we get to the point where we realized that microservices cause too much trouble down the road?
That's a largely ignorant opinion to have. Like any architecture, microservices have clear advantages and tradeoffs. It makes no sense to throw vague blanket statements at an architecture style because you assume it "causes trouble", particularly when you know nothing about the requirements or constraints, and no architecture is bulletproof anyway.
For sure. There are some systems I would hate to build as a monolith and some systems I would hate not to. There's a good reason microservices showed up.
To be fair I've never worked at Amazon, but at this point they have 1.6 million employees worldwide. I don't care what their hiring brochures say, if you think they don't suffer the same ailments as every corporation that size I have a bridge to sell you.
Certain sectors are high performing centers of excellence whose staff write blog posts that get posted to HN, publish papers, get put on the covers of hiring media and give speeches. The majority of the company is somewhere in the middle holding down their relatively uneventful but important functions, and probably a larger chunk than Corporate leadership would like to acknowledge are deadweight hiding in the cracks.
Yeah, if that culture is actually widespread I imagine their deadweight is more the variety that's figured out how to game the system or has connections, rather than the "I'm going to do literally no work and watch youtube all day" varieties that I've witnessed.
>Is Hungary a paragon of growth and strategic planning?
Kinda, yeah. Foreign investors love doing business with dictators due to guaranteed stability versus the political pendulum swinging every 4 years.
Hence why more manufacturing is opening in Hungary:
BMW: Opened a new, fully electric vehicle plant in Debrecen in September 2025, with series production of the BMW iX3 starting in late October 2025.
Mercedes-Benz: Announced expansion of its Kecskemét plant to start producing the A-Class model in 2026, further solidifying its presence in Hungary where it already manufactures other models.
BYD: The Chinese EV maker is building its first European production facility in Szeged, with operations set to begin by the end of 2025 or early 2026. The €4 billion plant will initially produce tens of thousands of vehicles annually, focusing on EVs for the European market.
CATL: The world's largest battery manufacturer is constructing a massive plant in Debrecen, expected to start production in early 2026. This supports Hungary's EV ecosystem, supplying batteries to automakers like BMW.
Flex (Flextronics): Inaugurated a new high-tech "NextGen" plant in Zalaegerszeg in November 2025, worth 35 billion HUF (about €90 million). It focuses on electronics and components for the automotive industry, creating 210 jobs in research and production.
> Say what you want about China's monoparty system, but it enables planning that spans decades.
That's not a trait of a monoparty system, or even of any totalitarian autocratic system. There are democratic multiparty systems with plans that span decades, and autocratic totalitarian regimes that can barely put together a coherent multi-year project.
> when our SW project was going to shit due to insufficient resources and mismanagement from the start, what they did to address it was not to add more developers, but add two managers from other projects to our daily standup, which became a 45-60 minute daily, and I'll let you guess if that improved the product deliverables and team morale.
That's what you do if your goal is to blame developers for the project failing: you double down on management to underline that the root cause was developers going off the rails, and thus the fix is to rein them in with more management. The managers jumping on board have no downside, as the project was either doomed anyway or they get to have flexed their superior management skills to revive it.
> It's not just brain atrophy, I think. I think part of it is that we're actively making a tradeoff to focus on learning how to use the model rather than learning how to use our own brains and work with each other.
I agree with the sentiment, but I would have framed it differently. The LLM is a tool, just like code completion or a code generator. Right now we focus mainly on how to use a tool, the coding agent, to achieve a goal. This takes place at a strategic level. Prior to the inception of LLMs, we focused mainly on how to write code to achieve a goal. That took place at a tactical level, and required making decisions and paying attention to a multitude of details. With LLMs, our focus shifts to a higher level of abstraction. Operational concerns change too: when writing and maintaining code yourself, you focus on architectures that simplify certain classes of changes; when using LLMs, your focus shifts to building context and helping the model implement its changes effectively. The two goals seem related, but are radically different.
I think a fairer description is that with LLMs we stop exercising some skills that are only required or relevant if you are writing your code yourself. It's like driving with an automatic transmission vs manual transmission.
Previous tools have been deterministic and understandable. I write code with emacs and can at any point look at the source and tell you why it did what it did. But I could produce the same program with vi or vscode or whatever, at the cost of some frustration. But they all ultimately transform keystrokes to a text file in largely the same way, and the compiler I'm targeting changes that to asm and thence to binary in a predictable and visible way.
An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now). So teams start cargo culting ways to deal with specific LLMs' idiosyncrasies and your domain knowledge becomes about a specific product that someone else has control over. It's like learning a specific office suite or whatever.
> An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now)
So basically, like a co-worker.
That's why I keep insisting that anthropomorphising LLMs is to be embraced, not avoided, because it gives much better high-level, first-order intuition as to where they belong in a larger computing system, and where they shouldn't be put.
Sort of, except it seems that the more the co-worker does the job, the more my ability to understand atrophies. So soon we'll all be that annoyingly ignorant manager saying, "I don't know, I want the button to be bigger". Yay?
Only if we're lucky and the LLMs cease being replaced with improved models.
Claude has already shown us people who openly say "I don't code and yet I managed this"; right now the command line UI will scare off a lot of people, and people using the LLMs still benefit from technical knowledge and product design skills, if the tools don't improve we keep that advantage…
…but how long will it be before the annoyingly ignorant customer skips the expensive annoyingly ignorant manager along with all of us expensive developers, and has one of the models write them a bespoke solution for less than the cost of off-the-shelf shrink-wrapped DVDs from a discount store?
> using the LLMs still benefit from technical knowledge and product design skills, if the tools don't improve we keep that advantage…
I don't think we will, because many of us are already asking LLMs for help/advice on these, so we're already close to the point where LLMs will be able to use these capabilities directly, instead of just for helping us drive the process.
Indeed, but the output of LLMs today for these kinds of tasks is akin to that of a junior product designer, a junior project manager, a junior software architect, etc.
For those of us who are merely amateur at any given task, LLMs raising us to "junior" is absolutely an improvement. But just as it's possible to be a better coder than an LLM, if you're a good PM or QA or UI/UX designer, you're not obsolete yet.
> and can at any point look at the source and tell you why it did what it did
Even years later? Most people can’t unless there are good comments and design. Which AI can replicate, so if we need to do that anyway, how is AI any worse than a human looking back at code written poorly years ago?
I mean, Emacs's oldest source files are like 40 years old at this point, and yes they are in fact legible? I'm not sure what you're asking -- you absolutely can (and if you use it long enough, will) read the source code of your text editor.
The little experience I have with LLMs shows quite convincingly that they are much better at navigating and modifying a well-structured code base. And they struggle, sometimes to the point where they can't progress at all, when tasked to work on bad code. I mean, the kind of bad you always get after multiple rounds of unsupervised vibe coding.
> I'm building my entire back-end on CF Workers. The DX was really frustrating starting out, but I'm using Rust/WASM, which means most of my issues get caught at compile time.
Cloudflare Workers support WASM, which is how they support any runtime beyond JavaScript. Cloudflare Worker's support for WASM is subpar, which is reflected even in things like the Terraform provider. Support for service bindings such as KV does not depend on WASM or anything like that: you specify your Wrangler config and you're done. I wonder what you are doing to end up making a series of suboptimal decisions.
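For illustration, here's roughly what a KV binding looks like in a module Worker. The binding name and key are made up, not from your setup, and the types come from @cloudflare/workers-types:

    // Assumes a wrangler config declaring the binding, e.g. in wrangler.toml:
    //   [[kv_namespaces]]
    //   binding = "MY_KV"
    //   id = "<namespace-id>"
    interface Env {
      MY_KV: KVNamespace; // type provided by @cloudflare/workers-types
    }

    export default {
      async fetch(req: Request, env: Env): Promise<Response> {
        const value = await env.MY_KV.get("some-key"); // the runtime injects the binding
        return new Response(value ?? "not found");
      },
    };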
The bindings are still done at the JS level. But to answer your question, I'm building a git workflow engine (kind of a lightweight GitHub Actions alternative; see https://codeinput.com). In that context, you get lots of events and payloads from Git/GitHub that typically require very little resources to respond to or relay.
The worker model made sense, so I developed the whole app around it. Now of course, knowing what I know today, I might have considered different options. But at the time, I read the description (and had some Cloudflare Workers experience) and thought this looked good.
> Most people won't care because the extent of their debugging skills is console.log, echo, print. repeat 5000 times.
I don't agree. The first thing any developer does when starting a project is set up their development environment, which includes being able to debug locally. Stdout is the absolute last option on the table, used when all else fails.
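For example (illustrative, not anyone's actual setup), in Node you can pause on a real breakpoint and inspect state live instead of printing:

    // Compile to JS, then run: node --inspect-brk app.js
    // and attach Chrome DevTools or your IDE's debugger.
    function computeTotal(prices: number[]): number {
      debugger; // execution pauses here with a debugger attached; inspect `prices` directly
      return prices.reduce((sum, p) => sum + p, 0);
    }

    console.log(computeTotal([1.5, 2.25, 3]));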