blakehaswell's comments | Hacker News

One thing that might not be obvious about checklists is how they're used.

I used to think checklists were used by reading the item, then doing the thing. I literally thought of them as a recipe that you would follow. Complete a step, check the box, repeat... This is typically referred to as a "read, do" checklist. In aviation this style of checklist is typically reserved for non-normal operations—procedures that you wouldn't use often enough to commit to memory.

The other style of checklist is "do, confirm". In this style you complete a procedure from memory, then read through a checklist to ensure you didn't miss anything (no box ticking, you just read the items and confirm to yourself that they're complete). In aviation this is the style used for normal operations, and for the initial action items in an emergency (which, although not commonly used, must be committed to memory because they are time-critical).

Because you're expecting the procedure to be completed from memory, a "do, confirm" checklist can be extremely brief. You don't need to write out in detail what each step involves; you just need a word or two to name the step. They're also an extremely low operational burden: it takes a couple of seconds to read through a "do, confirm" checklist, but the upside of catching common errors is significant.
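
To make the contrast concrete, here is a small TypeScript sketch (my own illustration, with invented step names for a software deploy) of the same procedure written in both styles; the point is just how brief a "do, confirm" list can be:

    // Hypothetical pre-deploy procedure written in both checklist styles.

    // "Read, do": each item spells out the full instruction; you read a step,
    // perform it, then move on to the next.
    const readDo: string[] = [
      "Run the full test suite and confirm every test passes",
      "Tag the release commit with the new version number",
      "Build the production artifact and verify the build output",
      "Notify the on-call channel that a deploy is starting",
    ];

    // "Do, confirm": you perform the procedure from memory, then scan a list
    // of one- or two-word names to confirm nothing was missed.
    const doConfirm: string[] = ["Tests", "Tag", "Build", "Notify"];

    // Reading the brief list takes a couple of seconds, which is why the
    // operational burden is so low.
    doConfirm.forEach((item) => console.log(`${item}: confirmed`));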


I feel for everyone working there who is suffering whiplash from being pulled in one direction and then the other.

What can you possibly learn about a new feature in a few hours apart from the gut reaction of a mob? If that's enough to change your mind, what evidence was there in the first place that the feature was a good idea? Probably none.


Yes, WIP (work in progress) limits are a key feature of Kanban.


Citation needed. Whether the server is serving HTML or JSON it still needs to serialise the data, so I don't think serialising JSON is going to be significantly faster than serialising HTML. Plus then the client needs to deserialise that JSON before it can render HTML, so all of that work related to (de)serialising JSON is work which doesn't even need to happen if the server is rendering HTML. Not to mention the work on the client to parse and evaluate the JS which needs to happen before it can even start rendering HTML.

As for data across the wire, GZIP is a thing so again I would want to see real world performance numbers to back your claims.
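
To make that concrete, here is a minimal TypeScript sketch (my own illustration, with made-up data) of the two paths; either way the server does a serialisation pass, and the JSON path adds a client-side parse-and-render step on top:

    // Hypothetical product data; the point is only to compare the two paths.
    interface Product {
      name: string;
      price: number;
    }

    const products: Product[] = [
      { name: "Widget", price: 9.99 },
      { name: "Gadget", price: 24.5 },
    ];

    // Path 1: the server renders HTML directly. One serialisation pass, and
    // the browser can start rendering as soon as the bytes arrive.
    function renderHtml(items: Product[]): string {
      return `<ul>${items.map((p) => `<li>${p.name}: $${p.price}</li>`).join("")}</ul>`;
    }

    // Path 2: the server serialises JSON instead...
    function renderJson(items: Product[]): string {
      return JSON.stringify(items);
    }

    // ...and the client still has to parse it and build the same HTML, after
    // downloading and evaluating the JS that contains this code.
    function clientRender(json: string): string {
      return renderHtml(JSON.parse(json) as Product[]);
    }

    console.log(renderHtml(products));
    console.log(clientRender(renderJson(products)));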


To clarify, are you saying up to two-dozen services for a development team with ~12 developers on it?


Above all I'm saying that sentences like "microservices are better", "monoliths are better", "42 services are the best" are all stupid without context.

What your business does, whether you have 3 people or 10k, what kinds of roles and seniority you have, how long you've been in the project (3 months or 10 years), how crystallised the architecture is, at what scale you operate, what the performance landscape looks like, what pre-deployment quality-assurance policies the business dictates, whether offline upgrades are allowed or you're operating 24/7, which direction the system is evolving in, where the gaps are (scalability, quality...) etc. are all necessary to determine the correct answer.

Building a website for a local tennis club requires different approaches than developing a high-frequency trading exchange, and both are different from the approaches for a system that shows 1bn people one advert or another.

Seeing the world as hotdog and not-hotdog (microservices vs monoliths) makes for infantile conversations. There is nothing inherently wrong with microservices, monoliths, or any of the other approaches to managing complexity, e.g. (the first couple of which are sketched in code after this list):

- refactoring code into shared functions

- encapsulating into classes or typed objects

- encapsulating into modules

- simply arranging code into better directory structures, flattening, naming things better, changing the cross-sections, e.g. by behavior instead of physical-ish classes and objects

- extracting code into packages/libraries inside a monorepo or its own repository, e.g. open-sourcing non-business-specific, generic projects, or relying on a 3rd-party package/library

- extracting into dedicated threads, processes, actors/supervisors etc.

- extracting into a service in a monorepo or a dedicated repository, or creating an internal team to black-box it and communicate via API specs, or using a 3rd-party service

...bonus points for:

- removing code, deleting services, removing nonsensical layers of complexity, simplifying, unifying etc.

etc
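
A tiny TypeScript sketch of the first couple of rungs on that ladder (the "order total" example is invented): the same logic duplicated, then refactored into a shared function, with a note on where a module would come in.

    // Rung 0: the same calculation duplicated at two call sites.
    const cartTotal = [9.99, 24.5].reduce((sum, p) => sum + p, 0) * 1.1;
    const invoiceTotal = [100, 250].reduce((sum, p) => sum + p, 0) * 1.1;

    // Rung 1: refactor the duplication into a shared function.
    function totalWithTax(prices: number[], taxRate = 0.1): number {
      return prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate);
    }

    // Rung 2 and beyond: in a real codebase this function would move into its
    // own module (e.g. `import { totalWithTax } from "./pricing"`), and only
    // later, if ever, into a package, a separate process, or a service.
    console.log(cartTotal, totalWithTax([9.99, 24.5]));
    console.log(invoiceTotal, totalWithTax([100, 250]));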


I don't know what the OP intended, but services can be deployed in-process inside one monolith.


I really enjoyed this. I think looking at the organisational structure through time is a good take that I haven't really seen addressed so clearly before.

I would have liked to see an exploration of the "through time" lens on some of the more micro code-organisation structures he talked about at the end like class hierarchies. It's definitely a common problem in legacy code—there's some idea you want to express but the existing structures make that very difficult and so you end up twisting your idea to fit, further ossifying the existing structures.

I've also seen cases where the organisational structure was changed to effect some change, but the existing code structure made that so difficult that the software never actually changed to reflect the new structure at all; instead the new organisation is just slowed down by coordination costs at the organisational level as well as different coordination costs at the technical level.


> did Facebook, Apple, Amazon, Netflix, Google, etc all make a terrible engineering mistake?

Is that so impossible? There are many other considerations that go into technology choices at these companies. There are trade-offs involved, and for companies with huge teams of developers the considerations need to be very different than for small–medium sized groups of developers.

I would argue that a smaller group of developers can focus much more on user experience and engineering efficiency, whereas a large company has organisational scaling issues and a significant bureaucracy to support. At a large company, engineering considerations come second to a great many things. It would actually be surprising if the trade-offs and choices those companies made were correct for other, very different companies.


What exactly do you mean by multiple entry points? Do you have multiple processes which run independently but are co-located in the same repository or are you talking about something else?


You can have more than one Main function, just pick which one to use when you compile. In our case it’s PHP, so the API entry point uses a different index than jobs, for example. They can be deployed differently and scaled differently, but operationally it’s just deploying the same code/configuration everywhere and the only difference is routing. When you are writing code, you very, very rarely have to worry about which context you are writing for.
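
A minimal sketch of the same idea in TypeScript (the setup described above is PHP, so the names and config here are invented): one shared codebase, two entry points, and the choice of which one runs is a build/routing decision rather than a code decision.

    // Hypothetical single-codebase sketch with two entry points. In the PHP
    // setup described above these would be separate index files; here they
    // are two functions that separate deployments (or routes) would invoke.

    // Shared code used by every entry point.
    function loadConfig(): { port: number; queueName: string } {
      return { port: 8080, queueName: "jobs" };
    }

    // Entry point 1: the API.
    function apiMain(): void {
      const { port } = loadConfig();
      console.log(`API entry point would listen on port ${port}`);
    }

    // Entry point 2: background jobs.
    function jobsMain(): void {
      const { queueName } = loadConfig();
      console.log(`Jobs entry point would consume queue "${queueName}"`);
    }

    // Which entry point a given instance runs is decided at deploy/routing
    // time; one is called here just so the sketch executes.
    apiMain();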


I have done this several times, with various small variations, and find it works well. I don't have a good name for it.

I think it's a variety of cookie-cutter scaling:

https://paulhammant.com/2011/11/29/cookie-cutter-scaling/

But there's nothing in that about using different entrypoints or routing.

FWIW, variations on the theme:

1. A single binary with a single entrypoint, which can play multiple roles simultaneously (UI, API, scheduled jobs), but where different kinds of request are routed to different pools of instances

2. A single binary with multiple embedded configurations, selecting via command line arguments etc, each for a single different role (UI, admin console, data ingestion)

3. A single (Java) binary with multiple entrypoints (main methods), each playing a different role (live calculations, batch calculations, data recording)

4. A single (C++) codebase building multiple binaries (via CMake add_executable), each playing a different role (calculating prices for potatoes, calculating prices for oranges)

5. A single repository with multiple completely separate applications, with some shared submodules, each built separately, playing a different role (receiving transactions, validating transactions, reporting transactions)

That last one is probably not an example of what you are talking about, but it's closer to the one before than to the first in the list. There is a sort of "ring species" [1] shape to this variation.

[1] https://en.wikipedia.org/wiki/Ring_species


Isn't 5 just... microservices in a monorepo?


Yes!


We've been doing this too for some "logical" services. For example, we might have a service which has a REST API but also needs to do long-running processing in response to requests via said REST API. The code for both lives in one repo and can share code, data structure definitions, databases, etc. One container is made, but we deploy it twice with different args. One is set to run the REST API, and the other runs the processing. Both are closely related, but in the cloud we can scale and monitor them separately. It gives a lot of the benefits of "standard" microservices with much less of the code- and repo-level screwing around. It relaxes the general microservices approach that: one service == one git repo == one database == one container == one deployment in GCP/etc.
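
A rough TypeScript sketch of that pattern (the ROLE variable and role names are invented): the same image contains both code paths, and each of the two deployments just passes a different argument.

    // Hypothetical entry point for a container that is deployed twice: each
    // deployment sets ROLE (or passes an argument) to pick which loop to run.
    const role = process.env.ROLE ?? process.argv[2] ?? "api";

    function startRestApi(): void {
      console.log("Serving the REST API; scaled and monitored as its own deployment");
    }

    function startProcessor(): void {
      console.log("Running the long-lived processing; scaled and monitored separately");
    }

    if (role === "api") {
      startRestApi();
    } else if (role === "processor") {
      startProcessor();
    } else {
      throw new Error(`Unknown role: ${role}`);
    }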


I've done this before, it felt like a bit of a hack tbh, but I'm glad to hear someone else is doing it!


> great speed in iterating new features […] makes our customers happy

For me this is a leap. I can't think of many examples of software which I use where new features have actually made me happy. Normally it's just change which forces me to learn something new while I'm in the middle of trying to accomplish something actually productive.

I think our industry over-estimates the value of "new features": in my experience 90% of new features provide neutral or negative value. If, instead of a new feature, the software I used released a performance improvement, then that would actually help make me more productive.


I don’t mean new features in the frameworks, our customers don’t care (and don’t know what frameworks we use, AFAIK).

They care about the features in our products, and the frameworks we use allow us to do cool things fast.


There are lots of publicly available feedback channels; it should be possible to verify your hypothesis.


Back in 2014 Spotify released some videos[1][2] about their engineering culture. From what I understand these were widely cited by consultants and the squad/tribe/chapter model was implemented verbatim at a number of companies. I'm guessing that is "getting Spotified".

[1]: https://engineering.atspotify.com/2014/03/27/spotify-enginee... [2]: https://engineering.atspotify.com/2014/09/20/spotify-enginee...


Allegedly they never actually implemented the model either

https://www.jeremiahlee.com/posts/failed-squad-goals/

