I think this depends on the library. There are some libraries that are just used in a handful of services, and in those cases I don't think that wrapping is worth the overhead.
However, for some of the core platform functionality depended on by all services, we've seen a lot of value in wrapping, because:
- We can provide a more opinionated interface (a third party library is typically more flexible because it has vastly more variety in its use cases)
- We can hook in our own instrumentation easily (e.g. metrics, logging) or change functionality with config/feature flags
- We can transparently change the implementation in the future
I think we've gone back and forth on this over the years. The rate of new services is decreasing, so we have shifted from lots of very small services (low thousands of lines of code) to bigger ones that are more like an entire product (e.g. 100k LoC+). But I wouldn't be surprised if the pendulum swings back again in the future - there are downsides to the larger services, like greater contention with other engineers.
This is a potential downside of this architecture. As others have already said, there are mitigations, but fundamentally each edge request is going to be more costly (time or money) to serve than having it served by a single machine/DB.
One of our engineering principles is to avoid premature optimisation, which is possibly one of the reasons our architecture has grown in this way. So far, whenever we've needed to fix a performance issue we've been able to solve it locally rather than change the architecture.
At the business level, we've been optimising for growth rather than costs, but this could change in the future, at which point we may need to reconsider our architecture. But for now it's working for us.
I think the other responses have explained the definition I had in mind for "anti-value".
"Frugality" is a bit less explicit than "move fast _and break things_. I think the reason it could be considered an anti-value is because it implies obvious _costs_. For example it can cost time in "shopping around", or it could mean that you miss out on opportunities - for example missing out on a great employee because you pay below market rate.
Thank you for raising the point about contradictions in values. I tried to work it into the original blog post, but I felt like it detracted from my main point.
I previously felt these contradictions are problematic because they make it harder to use values for prioritisation. It's a good point you make about different values being applicable in different contexts. I wonder whether the context can be incorporated into the value?
For example "Bias for action when the cost of failure is low". In an oncall incident, restarting a stateless service is worth trying even before the problem is understood in depth, because risk of failure is low. There are potential actions in an oncall incident that could quite easily make things much worse - then it's probably worth diving deeper before taking action.
It might not be pithy enough for the value itself, but it's at least worth adding this kind of context to the subtext, like Amazon have done in the page you linked to.
I had a go at doing roughly the same thing a couple of years ago [0]. It was mostly straightforward, but the part I really struggled with was rendering the sea. The encoding of OSM coastlines is quite quirky [1]. When the edge of the rendering intersects with the coastline it's very tricky to compute which side of a coastline is sea and which is land. As an example - how would you render this [2]?
I wasn't sure how I could solve this, so I wrote up a more abstract formulation of the problem here [3], and asked for help. I think the proposed solution makes sense, but I think I would have to implement it by rendering individual pixels and wouldn't be able to lean on a higher-level graphics library.
I'm looking forward to seeing how the author solved this problem.
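For context, OSM's convention is that coastline ways are drawn with land on the left and water on the right of the direction of travel, so locally (near a single segment, before the tile-edge problem kicks in) a 2D cross product tells you which side a point falls on. A minimal sketch in Go, with invented names:

```go
package main

import "fmt"

type Point struct{ X, Y float64 }

// waterSide reports whether p lies on the water side of the coastline
// segment a->b. Per the OSM coastline convention, water is on the right
// of the way's direction of travel; a negative 2D cross product means
// p is to the right of the vector a->b.
func waterSide(a, b, p Point) bool {
	cross := (b.X-a.X)*(p.Y-a.Y) - (b.Y-a.Y)*(p.X-a.X)
	return cross < 0
}

func main() {
	// Segment heading east: water should be to the south (the right).
	a, b := Point{0, 0}, Point{1, 0}
	fmt.Println(waterSide(a, b, Point{0.5, -1})) // true
	fmt.Println(waterSide(a, b, Point{0.5, 1}))  // false
}
```

This only resolves the local orientation question; deciding which regions of a tile are sea when no coastline crosses it at all still needs the more global approach discussed above.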
Potentially dumb suggestion: what if one just declares anything that has a significant chunk of non-bridge or non-tunnel street on it land? It's not a universal solution but perhaps sufficient for the application?
I recently made a map (using Generic Mapping Tools) of one of the Aleutian Islands in Alaska, for one of my dad’s grad students to use in a paper. That would constitute a good example of where you couldn’t just tell by # of built structures (since at certain zoom levels, there are none!)
Sure, but are you likely to have users want to A/B test traffic light configurations there? My suggestion was very specific to the application in question.
You might want to look into OSMCoastline, a separate piece of software written specifically to make the coastline usable for renders: https://osmcode.org/osmcoastline/
A simple solution is to paint the sea with triangles, each with one perfectly horizontal side, always working bottom to top.
You will have to pre-scan the ways for instances where they change from ascending to descending. Here the sea will split into left and right sections. Make a list of these and sort them from top to bottom.
While generating triangles, check that list to see where you should split.
I see this sentiment a lot in the Go community. I think it is reasonable in some cases, but there are many use cases (vanilla CRUD web apps) where a web framework is really helpful.
The standard library is very low level. Want sessions? DIY. Want user auth? DIY. Want CSRF protection? DIY. The list goes on.
It feels like a waste of time implementing these "solved problems" from scratch, but the bigger problem is how easy it is to introduce security vulnerabilities when implementing them yourself - or to forget to implement them at all.
It’s nice to learn concepts from first principles by using the standard library. But once I know how these things work, I’d rather rely on someone else’s battle-tested code and best practices.
Yes, you can add in separate libraries to solve these specific problems, but they are less likely to compose as well as they would in a framework. On top of this, each time you pull in a new library you have to spend time evaluating it. When I use a framework I don't have to think.
The general advice is not to DIY everything using the stdlib; it's to use packages that conform to the stdlib interfaces, because doing so gives you infinite composability. All of the concerns that have been pointed out have good, tested, rock-solid implementations available that you can just drop in, mixing and matching from different authors and frameworks. All because they use the same interface.
This isn't even a new idea. Many Ruby on Rails plugins are actually Rack plugins (even to the point of Rails itself being implemented as a collection of Rack middleware). Rack is the interface that defines how a request is to be handled, similar to the Go stdlib interface.
It's definitely true that idiomatic Go tends towards copying being better than dependencies, but the standard interfaces make it much easier to use and swap tried and tested dependencies because they all share the same interface.
I find that this is a totally fine trade-off to make until it isn't, and by then you're completely confined by your choice of framework. Better to use libraries built to compose around interfaces taken from the standard library, so you lose none of the control. I'll also admit that doing it this way makes discovering the initial pieces harder than if they're all in one framework together, but I find the slight increase in research yields orders-of-magnitude better results once your code ages past the two-year mark and you reap all the reliability and composability of the standard library.
Also, here's a list of out-of-the-box library implementations for all the features you mentioned:
I agree with this. I do believe that if you're writing a prototypical, client-facing, transactional application, a web framework is very useful.
I also advise anyone coming from Django or friends to use a web framework.
But after some time spent with the stdlib, and understanding how some of those implementations work, it gets to the point where I'd rather not read another set of documentation, learn a new mental model, and deal with new bugs - all of which come with a framework.
After a while you realize that the stdlib provides most of what you need, and that writing more vanilla Go can be simpler than learning a full framework.
I will admit, most of my work in Go revolves around internal services that don't deal with web technologies such as CSRF and CORS. So I do acknowledge my opinion here leans toward those use cases.
Pusher | https://pusher.com/ | Ruby on Rails Engineer | £50-£70k + equity | London office or remote (EU timezone preferred) | Full-Time
Pusher’s APIs provide realtime capabilities to thousands of developers around the globe. Every day billions of messages are sent through millions of WebSocket connections to our servers. Through our many SDKs we make it easy for developers to build amazing realtime features like chat, live updates, and collaborative tools.
We are looking for a RoR engineer to help build features for our dashboard. Our customers use our dashboard to manage their apps and get insights on their usage. We have a fairly large RoR code base, which recently got refreshed (upgraded to RoR 5.2) and has begun using view-components and webpack. We run it all on Heroku.
There are still opportunities to modernise and simplify the system as well as adding new exciting features to the dashboards and this role will be essential to achieving those goals. You will work closely together with our Front End engineer on new features and app performance.