I might try to explain this, but I think I'd miss something or fail to articulate it. Instead, I'd suggest episode 1 of Life of Birds [1].
In the first episode of Life of Birds [1], David Attenborough talks about the evolution of birds, what classifies a bird, and where we draw the line between dinosaurs and birds (he even talks about this elephant bird). And watch the whole documentary if you're interested.
It looks like the author used Medium's import tool[1] but didn't change the gist links to embeds. The source for the import is a gist itself[2], which just has the raw links.
If I, as an upstream developer, license my library under the GPL, it is precisely because I don't want some random developer to use my library without passing the same freedoms on to end users. I intentionally want anyone using my library to stay in the same sphere of freedom (for lack of a better word).
If you see this as problematic/cancerous, that's a pity. The license was designed to preserve the freedom of software users.
When I was younger I lived in Brussels for 3 years, which gave me both different language exposure (French/Flemish vs English) and different cultural exposure. More recently I spent 6 months living in Denver which, although still US-based, helped remind me of the things the Bay Area does differently, from driving to politics to governance to industry.
When we first started writing the data APIs, GraphQL had just come out and was pretty rough around the edges.
We love how the tooling and community around GraphQL have matured over time, and today we are very excited to be announcing our GraphQL support here on HN.
The idea is that you create tables on Postgres and define permissions, and they can then be automatically queried and manipulated over a GraphQL interface from front-end applications.
We have taken great care to make these APIs as efficient as possible. In fact, given the low memory footprint and negligible latency compared to querying Postgres directly, you could even replace the ORM with the GraphQL APIs for most use-cases in your server-side code.
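To make that concrete, here is a minimal sketch of what querying such an auto-generated API might look like from an application. The `users` table, its fields, and the limit are all made-up illustrations, not a real schema — the engine generates query roots from whatever tables you track:

```python
import json

# Hypothetical table and fields: the real names come from your own
# Postgres schema, since the engine derives the GraphQL types from it.
query = """
query {
  users(limit: 10) {
    id
    name
  }
}
"""

# A GraphQL request is just JSON POSTed to the engine's HTTP endpoint.
payload = json.dumps({"query": query})
print(payload)
```

The same shape works for mutations (inserts/updates), which is what makes it a plausible ORM replacement for simple CRUD paths.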
Nope. This is kind of by design. Hasura aims to expose a very light, high-performance abstraction with self-contained access control (you can specify access control based on any relationship in the schema).
For any customisation, there are 2 options:
1) Use schema-stitching on the client-side
2) Write a graphql proxy that proxies to Hasura's graphql engine
In most cases, 2 will rarely be required or fruitful, because the access-control pain will keep increasing as the schema grows!
Custom resolvers will also usually not be required because of Postgres views. Are there any specific use-cases you're thinking about?
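To illustrate the view idea: logic you might otherwise put in a custom resolver — a derived column, a filtered subset — can live in a SQL view, which can then be exposed like any other table. A rough sketch of the concept (shown with sqlite3 purely so it's self-contained; in practice this would be a Postgres view, and all names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, first TEXT, last TEXT, active INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'Lovelace', 1), (2, 'Bob', 'Gray', 0)")

# The "custom resolver" logic lives in SQL: a view exposing only active
# users, with a derived full_name column.
conn.execute("""
    CREATE VIEW active_users AS
    SELECT id, first || ' ' || last AS full_name
    FROM users
    WHERE active = 1
""")

rows = conn.execute("SELECT * FROM active_users").fetchall()
print(rows)  # [(1, 'Ada Lovelace')]
```

Once the view exists, the GraphQL layer can query it exactly as it would a base table, so no resolver code is needed.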
Unfortunately, we don't have subscriptions yet. Most subscriptions implementations are not very useful because they don't handle the state initialisation or atleast-once semantics. Furthermore, add access control to subscriptions and you basically have zero performance. Firebase has a local database which is kept in sync (and handles these problems underneath) which is probably the only way of exposing this functionality "neatly" to frontend developers. Do you have any specific requirements around subscriptions? Would love to hear your inputs!
> Can I stick Hasura in front of my existing Postgres Db? Can I use my own migration tool?
We have internal tooling to make that happen on request on the paid tiers, but the experience is not smooth enough for it to be generally available as a feature yet. Currently, the best way to try out your existing schema is to `pg_dump` and `psql` into Hasura's postgres.
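For reference, that dump-and-restore step is just standard Postgres tooling; the connection strings below are placeholders, not real endpoints:

```shell
# Placeholders only -- substitute your own connection strings.
pg_dump --no-owner --no-acl postgres://user:pass@old-host:5432/mydb > dump.sql
psql postgres://user:pass@hasura-host:5432/hasura_db -f dump.sql
```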
Yes, you can use your own migrations! All you need to do is add metadata to tell Hasura's GraphQL engine to "track" whatever tables you add through your own tools. Hasura's migration tooling is inspired by Rails' tooling and is essentially raw SQL. We'll be adding more docs and sample use-cases in the days to come!
I am not sure how you have read this 4 times, and missed these parts.
> leaving us with no real solution, resolution or a closing to the mystery of why tcp_mem was higher than expected
One user-space program was faulty and was not closing TCP sockets.
> what was done to resolve the memory issue is
The faulty program was fixed.
> Without trying to sound too snarky, I find it highly concerning that the industry is now working with tools like Docker and Kubernetes and we somehow just throw out the fact that these sit on top of Linux.
This I agree with, and this was the learning of the author, which they mention in the article.
You are right, I did somehow miss or dismiss the two sentences that describe the presumed problem once the discussion turned to tcp_mem hitting its max.
To me there was a ton of fanfare about the problem and then two sentences about the problem and its solution. Stopping and starting services is one of the worst ways to find out which service is using a resource, because one of the hardest parts of troubleshooting a problem like this is finding the process in the bad state. Once you find the problem process, you have a wealth of tools provided by Linux to collect data that will help ensure you actually solve the problem once you change code. That is what I expected from a blog post with this title.
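To give a flavor of what I mean — not what the author did, just the standard places I'd look first on a Linux box when tcp_mem is climbing:

```shell
# Kernel-wide TCP accounting: the "mem" field on the TCP line is the
# number of pages counted against the tcp_mem thresholds.
cat /proc/net/sockstat
cat /proc/sys/net/ipv4/tcp_mem    # low / pressure / high thresholds

# Per-process socket inventory: which PIDs hold the most TCP sockets
# (may need root to see other users' processes).
if command -v ss >/dev/null; then
    ss -tanp | grep -o 'pid=[0-9]*' | sort | uniq -c | sort -rn | head
fi
```

Ten minutes with output like that pins the leak on a PID with hard evidence, instead of inferring it from a number dropping after a restart.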
Had Hasura not been mentioned at all, I would likely have passed on my comments -- and probably should have anyway. But there is something that gets my goat about blog posts that are full of text with nothing really all that insightful, only to find out it was an advertisement in the end.
Let me see if I can illustrate my point better.
* 311 words describing the problem and talking about Hasura.
* 81 words describing kubelet.
* 75 words of epiphany that containers run on Linux.
* 54 words of twittering.
* 52 words talking about the actual problem.
* 43 words on a feature request.
* 41 words plugging Hasura.
While writing this other replies have come in so I will try to address them here as well.
>He says it was a user space program not closing sockets.
He does say this, but he completely omits how he came to that conclusion and how he proved it before making a code change. The tools are there to show this as proof, at least something more than killing a service and seeing a number drop. A blog post like this should leave the reader with a full understanding of how and why. There may have been a story in the edge case -- but we don't know, because no real debugging efforts were reported in this blog post -- asking on Twitter does not count.
> Hm...how do you propose kubernetes / kubernetes users solve these kinds of problems? It could be a fairly common error that’s hard to catch on a system of large number of nodes where you’re not supposed to actively think about the fact that you have nodes. What’s the right tooling / monitoring to have on a system of 20 nodes where one node is basically screwed?
The problem is thinking you can absolve yourself of all system-administration tasks because you use Kube or some other container-based system.
Ideally you would have built health monitoring into your original application, and you would have spotted the issue in your own dashboards long before you exhausted node-level resources.
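As one concrete example of the kind of signal I mean: a process that leaks sockets is also leaking file descriptors, so even a crude fd-count gauge exported to your dashboards would have surfaced this early. A Linux-only sketch (the alerting threshold and how you export it are up to you):

```python
import os

def open_fd_count() -> int:
    """Number of file descriptors this process currently holds open.

    On Linux, /proc/self/fd has one entry per open descriptor, so a
    steadily climbing count is an early warning of a socket/fd leak.
    """
    return len(os.listdir("/proc/self/fd"))

# In a real app you'd export this as a gauge metric and alert when it
# creeps toward the process limit (see `ulimit -n`).
print(open_fd_count())
```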
I will admit I may have been harsh, and am still being harsh, but that is because at face value this looks like an attempt to promote a product via a blog post. Many companies push or even require this sort of thing. As far as quality goes, the blog posts like this that normally reach the front page of HN are chock-full of very useful information and dive deep into the why and how. If I were a blog reviewer, I would say this one left me dry and wanting more details, and the content did not warrant an entire blog post to be trumpeted around. It simply described the mundane day-to-day work a junior developer is expected -- required -- to do. If anything, I am sad that the developer had a golden chance to learn some really neat stuff about Linux and Twitter was his chosen resource :(
Was a long-time Vim user. Heavily customized Vim setup. But got tired of keeping up with the ecosystem, new plugins, etc. Switched to Spacemacs with Vim mode recently (can never forego Vim modes once they're in your muscle memory). Loving the new setup so far.
But Vim, even with a heavily customized setup, is so blazing fast it makes other editors look embarrassingly slow. E.g., opening the first file in Spacemacs takes a good 3-4 seconds.
Keel has permissions to update all deployments in our cluster. Because it runs inside the cluster rather than outside it (like Travis CI), and the only interaction with Keel is through webhooks, there is no need for fine-grained RBAC.
The webhooks only tell Keel that a new image is available, they cannot modify other parts of the Kubernetes deployment directly.
I don't think it's constructive to the discussion here to say X has drawbacks (and then go on about them), use Y instead (and then go on about it).
I have not used Erlang or Rust, and when I read this comment, it seemed flamewar-ish to me.