
The film is centered on the idea of establishing an alternative to GDP as the metric for measuring the success of a country/society. It mostly follows Katherine Trebeck on her journey of convincing countries to look beyond GDP. Since it's quite hard to see such independent films, this is a rare occasion to watch it online in a virtual screening. The screening is organized by the Permaculture Institute and is donation-based! Hint: there is another screening one day later in case you miss the one on the 29th.


Might be noteworthy that recent enough k8s versions implement lifecycle.preStop.sleep.seconds (https://github.com/kubernetes/enhancements/blob/master/keps/...), so there is no longer any need to run an external sleep command.
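
For reference, a minimal sketch of what that looks like, assuming a k8s version where the sleep handler is available (pod and image names are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-sleep-demo
    spec:
      containers:
      - name: app
        image: nginx:stable
        lifecycle:
          preStop:
            sleep:          # built-in handler, no sleep binary needed in the image
              seconds: 10
    EOF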


That is only partially true. So you spin up a GKE cluster, set up your deployment and push it out via kubectl. OK, your app is running, but now you need access to it. The portable way is a Service of type LoadBalancer, but that's just a TCP load balancer. So you go for the Ingress API. Then you want to do a little bit more and learn that the Ingress controller on GKE just configures an L7 load balancer at Google for you. Nice, that can do what I want. I want it to run dual-stack IPv4 and IPv6 (my earlier go-to example of those GKE shortcomings was setting a response header, but that was recently added, after only three years). Oh snap: supported by the load balancer, but not by the Ingress controller.

Then you dig deeper and learn that development has already shifted from the Ingress API to the Gateway API. Now you're knee-deep in problems, because what you want to do is not really part of the Ingress or Gateway API, and you're at the mercy of the vendor you chose. Or you run a vendor-neutral Ingress controller, like the classic nginx one; that latter choice means you have to familiarize yourself with the oddities of that component as well. And then you also want something for DNS, Let's Encrypt and so on. Half a dozen controller installations later you finally have something. But now you have to maintain it, because the managed service only covers k8s itself.
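
To make the first two steps concrete, a rough sketch (names are placeholders; the static-IP annotation is the GKE-specific part, which is exactly where portability ends):

    kubectl apply -f - <<'EOF'
    # Portable, but L4 only: a Service of type LoadBalancer.
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      type: LoadBalancer
      selector:
        app: myapp
      ports:
      - port: 80
        targetPort: 8080
    ---
    # L7 via the Ingress API; anything beyond the basics ends up in
    # vendor-specific annotations.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        kubernetes.io/ingress.global-static-ip-name: myapp-v4   # GKE-specific
    spec:
      defaultBackend:
        service:
          name: myapp
          port:
            number: 80
    EOF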

But one should not forget that you also had to build up a lot of vendor-specific know-how in the past. Someone had to configure your F5 BIG-IP, your Juniper router, the Cisco switch and of course the Dell or HPE boxes you bought.

My bigger concern is the immature k8s ecosystem, which is kind of reinventing the classic Unix toolbox for distributed computing. That effort has only just started, so you have to lifecycle components with breaking changes every few weeks. People took issue with updating Ubuntu LTS releases every two years; now they have to update some component every week.


This was refreshing. Now let's talk about logging and metrics!


I don't know about every week... I ignore my k8s setup for 6-12 months at a time. Once in a while DigitalOcean bugs me to upgrade k8s, and that, I admit, has been a bit of a disaster in the past.

I don't know. I had a pretty good thing going prior to k8s too: just some rsync and `ln -sfn`, and it was easy, simple and very fast. But like you said, upgrading Ubuntu, PHP and other services became the problem there; I couldn't do that without downtime.
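
For anyone who never ran that style of deploy, a minimal sketch (host, paths and the service name are placeholders):

    # Sync a new release directory, then point the "current" symlink at it.
    release="releases/$(date +%Y%m%d%H%M%S)"
    ssh deploy@web1 "mkdir -p /srv/app/$release"
    rsync -a build/ "deploy@web1:/srv/app/$release/"
    ssh deploy@web1 \
      "ln -sfn /srv/app/$release /srv/app/current && sudo systemctl reload php-fpm"  # unit name is a placeholder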

Trade-offs.


This has been my experience with kubernetes as well.

Look, I can and have done all these things, but it's just not worth my time to do them for my little apps. I'd rather be talking to customers and shipping features at this point in my career.


Currently dealing with almost exactly this using Citrix LBs and k8s. You can't even really tell what is happening with the Citrix ingress controller when things break. :-/


... and mrsk is imperative compared to the declarative approach of Swarm and k8s. Especially if you go all in on GCP and use GKE + Config Connector + fluxcd or argocd and all the other controllers, it takes time to know and understand how successful your latest change was. In the end, k8s plus controllers is one huge asynchronous reconciliation loop. It may succeed in applying your changes at some point in time, but you have no idea when that starts and when it ends. That often sucks and costs a lot of time, and even more if you have to figure out which change failed, why it failed, and whether what you see is even the final state. Some older dudes with grey hair might remember cfengine and its eventual-consistency approach.
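
At least for individual objects you can block on their status conditions instead of polling by hand; a sketch, assuming a plain Deployment plus a fluxcd Kustomization (names and namespaces are placeholders):

    # Wait for one Deployment to finish rolling out (or time out):
    kubectl -n myapp rollout status deployment/myapp --timeout=5m
    # Flux surfaces the same idea via Ready conditions on its custom resources:
    kubectl -n flux-system wait kustomization/myapp --for=condition=Ready --timeout=5m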


That just made me look at Google Cloud again, and it's depressing to see. At least some types of load balancer do support dual-stack setups, but not if you configure them via GKE with the k8s Ingress controller. If you use that one you're out of luck; maybe they'll now implement it for the new Gateway API controller instead. So if you use GKE and Ingress you can configure two of them: one with a static v4 address and another with a static v6 address. Of course you then pay twice.
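
The reserved addresses behind those two Ingress objects would then look something like this (a sketch; the names are placeholders and get referenced from the kubernetes.io/ingress.global-static-ip-name annotation):

    # Two global static addresses, one per address family:
    gcloud compute addresses create myapp-v4 --global --ip-version=IPV4
    gcloud compute addresses create myapp-v6 --global --ip-version=IPV6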


I would use OpenBSD + unbound to get NAT64 + DNS64. That said, I'd prefer a dual-stack setup with RFC 1918 IPv4 internally plus a NAT44 gateway, and IPv6 "just" on top. Drawback: if you find yourself having to do a lot of firewalling, it essentially doubles your work.
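
For the NAT64 + DNS64 part, a sketch of the two relevant pieces on such an OpenBSD box (the interface name and the gateway's external IPv4 address are placeholders; paths assume the base-system unbound):

    # DNS64 in unbound: synthesize AAAA records for v4-only names.
    cat >> /var/unbound/etc/unbound.conf <<'EOF'
    server:
        module-config: "dns64 validator iterator"
        dns64-prefix: 64:ff9b::/96
    EOF
    rcctl restart unbound

    # NAT64 in pf: translate traffic hitting the well-known prefix to IPv4.
    # em1 = IPv6-facing interface, 192.0.2.1 = the box's own IPv4 address (both placeholders).
    cat >> /etc/pf.conf <<'EOF'
    pass in on em1 inet6 from any to 64:ff9b::/96 af-to inet from 192.0.2.1
    EOF
    sysctl net.inet.ip.forwarding=1 net.inet6.ip6.forwarding=1
    pfctl -f /etc/pf.conf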


20 years ago it was the lack of IPv6 support on the CPE holding IPv6 back on the server side; nowadays it's the lack of IPv6 at major SaaS providers causing issues. In most of the scenarios I was involved in, we made sure that the CDN in front of the product was able to terminate IPv6 and left everything behind it v4-only. About 1/3 to 1/2 of the traffic on those setups arrived via IPv6. Maybe it's time to turn that around and use the CDN to make the product also available via v4? That leaves you with maintaining a NAT gateway for your own infrastructure.

BTW, only one of the office networks I had to deal with in the past 20 years had experimental IPv6 support, and that was at a small local hosting company. Everything bigger than that still sticks to IPv4 only for now. :(

Strange how things change but still stay the same.


Same if you're lead climbing: there is a small chance (if you left a longish tail end) of clipping the wrong part of the rope. Even if it does not directly result in a dangerous fall, you might trap yourself inside a quickdraw.


Ironically, I recently had a payment service provider hand me newly generated ECDSA SSH keys, even though, to the best of my knowledge, Ed25519 should be supported there. And fluxcd moved from RSA to ECDSA with https://github.com/fluxcd/flux2/releases/tag/v0.21.0.

Kinda strange that people are moving to elliptic-curve keys, which is good, but to the curves that have the NIST/NSA smell.
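
For comparison, generating the Ed25519 variant is a one-liner (file name and comment are placeholders):

    # -a bumps the KDF rounds protecting the private key at rest
    ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519_psp -C "access to payment provider"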


I have upgraded all of my ECDSA host keys to the 521-bit curve, which gets some praise from DJB, unlike the 256- and 384-bit curves (ssh-keygen -b #: "For ECDSA keys, the -b flag determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits").

http://blog.cr.yp.to/20140323-ecdsa.html

"To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2^521 - 1; but the sheer size of this prime makes it much slower than NIST P-256."


Another interesting option is using the deb822 sources.list format and inlining the key: https://lists.debian.org/debian-devel/2021/11/msg00026.html

Still a bit ugly, depending on the point of view you take, but a third-party vendor can just tell the user to download one file and store it in /etc/apt/sources.list.d/, which should make the whole thing a bit more frictionless.
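
A sketch of what such a file can look like (URI, suite and the key material are placeholders; inside Signed-By every line of the armored key is prefixed with a single space, and blank lines inside it become a line with just a dot):

    cat > /etc/apt/sources.list.d/example.sources <<'EOF'
    Types: deb
    URIs: https://deb.example.com/apt
    Suites: stable
    Components: main
    Signed-By:
     -----BEGIN PGP PUBLIC KEY BLOCK-----
     .
     mQINBF... (armored key data elided)
     -----END PGP PUBLIC KEY BLOCK-----
    EOF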

