I 100% agree that AI data centers are bad for people.
In my opinion, compute-oriented data centers are a good product though. Offering some GPU services can be fine, but honestly I'll tell you what I think happened (similar to another comment I wrote): AI gave these data center companies tons of money (or they borrowed it), then they bought GPUs from Nvidia and became GPU-centric (and AI-centric) to jump even harder on the hype.
Those are the bad ones. The core offering of a data center, to me, should be a normal form of compute (CPU, RAM, storage; the yabs performance of the whole server, say) and not "just what GPU does it have".
Offering some GPU on the side is perfectly reasonable where workloads might need it, but overall, compute-oriented data centers seem nice.
Hetzner is a fan favourite now (which I deeply respect), and I feel like their model is pretty understandable. They offer GPUs too, iirc, but you can just tell from their website that they love compute.
Honestly, the same is true for most independent cloud providers. The only places where we see complete saturation of AI-centric data centers are probably the American trifecta (Google, Azure, and Amazon) and, of course, Nvidia, Oracle, etc.
Compute-oriented small-to-indie data centers/racks are definitely pleasant, although that market has raced to the bottom. Let's be really honest: the real incentive for building software shows up when VSCode forks make billions, so people (techies at least) usually question this path, and non-techies usually just don't know how to sell or compete in online marketplaces.
What you're monitoring is "Did my system request a renewed cert?", but what most of your customers actually care about is "Did our HTTPS endpoint use an in-date certificate?"
For example, say you've got an internal test endpoint, two US endpoints, and a rest-of-world endpoint, physically located in four places. Maybe your renewal process kicks in with a month left, but the code that replaces working certificates in a running instance is bugged. So maybe on Monday the renewal happens and your "CT log monitor" approach goes green, but no endpoint actually serves the new cert.
On Wednesday engineers ship a new test release to the test endpoint; the restart picks up the renewed cert, so for them everything seems great. Then on Friday afternoon a weird glitch hits some US customers; restarting both US servers seems to fix it, and now US customers also see a renewed cert. But a month later the Asian customers complain that everything is broken, because their endpoint is still using the old certificate.
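The fix for this class of problem is to probe what each endpoint actually serves. A minimal sketch using only the Python standard library (the function names here are my own, not from any particular monitoring tool):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(cert: dict) -> float:
    """Days left on the 'notAfter' date of a getpeercert()-style dict."""
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - datetime.now(timezone.utc).timestamp()) / 86400

def check_endpoint(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Connect to the live endpoint and report days left on the cert it
    is actually serving, regardless of what the renewal system thinks."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_until_expiry(tls.getpeercert())
```

Run `check_endpoint` against every public endpoint (all four in the example above), not just one, and alert when the result drops below your renewal window; that catches the "renewed but never deployed" case.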
> Did our HTTPS endpoint use an in-date certificate?
For any non-trivial organization, you want to know when client certificates expire too.
In my experience, the easiest way is to export anything that remotely looks like a certificate to the monitoring system and let people exclude the false positives. Of course, that requires you to have a monitoring system in the first place, which is no longer a given.
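A toy version of that "flag anything that remotely looks like a certificate" pass might look like this (hypothetical helper, deliberately over-broad by design; the exclusion of false positives happens later, in the monitoring system):

```python
import re
from pathlib import Path

PEM_CERT = re.compile(rb"-----BEGIN CERTIFICATE-----")

def find_candidate_certs(root: str) -> list[Path]:
    """Walk a directory tree and flag every file containing a PEM
    certificate header. Over-matching is intentional: it is cheaper to
    exclude false positives than to miss a real cert."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            # Only sniff the first 64 KiB; certs sit near the top of bundles.
            head = path.read_bytes()[:65536]
        except OSError:
            continue  # unreadable files are skipped, not fatal
        if PEM_CERT.search(head):
            hits.append(path)
    return hits
```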
So, I've worked for both startups and large entities, including an international corporation and a major university, and in all that time I've worked with exactly one system that used client TLS certificates. Those certs mostly weren't from the Web PKI (and so none of these technologies are relevant; Let's Encrypt, for example, has announced, and maybe by now implemented, a decision to explicitly stop issuing client certs), and they were handled by a handful of people who I'd say were... not experts.
It's true that you could use client certs with, say, Entra ID, and one day maybe I'll work somewhere that does. Or maybe I won't: I'm an old man, and "we should use client certs" is an ambition I've heard from management several times but never seen enacted, so the renaming of Azure AD to Entra ID doesn't seem likely to change that.
Once you're not using the Web PKI, cert expiry lifetimes are much more purpose-specific. It might well make sense for your Entra ID apps to have ten-year certs, because if you need to kill a cert you can explicitly do that; it's not a vast global system where only expiry is realistically useful. And if you're minting your own ten-year certs, expiry alerting is a very small part of your risk profile.
Client certificates aren't as esoteric as you think. They're not always used for web authentication, but many enterprises use them for WiFi/LAN authentication (EAP-TLS) and for securing confidential APIs. Shops that run Kubernetes use mTLS to secure pod-to-pod traffic, etc. I've also seen them used for VPN authentication.
Huh. I have worked with Kubernetes so I guess it's possible that's a second place with client certs and I never noticed.
The big employers I worked for didn't use EAP-TLS with client certs. The university of course has Eduroam (for WiFi), and I guess in principle you could use client certs with Eduroam, but that sounds like extra work with few benefits, and I've never seen it from either the implementation side or the user side, even though I've worked on or observed numerous Eduroam installs.
I checked the install advice for my language (it might differ for others), and there's no sign that Eduroam thinks client certificates would be a good idea. Server certs are necessary to make the system work, and there's plenty of guidance on how best to obtain and renew those certificates (e.g. does the Web PKI make sense for Eduroam, or should you just busk it?), but nothing about client certificates that I could see.
I can't comment on Eduroam, as I have no experience in the Edu space, but in general EAP-TLS is considered the gold standard for WiFi/LAN authentication. Alternatives like EAP-TTLS and PEAP-MSCHAPv2 are all flawed in one way or another and rely on username/password auth, which is a weaker form of authentication than asymmetric cryptography (mTLS): passwords can be shared and phished, and if you're not properly enforcing server cert validation you'll be susceptible to evil-twin attacks, etc.
Of course, implementing EAP-TLS usually requires a robust way of distributing client certificates to clients. If all your devices are managed, this is often done using the SCEP protocol. The CA can be AD CS, your NAC solution, or a cloud PKI solution like SecureW2.
Yeah, I don't think EAP-TLS with client certs would work out well for Eduroam applications. You have a very large number of end users, they're only barely under your authority (students, not staff) and they have a wide variety of devices, also not under your control.
But even in enterprise corporate settings I never saw this, though I'm sure some people do it. It sounds like potentially a good idea, and of course it can have excellent security properties. However, one of the major downsides IMHO is that people wind up with the weakest link being a poorly secured SCEP endpoint. Bad guys could never hope to break the cryptography needed to forge credentials, but they could trivially tailgate a call-center worker and get real credentials which work fine, so, who cares.
Maybe that's actually enough. Threat models where adversaries are willing to physically travel to your location (or activate a local asset) might be out of your league anyway. But it feels to me as if that's the wrong way to look at it.
Our environment is airgapped, and the certs are usually wildcards with multiple SANs. You would think the SANs alone would tell you which host has a cert, but it can be difficult to find all the hosts, even internal ones, that use TLS.
While you are right, security is generally not cheap.
You can get that $5 Chinese FIDO key, but are you sure it's you who owns it?
I was recently looking for a security key, and eventually I did pay the yubico tax, because saving $20 by getting another one seemed unwise given the stakes.
> You can get that $5 Chinese FIDO key, but are you sure it's you who owns it?
Seems like a moot point, because it'd be very difficult for a rogue FIDO key to exfiltrate data. I'd be far more concerned about random Chinese IoT gadgets, which most people don't have a problem with.
Hmm, yes, but it's possible to compromise private key generation so that it only produces a very small, predictable subset of keys. In fact, some smartcards from Infineon suffered from exactly this as a bug (the ROCA vulnerability), and those keys could be brute-forced. It takes serious crypto chops to determine whether this is the case; obviously it's not something simple like the first 60 bits being zero. And the private key is designed not to be extractable from this kind of device, making it even harder.
It won't be as easy as that because you can generate a private key multiple times and notice it's the same.
However, yes: very limited entropy in the private key is much harder to detect, especially because on this kind of device you can't see the private key directly.
I'm not sure whether this question was asked in good faith, but it is actually a damn good one.
I've looked into self-hosting a git repo with horizontal scalability, and it is indeed very difficult. I don't have time to detail it in a comment here, but for anyone who is curious, it's very informative to look at how GitLab handled this with Gitaly. I've also seen some clever attempts to use object storage, though I haven't seen any of those solutions put heavily to the test.
I'd love to hear from others about ideas and approaches they've heard about or tried
These days, people solve similar problems by wrapping their data in an OCI container image and distributing it through one of the container registries that doesn't have a practically meaningful pull rate limit. Not really a joke, unfortunately.
Even Amazon encourages this, probably not intentionally, more as a band-aid for bad EKS configs that people can create by mistake, but still: you can pull 5 terabytes from ECR for free under their free tier each month.
I'd say it's just that Kubernetes in general should've shipped with a storage engine and an installation mechanism.
It feels like a very hacky add-on that RKE2 has a distributed internal registry, if you enable it and use it in a very specific way.
For how much people love just shipping a Helm chart, it's actually absurdly hard to ship a self-contained installation without hitting internet resources.
Explain to me how you self-host a git repo, with no budget whatsoever, that is accessed millions of times a day by CI jobs pulling packages.
Millions of requests a day works out to only tens of new requests per second, but these are not just light web requests.
The git transport protocol is "smart" in a way that is, in some ways, arguably rather dumb, and it's certainly expensive on the server side. All of its smartness is aimed at reducing the amount of transfer and the number of connections, but to do that it shifts a considerable amount of work onto the server, which has to choose which objects to send you.
If you benchmark the resource loads of this, you probably won't be saying a single server is such an easy win :)
Using the slowest clone method, they measured 8s for a 750 MB repo and 0.45s for a 40 MB repo. That appears to be linear, so roughly 1.1s for 100 MB should be a valid interpolation.
And remember we're using worst case assumptions in places (using the slowest clone method, and numbers from old hardware). In practice I'd bet a fastish laptop would suffice.
edit: actually, on closer look at the GitHub-reported numbers, the interpolation isn't straightforward: on the bigger 750 MB repo the partial clone is actually said to be slower than the baseline full clone. But this doesn't change the big picture: it'll easily fit on one server.
...or a cheaper one, since we'd be using only tens of cores in the above scenario. Or you could use a slice of an existing server via virtualization.
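The back-of-envelope above can be made explicit (helper names are my own; the two data points are the measured ones quoted earlier, from old hardware and the slowest clone method):

```python
def interp_clone_seconds(size_mb: float) -> float:
    # Linear interpolation between the two measured points:
    # 40 MB -> 0.45 s and 750 MB -> 8 s of server-side work per clone.
    x0, y0, x1, y1 = 40.0, 0.45, 750.0, 8.0
    return y0 + (size_mb - x0) * (y1 - y0) / (x1 - x0)

def cores_needed(clones_per_day: float, repo_mb: float) -> float:
    # CPU-seconds of work per wall-clock second = cores kept busy.
    per_second = clones_per_day / 86400
    return per_second * interp_clone_seconds(repo_mb)
```

With these assumptions, a 100 MB repo costs about 1.1 CPU-seconds per clone, and 2 million clones a day (about 23 per second) keeps roughly 25 cores busy: tens of cores, comfortably one server.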
If people depend on remote downloads from other companies for their CI pipelines, they're doing it wrong. Every sensible company sets up a mirror, or at least a cache, on infra they control. Rate-limiting downloads is the natural course of action for the provider of a package registry. Once you have so many unique users that even civilized use of your infrastructure becomes too much, you can probably hire a few people to build something more scalable.
Philosophers have argued for 200 years, ever since the steam engine was invented, that technology is out of our control and always has been, and that we are just the sex organs for the birth of the machine god.
100 local people to maintain the data center, while the AIs running inside it replace 1 million people.