If you want to be super minimal, I prefer acme.sh[1] instead. It comes preconfigured for various DNS providers[2], and you can even create your own hook if there isn't already one[3].
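As a rough sketch of what that looks like (the domain, token value, and paths here are placeholders; `dns_cf` is acme.sh's Cloudflare hook from the dnsapi wiki):

```sh
# Credentials for the DNS provider hook go in the environment once;
# acme.sh saves them to its config for renewals.
export CF_Token="<api-token>"

# Issue via DNS validation (also works for wildcards):
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'

# Install the cert where the server expects it and reload afterwards:
acme.sh --install-cert -d example.com \
  --key-file       /etc/ssl/example.com.key \
  --fullchain-file /etc/ssl/example.com.pem \
  --reloadcmd      "systemctl reload nginx"
```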
If you have over a thousand lines of bash (or any other kind of shell code really), that is a pretty big red flag that you probably shouldn't be using bash IMO.
Well, if you have to use shell that's one thing, but I would be hesitant to use such a large shell script for something as important as certificate issuance in an environment where I didn't have to.
Just as a counterpoint, I've been using acme.sh for ~3 years now and it's been rock solid.
I get your point, and was pretty shocked to find acme.sh, but after the certbot PPA made a giant mess of my system I gave it a try. On the other hand, why should I balk at running thousands of lines of bash, but be fine with thousands (or many more) lines of C, Python, Perl...? You can write crappy or beautiful code in any language...
> If you only have shell on your servers then it is time to start looking for a new job.
Perhaps you wish to have "real" certs on appliances like F5s and Isilons (FreeBSD-based) where you can't install extra stuff, but where curl and openssl (and bash/zsh) are present.
Or perhaps you want to run simple software that you can actually audit. While "over a thousand lines of bash" may take a little while to examine, good luck auditing Zope, which is what certbot pulls in as a dependency:
Here we are talking about a lightweight C executable, though, that doesn't have those dependencies. You are also not limited to provisioning certificates on appliances, and in those cases I don't think a thousand-line bash script offers any more security (probably less) than a full-featured C program.
At my last job I ran an Isilon: I could upload a cert for the HTTP server via the web UI, but there was no ACME client. I could SSH in, drop dehydrated and have it work because all I needed was a shell, curl, and openssl.
Similarly with F5: there is (was?) no native ACME client (at least a few years ago when I first looked at it). So I downloaded dehydrated and used various CLI interfaces to schedule automated runs and importation of the certificates.
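The glue for that kind of setup is dehydrated's hook script: dehydrated invokes it with an event name first, and `deploy_cert` fires after a successful issuance with the domain and file paths as arguments. A minimal sketch (the deploy step itself is hypothetical; on a real appliance it would call the vendor's CLI/API):

```sh
#!/bin/sh
# Minimal dehydrated-style hook sketch. dehydrated calls the hook as:
#   hook.sh <event> <args...>
# and for deploy_cert the args are: domain keyfile certfile fullchain chain ts
hook() {
  case "$1" in
    deploy_cert)
      domain="$2"; fullchain="$5"
      # Hypothetical deploy step: push the files via the appliance CLI/API.
      echo "deploy $fullchain for $domain"
      ;;
    *) : ;;   # ignore the other events (startup_hook, clean_challenge, ...)
  esac
}

hook deploy_cert example.com key.pem cert.pem fullchain.pem chain.pem
```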
There was no pre-compiled binary, and no compilers, on either system, so talking about a "lightweight C executable" is nonsensical. Further, even if we (managed to) compile things off-host, when we did an OS upgrade on either system a whole bunch of libraries would change and we'd have to (remember to) re-compile. There is no such worry with a shell script.
If you want to have ACME-fetched certs on a general computer system, then compiling a C program (large or small) is an option. But there are scenarios where compiling C programs is not an option, and you telling me otherwise when I have personal experience of these situations takes some chutzpah.
I wouldn't be offended. Many people, including me, have personal and work experience in this area as well. No one is saying you're wrong, but even you acknowledge there are other ways to upload certificates, usually involving an API as well. If you want to run unchecked third-party 1000+ line bash scripts on production appliances, by all means go right ahead.
> If you want to run unchecked third-party 1000+ line bash scripts on production appliances, by all means go right ahead.
Again, I have a better chance at reading all the code of dehydrated (which I have, in fact, done) than reading all of the Python code that certbot pulls in via dependencies on Ubuntu/Debian.
If you’re provisioning an immutable VM or a container you don’t want to add unnecessary cruft. The official Let's Encrypt client and its >100 dependencies are a non-starter.
It has limitations, and quoting takes a hot minute to grok, but those don't come into play for a surprising number of medium-to-large projects when used properly.
I use POSIX sh only and get by with maintainability and handling failure modes just fine.
Bash is great for cases where you are gluing together a bunch of other commands. But it also has a lot of pitfalls that something as large and complex as this will absolutely run into (to be fair, so does C).
As a specific example, the ACME protocol requires working with JSON. Doing this in bash is very difficult and error prone, especially if you want to avoid a dependency on something like jq, as this does.
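To illustrate the point (a contrived sketch, not code from any of these clients): the usual hand-rolled approach of extracting a JSON field with sed works on the happy path and silently breaks on perfectly legal JSON.

```sh
#!/bin/sh
# Naive sed extraction works on a flat, well-behaved object:
resp='{"status":"valid","type":"http-01"}'
status=$(printf '%s' "$resp" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$status"    # prints: valid

# But an escaped quote inside the value silently truncates the result
# (a real JSON parser would return: say "hi"):
resp2='{"status":"say \"hi\""}'
bad=$(printf '%s' "$resp2" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$bad"       # prints: say \
```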
On the other hand, if you're a user and not a developer, you know something written in bash will be written to run on any machine out there. Doesn't matter if it's old or new. Whereas if it's written in C++xx or Rust or the like it'll only compile/run on rolling release distros (or for the 3 months after a normal distro is released that it's up to date).
It might only compile on a machine with a recent compiler if it’s aggressive with using new language features, but why would you need to compile a Rust/C++ version from source? A compiled binary will run just fine on old distro versions.
Typically projects will compile the binaries on systems with a much older version of libc (but the latest compiler) specifically to avoid this problem.
If they’re not doing this then I believe they won’t work on older systems even if compiled with old compiler versions.
Also it can be highly locked down: run as its own unprivileged user, with access only to directories served by another webserver for the ACME handshake, storing certs, and a tightly restricted sudoers entry to restart the webserver on cert cycle.
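A hardening sketch along those lines (the user name, paths, and service are placeholders; adapt to your distro):

```sh
# Dedicated unprivileged user with its own home and challenge webroot:
useradd --system --home /var/lib/acme --shell /usr/sbin/nologin acme
install -d -o acme -g acme /var/lib/acme /var/www/acme-challenge

# /etc/sudoers.d/acme -- the one privileged thing the user may do:
#   acme ALL=(root) NOPASSWD: /bin/systemctl reload nginx
```

The webserver serves `/var/www/acme-challenge` under `/.well-known/acme-challenge/`, and the client's post-renewal hook runs only the whitelisted `sudo systemctl reload nginx`.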
I found this project while looking for a way to renew my SSL certificate without having to use certbot which has a lot of dependencies including python. This program is really small and simple and does exactly what I need. It's perfect.
If you like minimal dependencies another one to take a peek at may be acme.sh [1]. It depends on bash, openssl and curl. It seems to work fine in ash as well. It has code to handle most APIs and, most importantly to me, great documentation.
In the same spirit of minimal and lightweight there is also testssl.sh [2] for testing TLS on HTTPS/SMTPS servers, which also depends on bash and openssl.
I'd prefer to use a C, Go, or Rust app at this point. I love shell scripts because shell was one of the first scripting languages I learned, but I'd trust a developer capable of writing C, Go or Rust to do a better job and make something more optimized than what is within the scope of POSIX shell scripting.
To me it is utility stuff. As long as it does the job (and acme.sh does it just fine) and does not require pulling down half of the Internet for dependencies, I would not give a rat's ass about what language has been used to write it.
It is to an extent. I'm not saying acme.sh is bad, just that if there is a tool in Go, Rust or C that does the same thing and is more efficient, then I'm picking something that isn't wrapped in a bunch of shell code. Same with tiny webservers.
Personally, I'm a fan of https://github.com/diafygi/acme-tiny. 200 lines of Python without any additional Python requirements and only the openssl binary as an external dependency.
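Its documented workflow is close to this (paths and the CN are examples; the last step needs the challenge directory actually served by your webserver):

```sh
# Account key and per-domain key/CSR, all plain openssl:
openssl genrsa 4096 > account.key
openssl genrsa 4096 > domain.key
openssl req -new -sha256 -key domain.key -subj "/CN=example.com" > domain.csr

# acme-tiny writes the signed chain to stdout:
python3 acme_tiny.py --account-key ./account.key --csr ./domain.csr \
    --acme-dir /var/www/challenges/ > ./signed_chain.crt
```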
If we're plugging implementations, I tend to use the single-file implementation that ships with 9front. I wrote the first cut, but it's been improved heavily by others:
This! Thanks for also mentioning it. So plain easy and just does what I need!
Thanks to the author for publishing it.
I maintain my own patch, so acme-tiny supports an '--outfile' option (originally it only wrote to stdout). This comes in handy when it is run by a systemd service/timer.
The pull request is on hold, because the code would then exceed the 200-line threshold :shrug:
Yeah, IMHO this is the way to go. Individual web apps managing their own SSL certs is a long-term mess. Only your proxy or HTTP gateway/router should ever touch or know about SSL certs.
Caddy is great for web servers, but it's still not possible to have it run commands after certificate provisioning. So it's kind of a non-starter for anything but web servers, as there is no way to tell a different system to reload certs.
We're working on this! Hoping to have our new event dispatching system ready in the next few months. This'll let you hook into the post-issuance event and do whatever you want afterwards.
I used dehydrated pretty effectively (along with openssl) to renew and sync certs between several layers of proxies/loadbalancers. I ended up creating a nice k8s deployment with CronJob to implement this with ingress-nginx.
It was, I maintain a port here [0]. Release tarballs can be downloaded here [1], I plan to add them to the git tags but did not get to it yet. For example alpine linux ships it in the repositories [2].
I've recently spent some time sandboxing nginx on my web server by running the master process as the user www-data instead of root, adding some seccomp filters through systemd, and writing a custom AppArmor profile[1]. And the hardest part was actually dealing with certbot. Not least because it's written in Python, so adding capabilities to the script itself didn't do a thing: they needed to be added to the Python interpreter executable[2]. The whole thing seemed more complicated than it's worth. I've looked briefly at what configuring an OpenBSD server with OpenBSD httpd + OpenBSD's acme-client would supposedly look like, and it seems so much easier, and the secure configuration is the official one, so I'd need to worry much less about updates breaking my stuff.
So I'm considering moving to OpenBSD for my web server. The only thing giving me pause currently is that I saw no mention of HTTP/2 support, so I suspect there is none. Although maybe I don't need it. We'll see.
[1]: That last part I actually already had, but it needed some tweaking this time, because I moved nginx's PID file to its own runtime directory created by systemd, instead of allowing it to write to /run as root.
[2]: Of course only for testing. I revoked them after verifying that everything works. And now they're only assigned dynamically by systemd when the service is run.
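To illustrate the capabilities point from [2] (a sketch; the capability and paths are examples and the interpreter path varies by distro):

```sh
# File capabilities attach to the binary the kernel executes. For an
# interpreted script the kernel executes the interpreter, so this has
# no effect:
setcap cap_net_bind_service=+ep /usr/bin/certbot      # ignored for a script

# It has to go on the interpreter itself -- which then grants the
# capability to every Python program, part of why this feels wrong:
setcap cap_net_bind_service=+ep /usr/bin/python3
```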
One of the things I don't like about GitHub is that, while they do list languages used, they don't have a section for listing dependencies. Some (OK, many) projects don't list dependencies at all, assume we don't deserve to know how many there are, and expect us to just pipe a URL to a shell to install them all.
We need more projects like this, because many of us would like clean, reproducible environments that won't be in dependency hell every few years when an update to one dependency isn't compatible with updates to others.
It is in Windows: the cert store comes with Windows and you use it for many applications, from code signing, email signing, and server cert signing to authenticating users and the computer to the network/AD.
PKI is the way Windows and the corporate world went. On *nix you have PGP; with GPG it is sort of baked in, I guess, but it is used only in package signing and signing of text content in various contexts.
I believe the "web of trust" is preferred over PKI by *nix folks because they don't want to depend on certificate authorities mediating trust; instead you have key servers that only help in distributing keys. For *nix, the burden of forming trust is on the user. You have to either go to key-signing parties (very scalable, lol) or, I guess, google around, ask in chatrooms, and look up trustworthy sites until you are confident a key is trustworthy? I mean, you can cross-sign and all that, but in reality you have to investigate each key one by one and hope you don't screw up. Or, what happens in most cases, people trust HTTPS/PKI, so key IDs on people's websites, or a command asking you to import a key from their site over HTTPS, are trusted... because PKI/TLS is much more practical. Get the GPG fans to like PKI and maybe Linux will have PKI cert management out of the box. The Linux kernel does support X.509 and PKI certs AFAIK; I don't know if you can make it store root certs and have it verify cert chains for you from userspace. Very possible, but only applications using kTLS might want that. Perhaps a feature request with your favorite browsers and package managers to use kTLS instead of openssl/gnutls is a start?
Cool! I hadn't seen tls-alpn-01 for authentication before.
Instead of using the ualpn daemon to respond to the challenges and proxying all other connections through to nginx, would it be possible to do it solely in nginx?
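Partially, as best I can tell. nginx's stream module can inspect the ClientHello with `ssl_preread` and route on the offered ALPN protocols, but it cannot present the special challenge certificate itself, so something still has to answer the challenge behind it. A sketch (assumes an nginx build with the stream and ssl_preread modules; ports and addresses are placeholders):

```nginx
stream {
    # Send tls-alpn-01 handshakes to the ACME responder (e.g. ualpn),
    # everything else to the real HTTPS listener.
    map $ssl_preread_alpn_protocols $upstream {
        ~\bacme-tls/1\b  127.0.0.1:10443;
        default          127.0.0.1:8443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}
```

This at least inverts the original arrangement: nginx owns port 443 and only challenge traffic is proxied out, rather than ualpn fronting everything.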
Security software written in C, with no unit tests. You cannot run away from this software fast enough. I cannot think of any worse idea than "I wrote my own base64 codec in bare C without tests and without code review". "Minimal dependencies" does not even begin to make up for how bad this idea is. It would be strongly preferable to depend on third-party code that has been reviewed, tested, and implemented in a reasonable language.
You use C and C++ apps daily— many of which probably have poorer automated testing than you’d think. It’s not great, but it isn’t the world-ending catastrophe that you seem to think it is.
He is spot on, you know - this software is a contender for foundational infrastructure. The best foundational infrastructure software is like SQLite: more tests than code, tiny, and runs everywhere.
I know you jest, but if you're in k8s, I'd check out cert-manager (https://cert-manager.io/). It works quite well, and as it integrates w/ k8s, it stores the cert in a Secret where an Ingress picks it up automatically, and it solves the whole multiple-replicas-all-need-the-cert problem that I have w/ certbot.
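For reference, the shape of a cert-manager Certificate resource (v1 API; the names, namespace, and issuer here are placeholders), with the signed cert landing in the named Secret that an Ingress can reference:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls   # Secret the controller writes the cert into
  issuerRef:
    name: letsencrypt-prod      # assumed pre-existing ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
    - example.com
```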
(I agree, while certbot works, it's a bit of a usability nightmare.)
Wish it had a way to kick off a pod after a certificate is issued, that way I could automate applying the issued certificates to the devices where they'll be used.
(Yeah I currently do this with a cron job so it's not _so_ bad)...
Yes. I would appreciate such a feature, too. For Ingresses, it doesn't matter (as the controller is smart & picks it up automatically) but we have a database whose cert requires a rolling restart, and yeah, it'd be good to have a way to trigger that.
(We get around this, normally, by having regular reboots of the nodes for system updates. So no pod lives long enough for a replacement cert to not get picked up in time, usually. In the off case reboots don't happen, we have alerting.)
The thread above has mentioned a very real problem with snap running on OpenVZ. It requires either squashfs as a kernel module or FUSE+squashfuse. Many OpenVZ hosts will not provide either and docker/k8s is obviously out if you are already operating inside a container...
Like, squashfs is great and all and packaged app images are an acceptable way of delivering desktop apps... but there's so much lossage here when certbot indirectly requires both just for an install.
Frankly at this point I'd happily accept it if certbot required docker or k8s over snap.
At least then I know I'll have to run it in a container (and give up then) instead of waste hours figuring out why I get a snap error message when I try to run/install certbot before finally coming to the conclusion it is impossible.
[1] https://github.com/acmesh-official/acme.sh
[2] https://github.com/acmesh-official/acme.sh/wiki/dnsapi
[3] https://github.com/acmesh-official/acme.sh/wiki/DNS-API-Dev-...