
> I already addressed that; I'm using rustup as a toolchain manager, not for downloading.

I missed that, but see no problem with that.

> You could alternatively set rustup to download from a internal server that contains a reduced set of preapproved binaries only.

It's better, but less ideal from a system maintainer's point of view than distro packages, because multiple package systems (which is essentially what rustup is in that use) mean more work. It may be the best overall solution depending on how well multiple rust distro packages can be made to play together. Distro packages to provide the binaries and rustup (or something) to just switch between local binaries would be ideal for the system maintainer that wants control. I understand it's less ideal from the perspective of a rust developer (as in someone who works on the rust ecosystem, as opposed to someone who works in the rust ecosystem), because the goals are different.

> If you want a tool that manages toolchains but does not contain code that accesses the internet, that's a more stringent requirement that rustup doesn't satisfy

Less that it can't (though that is a reality in some places), more that it definitely won't, and someone exploring it won't make it do so accidentally. Don't let the new dev accidentally muck up the toolchain.

> You can also create an rpm for your specific two-compiler setup, of course. That's annoying though.

Annoying for those wanting to get new rust versions out to people, and annoying for devs that want the latest and greatest right away, but only slightly annoying, and easily amortized, for those that need to support the environment (ops, devops, sysadmin, whatever you want to call them).

> Set default to stable, and `alias fuzz='rustup run nightly cargo afl'` :)

If I were tasked with making sure we had some testing system in place that ran extensive code tests in QA, or for a CI system, I wouldn't rely on that. It works, it's simple, but when it breaks, the parts I have to look at to figure out why and how are all over the place, and deep.

Did someone change rustup on the system?

Did someone change the nightly that's used on the system?

Did someone muck up the .multirust?

If any of those happened, what was the state of the prior version? What nightly was used, what did the .multirust look like, did a new nightly catch a whole bunch more stuff that we care about, but aren't ready to deal with right now and is causing our CI system problems?

Theoretically I would build a $orgname-rust-afl RPM, and it would have a $orgname-rust-nightly RPM dependency. $orgname-rust-afl would provide a script on the path, called rust-afl-fuzz, which runs against the rust-nightly compiler (directly, without rustup if possible; fewer components to break) to do the fuzzing. RPM installs are logged, the RPM itself fully documents exactly what it is and does, and after doing this once, all devs can easily add the repo to their own dev boxes and get the same setup definitively, and changing the RPM is fairly easy after it's been created. DEB packages shouldn't be much different, and I don't expect other distros to be either.
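
For illustration only, a rough sketch of what such a spec could look like. The package names, version, license, and pinned nightly date here are all hypothetical placeholders, not anything from an actual setup:

    # orgname-rust-afl.spec -- hypothetical sketch, not a real package
    Name:           orgname-rust-afl
    Version:        0.1.0
    Release:        1%{?dist}
    Summary:        Fuzz Rust code with AFL using the pinned nightly compiler
    License:        MIT
    BuildArch:      noarch
    Source0:        rust-afl-fuzz
    # Pin the exact nightly build the fuzzing setup was validated against
    Requires:       orgname-rust-nightly = 2017.06.01

    %description
    Provides rust-afl-fuzz, a wrapper that invokes the orgname-rust-nightly
    compiler directly (no rustup) to build and fuzz a crate with AFL.

    %install
    install -D -m 0755 %{SOURCE0} %{buildroot}%{_bindir}/rust-afl-fuzz

    %files
    %{_bindir}/rust-afl-fuzz

The point is less the spec itself than that the exact compiler dependency and the wrapper script both end up versioned, logged, and documented in one place.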

What did I get out of this? Peace of mind that, almost no matter what happened when my pager went off at 9 PM (or 3 AM!), weird system interactions, automatic updating, stupid config changes, etc. weren't likely the cause of the problem, and if worse came to worst, I could redeploy a physical box using kickstart and our repos within an hour or two. When you have a pager strapped to you for a week or two at a time, that stuff matters a lot.

To achieve this we went so far as to develop a list of packages that yum wasn't allowed to automatically update (any service that box was responsible for) while everything else auto-updated. Pending updates for those packages were automatically reported to a mailing list so someone could go handle them manually: remove one of the redundant servers from the load balancer at a time, update and restart the service (if not the server), re-join it to the load balancer, then move on to the next server, for zero-downtime updates.

The yum stuff was handled through a yum-autoupdate postrun script (yum-autoupdate being a CentOS-specific package that provides a cron job script). yum-autoupdate didn't support postrun scripts at the time, so we added that as a feature, made our own RPM to supersede CentOS's version of the package, created another RPM for the actual postrun script to be dropped in place, and added both to our default install script (kickstart). We were able to drop our version of yum-autoupdate when CentOS accepted our patch.
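
The gist of that postrun check was roughly the following; this is a sketch with hypothetical package names and mailing-list address, not the script we actually shipped, and the hold list would mirror whatever yum was configured to exclude from auto-updates:

    #!/bin/sh
    # Hypothetical postrun sketch: report pending updates for held packages.
    HELD_PKGS="httpd mariadb-server orgname-rust-stable"
    REPORT_ADDR="ops-updates@example.org"

    # `yum check-update` exits 100 when updates are available for the
    # named packages, 0 when there are none.
    yum -q check-update $HELD_PKGS > /tmp/held-updates.txt
    if [ $? -eq 100 ]; then
        mail -s "Held packages with pending updates on $(hostname)" \
            "$REPORT_ADDR" < /tmp/held-updates.txt
    fi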

All that's really just a long-winded story to illustrate that sysadmins like their sleep. If your tool reduces the perceived reliability of our systems, expect some pushback. If your tooling works well with our engineering practices, or at least can be adapted to them easily enough, expect us to fight for you. :)

Rustup is great, but when I was at this job, I would have had little-to-no use for it (besides maybe looking at how it works to figure out how to make an RPM, if one didn't exist to use or use as a base for our own). I know, because that's the situation perlbrew was in with us.

> Again, this is for tooling, you can easily paper over the fact that the tool uses a different compiler.

Sure, but in this scenario papering over is less important than easily discoverable and extremely stable.




> Distro packages to provide the binaries and rustup (or something) to just switch between local binaries would be ideal for the system maintainer that wants control

I think in this case update-alternatives or some such would be better? Not sure if it can be made to work with a complicated setup that includes cross toolchains. But I agree, in essence.
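
For the simple single-compiler case, something along these lines; paths and priorities are made up for illustration:

    # Hypothetical: register two distro-packaged toolchains and switch
    # between them with update-alternatives.
    update-alternatives --install /usr/bin/rustc rustc /opt/rust-1.17/bin/rustc 20
    update-alternatives --install /usr/bin/rustc rustc /opt/rust-nightly/bin/rustc 10
    update-alternatives --config rustc

Cross toolchains would probably need either --slave links for the target-specific pieces or separate alternative groups, which is where it would start getting awkward.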

But anyway the local rustup repo thing was just an alternate solution, I prefer distributing the .multirust directory.

> Don't let the new dev accidentally muck up the toolchain.

> ...

> If I were tasked with making sure we had some testing system in place that ran extensive code tests in QA, or for a CI system, I wouldn't rely on that. It works, it's simple, but when it breaks, the parts I have to look at to figure out why and how are all over the place, and deep.

Abstracting over rustup fixes this too. Keep .multirust sudo-controlled and readonly, don't make rustup directly usable, and just allow the tooling aliases.
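
A rough sketch of what that could look like, assuming a shared root-owned RUSTUP_HOME; the paths, toolchain choices, and alias names are illustrative only:

    # Hypothetical sketch: toolchains live in a root-owned, read-only dir and
    # users only ever see wrapper aliases, never rustup itself.
    sudo RUSTUP_HOME=/opt/rust rustup toolchain install stable
    sudo RUSTUP_HOME=/opt/rust rustup toolchain install nightly
    sudo chmod -R a+rX,go-w /opt/rust

    # e.g. in /etc/profile.d/rust.sh
    alias rustc='RUSTUP_HOME=/opt/rust rustup run stable rustc'
    alias cargo='RUSTUP_HOME=/opt/rust rustup run stable cargo'
    alias fuzz='RUSTUP_HOME=/opt/rust rustup run nightly cargo afl'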

> directly, without rustup if possible. less components to break

yeah, this is possible. It's pretty trivial to integrate afl-rs directly into a compiler build, perhaps as an option. You can then just build the stable branch of rust (so that you get backported patches) and use it.

> Rustup is great, but when I was at this job, I would have had little-to-no use for it

Right, as you have said there are other options out there to handle this issue :) Which is good enough.

When you do care about reproducibility but don't want to repackage Rust, rustup with a shared readonly multirust dir can be good enough. Otherwise, if you're willing to repackage rust, that works too :)


Sure, and to be clear, rustup works perfectly fine for my current needs. I just play around with it a bit when I have time, and even if I were to use rust in production, I would use rustup in my current environment (where the dev team consists of me, myself, and I ;) ). Almost all the benefits of controlling the packaging go right out the window when there are very few devs involved and they aren't expected to change.



