
I’m always a bit shocked how seriously people take concerns over the install script for a binary executable they’re already intending to trust.

Between you and that binary are a bunch of other hops. Blindly trusting dependencies is one part of why npm is burning down at the moment.

Why trust unsigned files hosted on a single source of truth? It isn't the 90s anymore.


What’s your alternative?

A mirrored package manager, where signature and executable are always grabbed from different sources.

Like apt, dnf, and others.


Pretty sure my apt sources have the signing key and the packages pointing to the same place

If you have more than a single source, then apt will already be checking this for you.

The default is more than a single source.


All of mine point to like somethingsomething.ubuntu.com

> I’m always a bit shocked how seriously people take concerns over the install script for a binary executable they’re already intending to trust.

The issue is provenance. Where is the script getting the binary from? Who built that binary? How do we know that binary wasn't tampered with? I'll lay odds the install script isn't doing any kind of GPG/PGP signature check. It's probably not even doing a checksum check.

I'm prepared to trust an executable built by certain organisations and persons, provided I can trace a chain of trust from what I get back to them.


This really isn’t true. All of my git knowledge—except for the CLI flags—is directly useful for jj.

The jj CLI is very easy to grok, even for a seasoned git user. Maybe even especially so for a seasoned git user.


If jj took many weeks of relearning, I might be right there with you. But the overwhelming majority of people I’ve personally seen who try the switch convert within a day, are barely slowed down by day two, and are effectively fluent within three days to a week at most.

> jj is harder to adopt for people with a thorough mental model of git

No, it really isn’t. I have used git since shortly after it was first released and I’ve written a git implementation.

I switched to jj in one day. And the amount of git arcana I have to keep in working memory is now basically nil. My VCS now works in almost a 1:1 mapping with how my brain wants to interact with my repo rather than having to go through a translation layer.

If you understand what git commands are doing, what jj does is essentially trivial to add to your mental model.

I also get the benefit of being able to use workflows that I always want to use in git but which are an enormous pain in practice. And I get access to wildly powerful new workflows I didn’t even consider because they would be outlandish in git.


Good article, but one (very minor) nit I have is with the PizzaOrder example.

    struct PizzaOrder {
        size: PizzaSize,
        toppings: Vec<Topping>,
        crust_type: CrustType,
        ordered_at: SystemTime,
    }
The problem they want to address is partial equality: you want to compare orders while ignoring the ordered_at timestamp. To me, the problem is throwing too many unrelated concerns into one struct. Ideally, instead of using destructuring to compare only the specific fields you care about, you'd decompose this into two structs:

    #[derive(PartialEq, Eq)]
    struct PizzaDetails {
        size: PizzaSize,
        toppings: Vec<Topping>,
        crust_type: CrustType,
        … // additional fields
    }

    #[derive(Eq)]
    struct PizzaOrder {
        details: PizzaDetails,
        ordered_at: SystemTime,
    }

    impl PartialEq for PizzaOrder {
        fn eq(&self, rhs: &Self) -> bool { 
            self.details == rhs.details
        }
    }
I get that this is a toy example meant to illustrate the point; there are certainly more complex cases where there's no clean boundary to split your struct across. But this should be the first tool you reach for.

You have a good point there, that is better. But it is still, well, honestly, wrong. Two orders ordered at different times are just not the same order, and using a typeclass approach to say that they most definitely are is going to come back to bite you.

PartialEq and Eq for PizzaDetails is good. If there is a business function that computes whether or not someone orders the same thing, then that should start by projecting the details.


Yeah, I immediately twitched when I saw the PartialEq implementation. Somebody is going to write code which finds the "correct" order and ends up allowing someone to order the same pizza but get yours, while you have to wait for it to be made and cooked again.

It's not difficult to write the predicate same_details_as() and then it's obvious to reviewers if that's what we meant and discourages weird ad-hoc code which might stop working when the PizzaDetails is redefined.
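A minimal sketch of such a predicate, with hypothetical stand-in fields (the article's PizzaSize/Topping/CrustType types are replaced by primitives here to keep the example self-contained):

```rust
// Sketch: an explicit, named comparison instead of PartialEq on orders.
#[derive(PartialEq, Eq)]
struct PizzaDetails {
    size: u8,             // stand-in for PizzaSize
    toppings: Vec<String>, // stand-in for Vec<Topping>
    crust: String,        // stand-in for CrustType
}

struct PizzaOrder {
    details: PizzaDetails,
    ordered_at_secs: u64, // stand-in for SystemTime
}

impl PizzaOrder {
    /// Same pizza, not the same order: reviewers can see exactly
    /// which equality relation the call site meant.
    fn same_details_as(&self, other: &Self) -> bool {
        self.details == other.details
    }
}

fn main() {
    let a = PizzaOrder {
        details: PizzaDetails { size: 12, toppings: vec!["olive".into()], crust: "thin".into() },
        ordered_at_secs: 100,
    };
    let b = PizzaOrder {
        details: PizzaDetails { size: 12, toppings: vec!["olive".into()], crust: "thin".into() },
        ordered_at_secs: 200,
    };
    // Same pizza ordered at different times: details match, orders differ.
    assert!(a.same_details_as(&b));
    println!("same details: {}", a.same_details_as(&b));
}
```

Because PizzaOrder itself implements no PartialEq at all, any future "deduplication" code that tries `a == b` simply fails to compile, forcing the author to pick the named predicate deliberately.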


I do agree that implementing PartialEq on orders in this way is a bad fit. But it is a synthetic example to make a point, so I tried to keep it in the spirit of the original article (while ironically picking nits in the same vein myself).

> But it is still, well honestly, wrong. Two orders ordered at different times are just not the same order

I probably don't have enough context but whatever identity makes up "your order" goes in the PizzaOrder and not the PizzaDetails. The delivery address, for example, goes in the PizzaOrder.


You can solve this in the general case by implementing the typeclass for the coarser equality relation over an ad-hoc wrapper newtype.
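A sketch of that newtype approach, with hypothetical simplified types: the coarser equality relation lives on a wrapper, so callers opt into it explicitly rather than getting it silently through PizzaOrder's own PartialEq.

```rust
// Sketch: equality "by details only" is opt-in via an ad-hoc newtype.
#[derive(PartialEq, Eq)]
struct PizzaDetails {
    size: u8,      // stand-in fields
    crust: String,
}

struct PizzaOrder {
    details: PizzaDetails,
    ordered_at_secs: u64, // deliberately excluded from the coarser relation
}

/// Wrapper newtype carrying the coarser equality relation.
struct ByDetails<'a>(&'a PizzaOrder);

impl PartialEq for ByDetails<'_> {
    fn eq(&self, other: &Self) -> bool {
        self.0.details == other.0.details
    }
}
impl Eq for ByDetails<'_> {}

fn main() {
    let a = PizzaOrder { details: PizzaDetails { size: 12, crust: "thin".into() }, ordered_at_secs: 1 };
    let b = PizzaOrder { details: PizzaDetails { size: 12, crust: "thin".into() }, ordered_at_secs: 2 };
    // Orders only compare equal under the explicit wrapper.
    assert!(ByDetails(&a) == ByDetails(&b));
    println!("{}", ByDetails(&a) == ByDetails(&b));
}
```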

Well it isn't a good call. This is the kind of code that OOP makes people write.

While better, a person modifying PizzaDetails might or might not expect the change to affect the downstream pizza deduplication logic (wherever it got sprinkled throughout the code). They might not even know that it exists.

Ideally, imho, a struct is a dumb data holder - it is there to pass associated pieces of data together (or hold a complex unavoidable state change hidden from the user like Arc or Mutex).

All that is to say that adding a field to an existing struct, and possibly populating it sparsely in some remote piece of code, should not change existing behavior.

I wonder whether there's a way to communicate to whoever makes changes to the pizza details struct that it might have unintended consequences down the line.

Should one wrap PizzaDetails with PizzaComparator? Or better yet, provide it as a field in PizzaOrder? Or are we running into Java-esque territory of PizzaComparatorBuilderDefaultsConstructorFactory?

Should we introduce a domain-specific PizzaFlavor right under PizzaDetails that copies over the relevant fields from PizzaDetails, so that PizzaOrder compares two orders by constructing and comparing their flavors instead? A lot of boilerplate, but what is considered important to the pizza flavor is explicitly marked.
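That flavor-projection idea might look something like this (a sketch with hypothetical simplified fields; the real struct would copy more):

```rust
// Sketch: project orders to an explicit "flavor" value and compare those,
// so the fields that matter to equality are deliberately marked.
#[derive(PartialEq, Eq)]
struct PizzaFlavor {
    size: u8,      // hypothetical stand-in fields
    crust: String,
}

struct PizzaOrder {
    size: u8,
    crust: String,
    ordered_at_secs: u64, // intentionally not part of the flavor
}

impl PizzaOrder {
    /// Explicit projection: copies only the flavor-relevant fields.
    /// Adding a field to PizzaOrder does NOT silently change this.
    fn flavor(&self) -> PizzaFlavor {
        PizzaFlavor { size: self.size, crust: self.crust.clone() }
    }
}

fn main() {
    let a = PizzaOrder { size: 12, crust: "thin".into(), ordered_at_secs: 1 };
    let b = PizzaOrder { size: 12, crust: "thin".into(), ordered_at_secs: 2 };
    assert!(a.flavor() == b.flavor());
    println!("{}", a.flavor() == b.flavor());
}
```

The boilerplate is the point: a new PizzaOrder field only enters the comparison if someone consciously adds it to both PizzaFlavor and the projection.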

In a prod codebase I'd annotate this code with an "if you change X, change Y" pre-submit hook - this constraint appears to be external to the language itself and lives in the domain of "code changes over time". Protobufs successfully folded versioning into the language itself, though. Protobufs also have field annotations; an "{important_to_flavour=true}" field annotation would be useful here.


Decomposing things just to have different equality notions doesn't generalize.

How would you decompose a character string so that you could have a case-insensitive versus sensitive comparison?

:)


Right, I did note that this decomposition isn’t always applicable. But it often is, and you should default to that when possible.

> How would you decompose a character string

With a capitalization bit mask of course!

And you can speed up full equality comparisons with a quick cap equality check first.

(That is the how. The when is probably "never". :)


Don't forget to store the locale used for capitalization, too.

I am quite certain that someone who has been on HN as long as you have is capable of understanding the difference between 0% compiler-enforced memory safety in a language with very weak type-safety guarantees, and strong type-safety guarantees over 95%+ of code regions even in the worst case of low-level driver code that performs DMA.

Please explain the differences in typical aliasing rules between C and Rust. And please explain posts like

https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/

https://news.ycombinator.com/item?id=41947921

https://lucumr.pocoo.org/2022/1/30/unsafe-rust/


The first two are the same article; it points out that certain structures can be very hard to write in Rust, with linked lists being a famous example. The point stands, but I would say the tradeoff is worth it (the author also mentions at the end that they still think Rust is great).

The third link is absolutely nuts. Why would you want to initialize a struct like that in Rust? It's like saying a functional programming language is hard because you can't do goto. The author sets themselves a challenge to do something that absolutely goes against how Rust works, and then complains about how hard it is.

If you want to do it to interface with non-Rust code, writing a C-style string to some memory is easier.


You phrase that as if 0-5% of a program being harder to write disqualifies all the benefits of isolating memory safety bugs to that 0-5%. It doesn't.

And it can easily be more than 5%, since some projects have lots of large unsafe blocks, and the presence of an unsafe block can require validating much more than the block itself.

And even your argument taken at face value is poor, since if it is much harder, and it is some of the most critical code and already-hard code, like some complex algorithm, it could by itself be worse overall. And Rust specifically has developers use unsafe for some algorithm implementations, for flexibility and performance.


> since if it is much harder, and it is some of the most critical code and already-hard code, like some complex algorithm, it could by itself be worse overall.

(Emphasis added)

But is it worse overall?

It's easy to speculate that some hypothetical scenario could be true. Of course, such speculation on its own provides no reason for anyone to believe it is true. Are you able to provide evidence to back up your speculation?


Even embedded kernels can and regularly do have < 5% unsafe code.

Are three random people saying unsafe Rust is hard supposed to make us forget about C's legendary problems with UB, null pointers, memory-management bugs, and a staggering number of CVEs?

You have zero sense of perspective. Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it), we're talking about a tiny fraction of the overall code of Rust programs in the wild. You have to pay careful attention to C's issues on virtually every single line of code.

With all due respect this may be the singular dumbest argument I’ve ever had the displeasure of participating in on Hacker News.


> Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it)

I think there's a very strong dependence on exactly what kind of unsafe code you're dealing with. On one hand, you have relatively straightforward stuff like get_unchecked or calls into simpler FFI functions. On the other hand, you have stuff like exposing safe, ergonomic, and sound APIs for self-referential structures, which is definitely an area of active experimentation.

Of course, in this context all that is basically a nitpick; nothing about your comment hinges on the parenthetical.


[flagged]


> Should one compare Rust with C or Rust with C++?

Well, you're the one asking for a comparison with C, and this subthread is generally comparing against C, so you tell us.

> Modern C++ provides a lot of features that makes this topic easier, also when programs scale up in size, similar to Rust. Yet without requirements like no universal aliasing. And that despite all the issues of C++.

Well yes, the latter is the tradeoff for the former. Nothing surprising there.

Unfortunately even modern C++ doesn't have good solutions for the hardest problems Rust tackles (yet?), but some improvement is certainly more welcome than no improvement.

> Which is wrong

Is it? Would you be able to show evidence to prove such a claim?


Hedge funds’ goals are often not to maximize profit, but to provide returns uncorrelated with the rest of some benchmark market. This is useful for the wealthy as it means you can better survive market crashes.

I am frankly astonished at the number of otherwise-intelligent people who actually seem to believe in this stuff.

One of the worst possible things to do in a competitive market is to trade by some publicly-available formulaic strategy. It’s like announcing your rock-paper-scissors move to your opponent in advance.


Technical analysis is a basket of heuristics. Support / resistance / breakout (especially around whole numbers) seems to reflect persistent behavior rooted in human psychology. Look at the heavy buying at the $30 mark here, putting a floor under silver: https://finviz.com/futures_charts.ashx?p=d&t=SI This is a common pattern that can be useful to know.

Technical Analysis isn't a system of predicting the future. It's an analysis of what has happened in the past. You are fundamentally misunderstanding what you're talking about.

It's just a system of interpreting money flows and trends on a graph.


A couple of subtleties in that. Rather than rock-paper-scissors with three options, there are hundreds of technical strategies out there, so you may still be doing something unusual. Secondly, the mass of the public is kind of following a technical strategy of just buying index funds, because the index has gone up in the past. Which ignores the fundamental issue of whether stocks are decent value for the money at the moment.

Are you with a straight face saying that occasionally having a safety bug in limited unsafe areas of Rust is functionally the same as having written the entire program in an unsafe language like C?

One, the dollar cost is not the same. The baseline floor of quality will be higher for a Rust program vs. a C program given equal development effort.

Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrow checker, sum types, data-race prevention), except in specifically delineated areas which often total zero in the vast majority of Rust programs.


> The baseline floor of quality will be higher for a Rust program vs. a C program given equal development effort.

Hmm, according to whom, exactly?

> Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrow checker, sum types, data-race prevention), except in specifically delineated areas which often total zero in the vast majority of Rust programs.

And yet somehow the internet went down because of a program written in rust that didn’t validate input.


> Hmm, according to whom, exactly?

Well, Google for one. https://security.googleblog.com/2025/11/rust-in-android-move...

> And yet somehow the internet went down because of a program written in rust that didn’t validate input.

You're ignoring other factors (it wasn't just Cloudflare's rust code that led to the issue), but even setting that aside your framing is not accurate. The rust program went down because the programmer made a choice that, given invalid input, it should crash. This could happen in every language ever made. It has nothing to do with rust.


Google's Android teams also categorize old C code as C++, and mix gotos into their modern C++ code.

> This could happen in every language ever made. It has nothing to do with rust.

Except it does. This also has to do with culture. In Rust, I get the impression that one can set it up as roughly two communities.

The first does not consider safety, security and correctness to be the responsibility of the language, instead they consider it their own responsibility. They merely appreciate it when the language helps with all that, and take precautions when the language hinders that. They try to be honest with themselves.

The second community is careless, might make various unfounded claims and actions that sometimes border on cultish and gang mob behavior and beliefs, and can for instance spew unwrap() all over codebases even when not appropriate for that kind of project, or claim that a Rust project is memory safe even when unsafe Rust is used all over the place with lots of basic bugs and UB-inducing bugs in it.

The second community is surprisingly large, and is severely detrimental to security, safety and correctness.


Again, this has nothing to do with the point at hand, which is that "in any language, a developer can choose to crash the program if an unrecoverable state happens". That's it.

Tell me about how these supposed magical groups have anything at all to do with language features. What language can magically conjure triple the memory from thin air because the upstream query returned 200+ entries instead of the 60-ish you're required to support?


I don't think you're actually disagreeing with the person you're responding to here. Even if you take your grouping as factual, there's nothing that limits said grouping to Rust programmers. Or in other words:

> This could happen in every language ever made. It has nothing to do with rust.


> And yet somehow the internet went down because of a program written in rust that didn’t validate input.

What? The Cloudflare bug was from a broken system configuration that eventually cascaded into (among other things) a Rust program with hardcoded limits that crashed loudly. In no way did that Rust program bring down the internet; it was the canary, not the gas leak. Anybody trying to blame Rust for that event has no idea what they're talking about.


> And yet somehow the internet went down because of a program written in rust that didn’t validate input.

Tell me which magic language creates programs free of errors? Would it have been better had it crashed and compromised memory integrity, instead of an orderly panic due to an invariant the coder didn't anticipate? Type systems and memory safety are nice and highly valuable, but we all know as computer scientists that we have yet to solve for logic errors.


> And yet somehow the internet went down because of a program written in rust that didn’t validate input.

No, it _did validate_ the input, and since that was invalid it resulted in an error.

People can yap about that unwrap all they want, but if the code just returned an error to the caller with `?` it would have resulted in a HTTP 500 error anyway.
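A minimal sketch of that point, with a hypothetical handler and a made-up limit (nothing here is Cloudflare's actual code): propagating the error with `?` and panicking via unwrap() both fail the request once the invariant is violated; only the failure mode differs.

```rust
// Sketch: exceeding a hardcoded limit fails the request either way.
#[derive(Debug)]
struct TooManyFeatures;

fn load_features(n: usize) -> Result<Vec<u32>, TooManyFeatures> {
    const LIMIT: usize = 60; // hypothetical hardcoded limit
    if n > LIMIT {
        return Err(TooManyFeatures);
    }
    Ok(vec![0; n])
}

// Propagating with `?` turns the bad input into an error response
// (e.g. an HTTP 500 at the edge) instead of a panic...
fn handle_request(n: usize) -> Result<usize, TooManyFeatures> {
    let features = load_features(n)?;
    Ok(features.len())
}

fn main() {
    assert!(handle_request(50).is_ok());
    // ...whereas unwrap() would panic at the same point:
    // load_features(200).unwrap(); // would panic here
    assert!(handle_request(200).is_err());
    println!("oversized input still fails the request");
}
```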


The default approach is to use a container that enforces synchronization. If you need manual control, you are able to do that, you just need to explicitly opt into the responsibility that comes with it.

If you use unsafe to opt out of guarantees that the compiler provides against data races, it’s no different than doing the exact same thing in a language that doesn’t protect against data races.
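As a sketch of that default, using only standard-library types: shared mutable state goes behind a Mutex, and the compiler simply won't let threads touch the inner value without taking the lock.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Default approach: the container enforces synchronization.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Access requires taking the lock; unsynchronized access
                // to the inner value doesn't compile in safe Rust.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // All four increments are visible; no data race is possible here.
    assert_eq!(*counter.lock().unwrap(), 4);
    println!("count = {}", *counter.lock().unwrap());
}
```

Getting manual control back means reaching for unsafe (raw pointers, UnsafeCell, and so on), at which point you have explicitly accepted the same responsibility a C programmer carries on every line.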

