Hacker News | arccy's comments

too many words with no conclusion at all

hipsters are more like a permanent rejection of the mainstream, trend makers want the mainstream to follow them

npm is run by github / microsoft now, which also sells security products...

this is like saying just use C and don't write any memory bugs. possible, but life could be a lot better if it weren't so easy to do so.

Great, you’ve just convinced every C programmer to use a hand rolled AES implementation on their next embedded device. Only slightly joking.

If the standard had a clear algorithm -> source code mapping, then couldn't everyone copy from there though?

AES is actually a good example of why this doesn’t work in cryptography. Implementing AES without a timing side channel in C is pretty much impossible. Each architecture requires specific and subtle constructions to ensure it executes in constant time. Newer algorithms are designed to not have this problem (DJB was actually the one who popularized this approach).
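
To make the timing issue concrete, here's a minimal Rust sketch (my illustration, not AES itself): selecting a value with a branch can take a different amount of time depending on a secret bit, while a mask-based select runs the same instructions either way. Constant-time AES implementations build their S-box lookups out of bitwise operations like this (and even then you're trusting the compiler not to reintroduce a branch).

```rust
// Branchy select: timing and branch-predictor state can depend on the
// secret bit, which a co-located attacker may be able to observe.
fn select_branchy(secret_bit: u32, a: u32, b: u32) -> u32 {
    if secret_bit == 1 { a } else { b }
}

// Constant-time select: the same instructions execute regardless of the
// secret. wrapping_neg maps 1 -> 0xFFFF_FFFF and 0 -> 0.
fn select_ct(secret_bit: u32, a: u32, b: u32) -> u32 {
    let mask = secret_bit.wrapping_neg();
    (a & mask) | (b & !mask)
}
```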

Reconcile this claim with, for instance, aes_ct64 in Thomas Pornin's BearSSL?

I'm familiar with Bernstein's argument about AES, but AES is also the most successful cryptography standard ever created.


Okay, I should've said implementing AES in C without a timing sidechannel performantly enough to power TLS for a browser running on a shitty ARMv7 phone is basically impossible. Also if only Thomas Pornin can correctly implement your cipher without assembly, that's not a selling point.

I'm not contesting AES's success or saying it doesn't deserve it. I'm not even saying we should move off it (especially now that even most mobile processors have AES instructions). But nobody would put something like an S-box in a cipher created today.


If your point is "reference implementations have never been sufficient for real-world implementations", I agree, strongly, but of course that cuts painfully across several of Bernstein's own arguments about the importance of issues in PQ reference implementations.

Part of this, though, is that it's also kind of an incoherent standard to hold reference implementations to. Science proceeds long after the standard is written! The best/safest possible implementation is bound to change.


I don't think it's incoherent. On one extreme you have web standards, where it's now commonplace to not finalize standards until they're implemented in multiple major browser engines. Some web-adjacent IETF standards also work like this (WebTransport over HTTP3 is one I've been implementing recently).

I'm not saying cryptography should necessarily work this way, but it's not an unworkable policy to have multiple projects implement a draft before settling on a standard.


Look at the timeline for performant non-leaking implementations of Weierstrass curves. How long are you going to wait for these things to settle? I feel like there's also a hindsight bias that slips into a lot of this stuff.

Certainly, if you're going to do standards adoption by open competition the way NIST has done with AES, SHA3, and MLKEM, you're not going to be able to factor multiple major implementations into your process.


This isn’t black and white. There’s a middle ground between:

* Wait for 10 years of cryptanalysis (specific to the final algorithm) before using anything, which probably will be relatively meager because nobody is using it

* Expect the standardization process itself to produce a blessed artifact, to be set on fire as a false god if it turns out to be imperfect (or more realistically, just cause everybody a bunch of pain for 20 years)

Nothing would stop NIST from adding a post-competition phase where Google, Microsoft, Amazon, whoever the hell is maintaining OpenSSL, and maybe Mozilla implement the algorithm in their respective libraries and kick the tires. Maybe it’s pointless and everything we’d expect to get from cryptographers observing that process for a few months to a year has already been suitably covered, and DJB is just being prissy. I don’t know enough about cryptanalysis to know.

But I do feel very confident that many of the IETF standards I’ve been on the receiving end of could have used a non-reference implementation phase to find practical, you-could-technically-do-it-right-but-you-won’t issues that showed up within the first 6 months of people trying to use the damn thing.


I don't know what you mean by "kick the tires".

If by that you mean "perfect the implementation", we already get that! The MLKEM in Go is not the MLKEM in OpenSSL is not the MLKEM in AWS-LC.

If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable. It's the publication of the standard itself that is the forcing function for high-quality competing implementations. In particular, part of arriving at high-quality implementations is running them in production, which is something you can't do without solving the coordination problem of getting everyone onto the same standard.

Here it's important to note that nothing we've learned since Kyber was chosen has materially weakened the construction itself. We've had in fact 3 years now of sustained (urgent, in fact) implementation and deployment (after almost 30 years of cryptologic work on lattices). What would have been different had Kyber been a speculative or proposed standard, other than it getting far less attention and deployment?

("Prissy" is not the word I personally would choose here.)


I mean have a bunch of competent teams that (importantly) didn’t design the algorithm read the final draft and write their versions of it. Then they and others can perform practical analysis on each (empirically look for timing side channels on x86 and ARM, fuzz them, etc.).

> If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable.

The forcing function can potentially be: this final draft is the heir apparent. If nothing serious comes up in the next 6 months, it will be summarily finalized.

It’s possible this won’t get any of the implementers off their ass on a reasonable timeframe - this happens with web standards all the time. It’s also possible that this is very unlikely to uncover anything not already uncovered. Like I said, I’m not totally convinced that in this specific field it makes sense. But your arguments against it are fully general against this kind of phased process at all, and I think it has empirically improved recent W3C and IETF standards (including QUIC and HTTP2/3) a lot compared to the previous method.


Again: that has now happened. What have we learned from it that we needed to know 3 years ago when NIST chose Kyber? That's an important question, because this is a whole giant thread about Bernstein's allegation that the IETF is in the pocket of the NSA (see "part 4" of this series for that charming claim).

Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers. All of them had the know-how and incentive to write implementations of their constructions and, if it was going to showcase some glaring problem, of their competitors'. What makes you think that we lacked implementation understanding during this process?


I don’t think IETF is in the pocket of the NSA. I really wish the US government hadn’t hassled Bernstein so much when he was a grad student, it would make his stuff way more focused on technical details and readable without rolling your eyes.

> Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers.

That’s actually my point! When you’re trying to figure out if your standard is difficult to implement correctly, that everyone who worked on the reference implementations is a genius who understands it perfectly is a disadvantage for finding certain problems. It’s classic expert blindness, like you see with C++ where the people working on the standard understand the language so completely they can’t even conceive of what will happen when it’s in the hands of someone that doesn’t sleep with the C++ standard under their pillow.

Like, would anyone who developed ECC algorithms have forgotten to check for invalid curve points when writing an implementation? Meanwhile among mere mortals that’s happened over and over again.


I don't think this has much of anything to do with Bernstein's qualms with the US government. For all his concerns about NIST process, he himself had his name on a NIST PQC candidate. Moreover, he's gotten into similar spats elsewhere. This isn't even the first time he's gotten into a heap of shit at IETF/IRTF. This springs to mind:

https://mailarchive.ietf.org/arch/msg/cfrg/qqrtZnjV1oTBHtvZ1...

This wasn't about NSA or the USG! Note the date. Of course, had this happened in 2025, we'd all know about it, because he'd have blogged it.

But I want to circle back to the point I just made: you've said that we'd all be better off if there was a burning-in period for implementors before standards were ratified. We've definitely burnt in MLKEM now! What would we have done differently knowing what we now know?


> What would we have done differently knowing what we now know?

With the MLKEM standard? Probably nothing, Bernstein would have done less rambling in these blog posts if he was aware of something specifically wrong with one of the implementations. My key point here was that establishing an implementation phase during standardization is not an incoherent or categorically unjustifiable idea, whether it makes sense for massive cryptographic development efforts or not. I will note that something not getting caught by a potential process change is a datapoint that it’s not needed, but isn’t dispositive.

I do think there is some baby in the Bernstein bathwater that is this blog post series, though. His strongest specific point in these posts was that the TLS working group adding a cipher suite with an MLKEM-only key exchange this early is an own goal (though that's of course not the fault of the MLKEM standard itself). That’s an obvious footgun, and I’ll miss the days when you could enable all the standard TLS 1.3 cipher suites and not stress about it. The arguments to keep it in are legitimately not good, but in the area director’s defense, we’re all prone to motivated reasoning when talking to someone who will inevitably accuse you of colluding with the NSA to bring about 1984.


In what way is adding an MLKEM-only code point an "own goal"? Exercise for the reader: find the place where Bernstein proposed we have hybrid RSA/ECDH ciphersuites.

Yeah except there are certified versions of AES written in C. Which makes your point what exactly?

if you actually read your parent comment: "typical web image quality"

Typical web image quality is like it is partly because of lack of support. It’s literally more difficult to show a static HDR photo than a whole video!

PNG supports HDR with up to 16 bits per channel, see https://www.w3.org/TR/png-3/ and the cICP, mDCV and cLLI chunks.

With incredibly bad compression ratios.

HDR should not be "typical web" anything. It's insane that websites are allowed to override my system brightness setting through HDR media. There's so much stuff out there that literally hurts my eyes if I've set my brightness such that pure white (SDR FFFFFF) is a comfortable light level.

I want JXL in web browsers, but without HDR support.


There's nothing stopping browsers from tone mapping[1] those HDR images using your tone mapping preference.

[1]: https://en.wikipedia.org/wiki/Tone_mapping
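
As a sketch of what that could look like (my example, not any browser's actual pipeline): the simplest global operator is Reinhard's l / (1 + l), which compresses any HDR luminance into [0, 1). Real tone mappers are fancier, but the principle is the same.

```rust
// Reinhard-style global tone mapping. Input: scene luminance normalized so
// that 1.0 = SDR reference white; values above 1.0 are HDR highlights.
// Output: display luminance compressed into [0, 1).
fn reinhard(l: f64) -> f64 {
    l / (1.0 + l)
}
```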


What does that achieve? Isn't it simpler to just not support HDR than to support HDR but tone map away the HDR effect?

Anyway, which web browsers have a setting to tone map HDR images such that they look like SDR images? (And why should "don't physically hurt my eyes" be an opt-in setting anyway instead of just the default?)


> What does that achieve?

Because then a user who wants to see the HDR image in all its full glory can do so. If the base image is not HDR, then there is nothing they can do about it.

> And why should "don't physically hurt my eyes" be an opt-in setting anyway instead of just the default?

While I very much support more HDR in the online world, I fully agree with you here.

However, I suspect the reason will boil down to what it usually does: almost no users change the default settings ever. And so, any default which goes the other way will invariably lead to a ton of support cases of "why doesn't this work".

However, web browsers are dark-mode aware; they could be HDR-aware too and do what you prefer based on that.


What user wants the web to look like this? https://floss.social/@mort/115147174361502259

That video is clearly not encoded correctly. If it were, the levels would match the background, given that there is no actual HDR content visible in that video frame.

Anyway, even if the video was of a lovely nature scene in proper HDR, you might still find it jarring compared to the surrounding non-HDR desktop elements. I might too, depending on the specifics.

However, like I said, it's up to the browser to handle this.

One suggestion I saw mentioned by some browser devs was to make the default to tone map HDR if the page is not viewed in fullscreen mode, and switch to full HDR range if it is fullscreen.

Even if that doesn't become the default, it could be a behavior the browser could let the user select.


> That video is clearly not encoded correctly.

Actually I forgot about auto-HDR conversion of SDR videos which some operating systems do. So it might not be the video itself, but rather the OS and video driver ruining things in this case.


Ideally, browsers should just not support HDR.

Well I strongly disagree on that point.

Just because we're in the infancy of wide HDR adoption and thus experience some niggling issues while software folks work out the kinks isn't a good reason to just wholesale forego the feature in such a crucial piece of infrastructure.

Sure, if you don't want HDR in the browser I do think there should be a browser option to let you achieve that. I don't want to force it on everyone out there.

Keep in mind the screenshot you showed is how things looked on my Windows until I changed the auto-HDR option. It wasn't the browser that did it, it was completely innocent.

It was just so long ago I completely forgot I had changed that OS configuration.


If you want to avoid eye pain then you want caps on how much brightness can be in what percent of the image, not to throw the baby out with the bathwater and disable it entirely.

And if you're speaking from iphone experience, my understanding is the main problem there isn't extra bright things in the image, it's the renderer ignoring your brightness settings when HDR shows up, which is obviously stupid and not a problem with HDR in general.


If the brightness cap of the HDR image is full SDR brightness, what value remains in HDR? As far as I can see, it's all bath water, no baby

> If the brightness cap of the HDR image is full SDR brightness, what value remains in HDR?

If you set #ffffff to be a comfortable max, then that would be the brightness cap for HDR flares that fill the entire screen.

But filling the entire screen like that rarely happens. Smaller flares would have a higher cap.

For example, let's say an HDR scene has an average brightness that's 55% of #ffffff, but a tenth of the screen is up at 200% of #ffffff. That should give you a visually impressive boosted range without blinding you.
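
A policy like that could be sketched as follows (the function name and the exact curve are made up for illustration, not taken from any standard): smaller highlights get more headroom above the user's configured SDR white, saturating at some ceiling.

```rust
// Hypothetical brightness cap: a highlight covering a smaller fraction of
// the screen is allowed more headroom above the user's SDR white, up to 4x.
fn allowed_peak_nits(sdr_white_nits: f64, area_fraction: f64) -> f64 {
    let headroom = (1.0 / area_fraction.clamp(0.01, 1.0)).sqrt().min(4.0);
    sdr_white_nits * headroom
}
```

With SDR white set to 200 nits, a full-screen flash stays capped at 200 nits, while a highlight covering 1% of the screen could reach 800.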


Oh.

I don't want the ability for 10% of the screen to be so bright it hurts my eyes. That's the exact thing I want to avoid. I don't understand why you think your suggestion would help. I want SDR FFFFFF to be the brightest any part of my screen goes to, because that's what I've configured to be at a comfortable value using my OS brightness controls.


I strongly doubt that the brightness to hurt your eyes is the same for 10% of the screen and 100% of the screen.

I am not suggesting eye hurting. The opposite really, I'm suggesting a curve that stays similarly comfortable at all sizes.


I don't want any one part of my screen to be a stupidly bright point light. It's not just the total amount of photons that matters.

It is not just the total amount.

But it's not the brightest spot either.

It's in between.


I just don't want your "in between" "only hurt my eyes a little" solution. I don't see how that's so hard to understand. I set my brightness so that SDR FFFFFF is a comfortable max brightness. I don't understand why web content should be allowed to go brighter than that.

I'm suggesting something that WON'T hurt your eyes. I don't see how that's so hard to understand.

You set a comfortable max brightness for the entire screen.

Comfortable max brightness for small parts of the screen is a different brightness. Comfortable. NO eye hurting.


It's still uncomfortable to have 10% of the screen get ridiculously bright.

Yes, it's uncomfortable to have it get "ridiculously" bright.

But there's a level that is comfortable that is higher than what you set for FFFFFF.

And the comfortable level for 1% of the screen is even higher.

HDR could take advantage of that to make more realistic scenes without making you uncomfortable. If it was coded right to respect your limits. Which it probably isn't right now. But it could be.


I severely doubt that I could ever be comfortable with 10% of my screen getting much brighter than the value I set as max brightness.

But say you're right. Now you've achieved images looking completely out of place. You've achieved making the surrounding GUI look grey instead of white. And the screen looks broken when it suddenly dims after switching tabs away from one with an HDR video. What's the point? Even ignoring the painful aspects (which is a big thing to ignore, since my laptop currently physically hurts me at night with no setting to make it not hurt me, which I don't appreciate), you're just making the experience of browsing the web worse. Why?


In general, people report that HDR content looks more realistic and pretty. That's the point, if it can be done without hurting you.

Do they? Do people report that an HDR image on a web page that takes up roughly 10% of the screen looks more realistic? Do they report that an HDR YouTube video, which mostly consists of a screen recording with the recorded SDR FFF being mapped to the brightness of the sun, looks pretty? Do people like when their light-mode GUI suddenly turns grey as a part of it becomes 10x the brightness of what used to be white? (see e.g https://floss.social/@mort/115147174361502259)

Because that's what HDR web content is.

HDR movies playing on a livingroom TV? Sure, nothing against that. I mean it's stupid that it tries to achieve some kind of absolute brightness, but in principle, some form of "brighter than SDR FFF" could make sense there. But for web content, surrounded by an SDR GUI?


> when their light-mode GUI suddenly turns grey as a part of it becomes 10x the brightness of what used to be white

I don't know why you're asking me about examples that violate the rules I proposed. No I don't want that.

And obviously boosting the brightness of a screen capture is bad. It would look bad in SDR too. I don't know why you're even bringing it up. I am aware that HDR can be done wrong...

But for HDR videos where the HDR actually makes sense, yeah it's fine for highlights in the video to be a little brighter than the GUI around them, or for tiny little blips to be significantly brighter. Not enough to make it look gray like the misbehavior you linked.


> I don't know why you're asking me about examples that violate the rules I proposed. No I don't want that.

Other than the exaggerated 10x, I don't understand how it violates the rules you proposed. You proposed a scheme where part of the screen should be allowed to be significantly brighter than the surrounding SDR GUI's FFF. That makes the surrounding GUI look grey.

> And obviously boosting the brightness of a screen capture is bad. It would look bad in SDR too. I don't know why you're even bringing it up.

I'm bringing it up because that's how HDR looks on the web. Most web content isn't made by professional movie studios.

The example video I linked conforms with your suggested rules, FWIW: most of the image is near black, and only a relatively small part of it is white. The average brightness probably isn't over SDR FFF. Yet it still hurts.


it actually is somewhat an HDR problem because the HDR standards made some dumb choices. SDR standardizes relative brightness, but HDR uses absolute brightness even though that's an obviously dumb idea and in practice no one with a brain actually implements it.

In a modern image chain, capture is more often than not HDR.

These images are then graded for HDR or SDR. I.e., sacrifices are made on the image data such that it is suitable for a display standard.

If you have an HDR image, it's relatively easy to tone-map that into SDR space, see e.g. BT.2408 for an approach in Video.

The underlying problem here is that the Web isn't ready for HDR at all, and I'm almost 100% confident browsers don't do the right things yet. HDR displays have enormous variance, from "slightly above SDR" to experimental displays at Dolby Labs. So to display an image correctly, you need to render it properly to the display's capabilities. Likewise if you want to display an HDR image on an SDR monitor. I.e., tone mapping is a required part of the solution.

A correctly graded HDR image taken of the real world will have something like 95% of the pixel values falling within your typical SDR (Rec.709/sRGB) range. You only use the "physically hurt my eyes" values sparingly, and you take the room conditions into consideration when designing the peak value. As an example: cinema using DCI-P3 peaks at 48 nits, because the cinema is completely dark; 48 nits is more than enough for pure white in that environment. But take that image and put it on a display sitting inside during the day, and it's not nearly enough for white. Add HDR peaks into this, and it's easy to see that in a cinema you probably shouldn't peak at 1000 nits (which is about 4.x stops of light above the DCI-P3 peak). In short: rendering to the display's capabilities requires that you probe the light conditions in the room.
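
The stop arithmetic above is just log2 of the luminance ratio; a quick sanity check of the cinema numbers from this comment:

```rust
// Stops of light between two luminance levels = log2(to / from).
fn stops(from_nits: f64, to_nits: f64) -> f64 {
    (to_nits / from_nits).log2()
}
```

stops(48.0, 1000.0) comes out to roughly 4.4, consistent with the "4.x stops" figure.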

It's also why you shouldn't be able to manipulate brightness on an HDR display. We need that to be part of the image rendering chain such that the right decisions can be made.



How about websites just straight up aren't allowed to physically hurt me, by default?

Web sites aren’t made for just you. If images from your screen are causing you issues, that is a you / your device problem, not a web site problem.

I agree, it's not a web site problem. It's a web standards problem that it's possible for web sites to do that.

Note the spec does recommend providing a user option: https://drafts.csswg.org/css-color-hdr-1/#a11y

You asked “which web browsers have a setting to tone map HDR images such that they look like SDR images?”; I answered. Were you not actually looking for a solution?

I was looking for a setting, not a hack.

That's simulated using kwok, not real.

> Unfortunately running 1M real kubelets is beyond my budget.


someone coded it once, everyone else just adds another dependency that fulfills the spec, they don't even have to search for "dark patterns", just "most effective"

How much incompetence do we accept or tolerate before we deem it negligence? If someone adds a consent popup or similar thing to a website, usually knowing that there is a reason why it must be done, and that this reason is the GDPR, it seems quite incompetent not to know the first thing about what is required, and not to do one's due diligence to read up on it when one doesn't know.

Perhaps it would change things for the better, if this special kind of people were at least temporarily removed from the job, until they have gained basic knowledge about their job and how it affects other people.


if you make it easy to be lazy and panic vs properly handling the error, you've designed a poor language

At Facebook they name certain "escape hatch" functions in a way that inescapably make them look like a GIANT EYESORE. Stuff like DANGEROUSLY_CAST_THIS_TO_THAT, or INVOKE_SUPER_EXPENSIVE_ACTION_SEE_YOU_ON_CODE_REVIEW. This really drives home the point that such things must not be used except in rare extraordinary cases.

If unwrap() were named UNWRAP_OR_PANIC(), it would be used much less glibly. Even more, I wish there existed a super strict mode when all places that can panic are treated as compile-time errors, except those specifically wrapped in some may_panic_intentionally!() or similar.
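
There's no such compiler mode in Rust today, but Clippy's restriction lints (clippy::unwrap_used, clippy::expect_used, and clippy::panic are real lint names) get part of the way there: deny them in a scope, and each intentional panic site then needs an explicit #[allow]. A sketch:

```rust
// Deny glib .unwrap() calls in this function; under `cargo clippy` each
// violation becomes a hard error instead of a silent potential panic.
#[deny(clippy::unwrap_used)]
fn first_char(s: &str) -> Option<char> {
    s.chars().next() // forced to propagate the Option rather than unwrap it
}
```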


> make them look like a GIANT EYESORE

React.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED comes to mind. I did have to reach for this once, but the naming certainly works for keeping it out of example code, and when reading other implementations the danger is immediately apparent.

At some point it was renamed to __CLIENT_INTERNALS_DO_NOT_USE_OR_WARN_USERS_THEY_CANNOT_UPGRADE which is much less fun.


> it certainly works

Not for this guy:

https://github.com/reactjs/react.dev/issues/3896


right and if the language designers named it UNWRAP_OR_PANIC() then people would rightfully be asking why on earth we can't just use a try-catch around code and have an easier life

But a panic can be caught and handled safely (e.g. via std::panic tools). I'd say that this is the correct use case for exceptions (ask Martin Fowler, of all people).

There is already a try/catch around that code, which produces the Result type, which you can presumptuously .unwrap() without checking if it contains an error.

Instead, one should use the question mark operator, which immediately returns the error from the current function if a Result is an error. This is closely analogous to rethrowing an exception, but only requires typing one character, the "?".


Probably not, since errors as values are way better than exceptions.

How so? An exception is a value that's delivered to the closest conceptually appropriate point that was chosen to handle it, allowing you to keep your "happy path" as clean code, and your "exceptional circumstances" path at the level of abstraction that makes sense.

It's way less book-keeping with exceptions, since you intentionally don't have to write code for the exceptional behavior except where it makes sense to. The return-by-value approach necessarily implements the same behavior, where handling is bubbled up to the conceptually appropriate place through returns, but with much more typing involved. Care is required either way, since failing to properly bubble up an error can happen in both styles (a missing re-raise for exceptions, a missing return after handling for return values).


There are many, many pages of text discussing this topic, but having programmed in both styles, exceptions make it too easy for a programmer to simply ignore them. Errors as values force you to explicitly handle them there, or toss them up the stack. Maybe some other languages have better exception handling, but in Python it’s god awful. In big projects you can basically never know when or how something can fail.

I would claim the opposite. If you don't catch an exception, you'll get a halt.

With return values, you can trivially ignore an error.

    let _ = fs::remove_file("file_doesn't_exist");

    or

    value, error = some_function()
    // carry on without doing anything with error
In the wild, I've seen far more ignoring return errors, because of the mechanical burden of having type handling at every function call.

This is backed by decades of writing libraries. I've tried to implement libraries without exceptions (admittedly my cargo-cult preference long ago), but ignoring errors was so prevalent among the users of all those libraries that I now always include a "raise"-type boolean that defaults to True on any function that returns an error value, to make exceptions, and their handling, the default behavior.

> In big projects you can basically never know when or how something can fail.

How is this fundamentally different than return value? Looking at a high level function, you can't know how it will fail, you just know it did fail, from the error being bubbled up through the returns. The only difference is the mechanism for bubbling up the error.

Maybe some water is required for this flame war. ;)


I can agree to disagree :)

Exceptions are hidden control flow, whereas error values are not.

That is the main reason why Zig doesn’t have exceptions.


Correction: unchecked exceptions are hidden control flow. Checked exceptions are quite visible, and I think that more languages should use them as a result.

I'd categorize them more as "event handlers" than "hidden". You can't know where the execution will go at a lower level, but that's the entire point: you don't care. You put the handlers at the points where you care.

> don't have to write code for that exceptional behavior, except where it makes sense to.

The great Raymond Chen wrote an excellent blog post on how this isn't really true, and how exceptions can lure programmers into mistakenly thinking they can just forget about failure cases.

Cleaner, more elegant, and harder to recognize https://devblogs.microsoft.com/oldnewthing/20050114-00/?p=36...

(ctrl-f for taskbar to skip to heart of his point.)


I mean, his post seems obviously wrong, or at least ill-chosen to support his point. Surely you can see that an inner implementation of the icon class requiring a special hidden order in which properties must be set can happen in any language, and really isn't related at all to whether you use try-catch handling or error values as return codes.

What he seems to be saying is "obviously in C I would be checking the icon handle for being non-null, so clearly error value handling is superior", but that is only obvious to someone who knows the API, and checking values for validity has to be done in exception-based code too. It's just that exception-based code doesn't pretend it cannot fail somewhere you don't know about.

The better default assumption when programming is that you don't know what this code is doing, but it should just work. Unchecked exceptions are the best fit for that paradigm: you should not have to care about every single line, obsessively checking the error value of every API you ever use, in the false hope that nothing can fail because you did your duty. No, it can still fail, and all that error checking doesn't help you program better, more clearly, or faster. It swamps the code with so many extra lines that it's practically double the size. All this makes it less clear, and that is also what his post shows.


> Surely you can see that an inner implementation of the icon class requiring a special hidden order

In practice, programmers don't find it easy to keep in mind that certain functions might throw. This is a real problem with unchecked exceptions and with C-style error codes that sloppy programmers might ignore entirely.

> [...] on which properties to set first can happen in any language

A carefully designed library using a statically typed functional language, especially a pure functional language, might sometimes be able to eliminate such hidden ordering bugs.

Rust used to have a feature to help the compiler detect invalid ordering of imperative operations, called typestates. This feature has since been mostly removed, though, as it saw little use. [0]

> isn't related at all to whether you use try-catch handling or error values as return codes

I guess Chen is assuming a reasonably diligent programmer who makes a habit of never discarding status/error values returned by functions. C++'s [[nodiscard]] can help ensure this.

(Of course, outside of C++, those aren't the only options. Idiomatic Haskell and Zig code forces the programmer to explicitly handle the possibility of an error. Same goes for Java's checked exceptions.)
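Rust has an analogue worth noting here: a hedged sketch, assuming only the standard library (`save_config` is a made-up example function). `Result` is marked `#[must_use]` in std, so silently discarding a status value draws a compiler warning, much like C++'s [[nodiscard]].

```rust
// Sketch: Result already carries #[must_use]; the attribute can also
// be put on a function directly, with an optional custom message.
#[must_use = "the config may not have been saved"]
fn save_config(path: &str) -> Result<(), String> {
    if path.is_empty() {
        return Err("empty path".to_string());
    }
    Ok(())
}

fn main() {
    // save_config("app.conf");           // would warn: unused Result
    let status = save_config("app.conf"); // handling it silences the warning
    assert!(status.is_ok());
}
```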

> What he seems to be saying is that "obviously in C I would be checking the icon handle for being non-null so clearly error value handling is superior"

I don't think he's exactly arguing for the C-style approach; he's more just criticizing exceptions, especially unchecked exceptions. I agree the C-style approach has considerable problems.

> It's just that exception based code doesn't pretend that it cannot panic somewhere where you don't know.

With checked exceptions, you know precisely which operations can throw.

> Unchecked exception handling is the best way to fit that paradigm, you should not have to care about every single line and what it does and constantly sort of almost obsessively check error values of all the APIs you ever use to have this false hope that it cannot panic because you did your duty

You do need to care about every line, or your plausible-looking code is likely to misbehave when an exception occurs, as Chen's post demonstrates. Unchecked exceptions deprive the compiler of the ability to ensure good exception-handling coverage. There is no error-handling model that allows the programmer to write good code by pretending errors won't arise.

(I presume that by panic you mean throw an unchecked exception.)

[0] https://cliffle.com/blog/rust-typestate/
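The partial-state hazard Chen describes isn't exclusive to exceptions, either. A hedged Rust sketch (the `Account`/`withdraw` types are made up): an early return on an error value leaves state just as inconsistent if the error path isn't written with the same care as the happy path.

```rust
struct Account {
    balance: i64,
    log: Vec<String>,
}

fn withdraw(acct: &mut Account, amount: i64) -> Result<(), String> {
    acct.log.push(format!("withdraw {amount}"));
    if amount > acct.balance {
        // Bug: the log already records a withdrawal that never
        // happened; the push above should come after this check.
        return Err("insufficient funds".to_string());
    }
    acct.balance -= amount;
    Ok(())
}

fn main() {
    let mut acct = Account { balance: 10, log: Vec::new() };
    assert!(withdraw(&mut acct, 100).is_err());
    // The balance is untouched, but the audit log is now wrong:
    assert_eq!(acct.balance, 10);
    assert_eq!(acct.log.len(), 1);
}
```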


...and you can? try-catch is usually less ergonomic than the various ways you can inspect a Result.

    try {
      data = some_sketchy_function();
    } catch (e) {
      handle the error;
    }
vs

    result = some_sketchy_function();
    if let Err(e) = result {
      handle the error;
    }
Or better yet, compare the problematic cases where the error isn't handled:

    data = some_sketchy_function();
vs

    data = some_sketchy_function().UNWRAP_OR_PANIC();
In the former (the try-catch version that doesn't try or catch), the lack of handling is silent. It might be fine! You might just depend on your caller using `try`. In the latter, the compiler forces you to use UNWRAP_OR_PANIC (or, in reality, just unwrap) or `data` won't be the expected type and you will quickly get a compile failure.

What I suspect you mean, because it's a better argument, is:

    try {
        sketchy_function1();
        sketchy_function2();
        sketchy_function3();
        sketchy_function4();
    } catch (e) {
        ...
    }
which is fair, although how often is it really the right thing to let all the errors from 4 independent sources flow together and then get picked apart after the fact by inspecting `e`? It's an easier life, but it's also one where subtle problems constantly creep in without the compiler having any visibility into them at all.

it's practically always the case that you use a try-catch for more than just one source / line of code. Except for database/network calls, I don't remember a single case where I ever used a try-catch for just a single line. The subtle problems come from error handling via values: you check, but do you check perfectly? What happens when APIs change and the underlying functions add more error cases? Then you constantly have more work to do. Nonstop error checking that you don't care about. This is exactly where humans are terrible: really important work that is drudgery, where if you ever mess up once you fail in very painful ways. Exception handling solves all of this. It fits how humans should be working, and it fits the underlying hardware reality as well: we are big-picture, and we should not be designing languages for describing logic that force us to do drudgery constantly and care about the implementation details of every single thing we call.

Unwrap isn't a synonym for laziness; it's like an assertion. When you do unwrap() you're saying the Result should NEVER fail, and if it does, it should abort the whole process. What was wrong was the developer's assumption, not the use of unwrap.

It also makes it very obvious in the code, something very dangerous is happening here. As a code reviewer you should see an unwrap() and have alarm bells going off. While in other languages, critical errors are a lot more hidden.
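A small Rust sketch of this unwrap-as-assertion idea (the map and its contents are made up for illustration):

```rust
use std::collections::HashMap;

fn main() {
    let mut ports: HashMap<&str, u16> = HashMap::new();
    ports.insert("http", 80);

    // Assertion-style unwrap: "http" was just inserted, so this lookup
    // cannot fail; a panic here would indicate a logic bug.
    let http = *ports.get("http").unwrap();
    assert_eq!(http, 80);

    // expect() additionally documents *why* the assumption holds,
    // which helps the reviewer the parent comment describes.
    let http2 = *ports.get("http").expect("inserted a few lines above");
    assert_eq!(http2, 80);
}
```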

I hate that it's a method. That can get lost in a method chain easily enough during a code review.

A function or a keyword would interrupt that and make it less tempting


Well, you can request Clippy to tell you about them. I do that in my hobby projects.
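For example, a sketch assuming Clippy's allow-by-default `clippy::unwrap_used` lint (`parse_port` is a made-up helper):

```rust
// The lint only fires once you opt in, e.g. with a crate-level
// attribute like this (or via a `[lints.clippy]` table in Cargo.toml).
#![warn(clippy::unwrap_used)]

fn parse_port(s: &str) -> Option<u16> {
    s.parse().ok()
}

fn main() {
    // Under `cargo clippy`, this line would be flagged:
    //     let p = parse_port("8080").unwrap();
    let p = parse_port("8080").unwrap_or(0);
    assert_eq!(p, 8080);
}
```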

> What was wrong was the developer assumption, not the use of unwrap.

How many times can you truly prove that an `unwrap()` is correct and that you also need that performance edge?

Ignoring the performance aspect, to prove such a thing you need to know the inner workings of the call giving you a `Result`. That knowledge is only valid at the time you write your `unwrap()`, and won't necessarily hold later.

Also, aren't you implicitly forcing whoever changes the function to check for every smartass dev that decided to `unwrap` at their callsite? That's bonkers.


I doubt that this unwrap was added for performance reasons; I suspect it was rather added because the developer temporarily didn't want to deal with what they thought was an unlikely error case while they were working on something else; and no other system recognized that the unwrap was left in and flagged it before it was deployed on production servers.

If I were Cloudflare I would immediately audit the codebase for all uses of unwrap (or similar rust panic idioms like expect), ensure that they are either removed or clearly documented as to why it's worth crashing the program there, and then add a linter to their CI system that will fire if anyone tries to check in a new commit with unwrap in it.


Panics are for unexpected error conditions, like your caller passed you garbage. Results are for expected errors, like your caller passed you something but it's your job to tell if it's garbage.

So the point of unwrap() is not to prove anything. Like an assertion it indicates a precondition of the function that the implementer cannot uphold. That's not to say unwrap() can't be used incorrectly. Just that it's a valid thing to do in your code.

Note that none of this is about performance.


> when you do unwrap() you're saying the Result should NEVER fail

Returning a Result by definition means the method can fail.


> Returning a Result by definition means the method can fail.

No more than returning an int by definition means the method can return -2.


> No more than returning an int by definition means the method can return -2.

What? Returning an int does in fact mean that the method can return -2. I have no idea what your argument is with this, because you seem to be disagreeing with the person while actually agreeing with them.


> What? Returning an int does in fact mean that the method can return -2.

What? No it doesn't.

  fn square(n: i32) -> i32 {
      n * n
  }
This method cannot return -2.

Though in this case it's more like knowing that the specific way you call the function in foo.rs will never get back a -2.

  fn bar(n: i32, allow_negative: bool) -> i32 {
      let new = n * 2;
      if allow_negative || new >= 0 { new } else { 0 }
  }
  bar(x, false)

What? Results have a limited number of possible error states that are well defined.

Some call points to a function that returns a Result will never return an Error.

Some call points to a function that returns an int will never return -2.

Sometimes you know things the type system does not know.


The difference is functions which return Result have explicitly chosen to return a Result because they can fail. Sure, it might not fail in the current implementation and/or configuration, but that could change later and you might not know until it causes problems. The type system is there to help you - why ignore it?

Because it would be a huge hassle to go into that library and write an alternate version that doesn't return a Result. So you're stuck with the type system being wrong in some way. You can add error-handling code upfront but it will be dead code at that point in time, which is also not good.

As a hypothetical example, when making a regex, I call `Regex::new(r"\d+")`, which returns a Result because my regex could be malformed and fail to compile. It is entirely reasonable to unwrap this, though, as I will find out pretty quickly whether it works or fails once I test the program.
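A std-only analogue of the same situation (no external crate needed): `str::parse` returns a Result because the input could be malformed, but with a fixed literal the failure case is ruled out at the call site, so unwrap() acts as an assertion rather than laziness.

```rust
fn main() {
    // Literal input: this parse cannot fail, so unwrap() is fine.
    let n: u32 = "42".parse().unwrap();
    assert_eq!(n, 42);

    // With runtime input, the same call genuinely can fail and
    // deserves real handling instead of unwrap().
    let user_input = "forty-two";
    assert!(user_input.parse::<u32>().is_err());
}
```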

Yeah, I think I expressed it wrongly here. A more correct version would be: "when you do unwrap() you're saying that an error on this particular path shouldn't be recoverable, and we should fail fast."

Division can fail, but I can write a program where I'm sure division will not fail.

It's a little subtler than this. You want it to be easy to not handle an error while developing, so you can focus on getting the core logic correct before error-handling; but you want it to be hard to deploy or release the software without fully handling these checks. Some kind of debug vs release mode with different lints seems like a reasonable approach.

So… basically every language ever?

Except maybe Haskell.


It's easy to cause this kind of failure in Haskell also.

And Gleam

In Rust, `.unwrap()` is nine characters, whereas propagating the Result via `?` is one.
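A minimal sketch of that one-character alternative (`double_port` is a made-up function): `?` propagates the error to the caller instead of panicking in place.

```rust
fn double_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    let port: u16 = s.parse()?; // on Err, returns early with the error
    Ok(port * 2)
}

fn main() {
    assert_eq!(double_port("40"), Ok(80));
    assert!(double_port("oops").is_err());
}
```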

All languages with few exceptions have these kinds of escape hatches like unwrap


Works when you have an Erlang-style system that does graceful handling for you: reporting, restarting.

This is untrue. A `?` operator would have done just fine here. I agree with you, though, that it should be possible to explicitly forbid unwraps.

but also there are so many bikes there already that they didn't need to raise general awareness that much

people think that running nodejs servers is a good idea, and those fall over if there's ever so much as a stiff breeze, so they put cloudflare in front and call it a day.
