Where does "please make the play/pause button on the android lock screen big like it used to be, for my big thumb" land w/r/t Modernity, Subculture, or Rebelliousness?
Undefined in the standard doesn't mean undefined in GCC. Type-punning through unions has always been a special case that GCC defines beyond what the standard requires.
Regarding, e.g., the 13% perf regression in simdjson from disabling UB:
> A simpler alternative is to compile the program with LTO. We confirmed that LLVM’s inter-procedural analyses can propagate both alignment and dereferenceability information for this function, which allows the LTO build to recover the performance loss.
"can" is doing a lot of heavy-lifting here. Guaranteeing expected optimizations "will" be applied are hard-enough, without leaving it entirely to an easily-derailed indirect side-effect.
This is "can" has exactly the same meaning as in "UB can make your programms faster". You could replace it with "it does, at least with clang". LTO is, in this regard, the same as UB, and unlike guaranteed optimizations, such as the single member optimization, or the empty base optimization.
Concretely, the UB exploitation in question here is assuming that the "this" pointer in C++ is aligned and non-null, meaning it's a pervasive annotation throughout C++ codebases, not an edge case.
Relying on LTO to "discover" this annotation through interprocedural analysis -- based on my experience of looking at LTO in practice -- will not be as comprehensive, and even when it works it accomplishes its task in an achingly slow and expensive way.
User "Anoneuoid" from the source's own comment thread:
There is another aspect here where those averaged outcomes are also the output of statistical models. So it is kind of like asking whether statistical models are better at agreeing with other statistical models than humans.
You need to compare on both different variables and additionally produce actual error estimates on the comparison.
Say you're measuring successful treatments. You would have to use both the count (perhaps even signed, subtracting abject failures such as deaths) and the cost (financial, or number of visits), and then verify these numbers with a follow-up.
See, the definition of success is critical here. OR (odds ratio) and NNT (number needed to treat) don't count side effects against a treatment, for example.
So it may turn out that you're comparing completely different ideas of better instead of matching models.
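To make that concrete, a toy sketch with made-up rates (names like `controlEventRate` are illustrative): NNT boils down to one division over a single outcome definition, so anything outside that definition, such as side effects, never enters the comparison at all.

```typescript
// Toy NNT (number needed to treat) calculation with hypothetical rates.
// The "success" definition is the only thing the number sees: side
// effects, costs, and follow-up outcomes never enter it.
const controlEventRate = 0.20;   // hypothetical: bad outcomes without treatment
const treatmentEventRate = 0.15; // hypothetical: bad outcomes with treatment
const absoluteRiskReduction = controlEventRate - treatmentEventRate; // 0.05
const nnt = 1 / absoluteRiskReduction; // 20: treat 20 patients to avoid one event
console.log(nnt);
```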
>> The mix function is an interpolation function that linearly interpolates between the two input colors using a blend factor between and ( in our case).
>> A mix function for two colors works the same way, except we mix the color components. To mix two RGB colors, for example, we’d mix the red, green, and blue channels.
I agree with the colorspace alert. Lerping red and blue in OKLAB or OKLCH colorspace produces a much nicer effect. Also, the article details linear interpolation, but I think there's a lot of fun to be had by introducing some easing functionality into the interpolation[1] - it's not difficult to achieve in code, even in shader code.
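A minimal sketch of what I mean, in plain TypeScript rather than shader code (smoothstep stands in for any easing curve you like):

```typescript
type RGB = [number, number, number]; // channel values in 0..1

// Smoothstep easing: slow at the ends, fast in the middle.
const smoothstep = (t: number): number => t * t * (3 - 2 * t);

const lerp = (a: number, b: number, t: number): number => a + (b - a) * t;

// Ease the blend factor first, then mix each channel.
function mixEased(c0: RGB, c1: RGB, t: number): RGB {
  const e = smoothstep(t); // swap in any other easing function here
  return [lerp(c0[0], c1[0], e), lerp(c0[1], c1[1], e), lerp(c0[2], c1[2], e)];
}
```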
I do disagree with the article about the need to do such work in the WebGL space. Modern CPUs are insanely fast nowadays, and browsers have put in a lot of work over the past few years to make the Canvas 2D API as performant as possible - including moving as much work as possible into the GPU behind the scenes. With a bit of effort, gradients can be animated in 2D canvases in many interesting ways![2][3]
WebGL/OpenGL doesn't use sRGB in the shaders. If you load an sRGB texture, or render to an sRGB surface, the API automatically applies the gamma- (or inverse gamma) curve, so the shader only ever sees the linear values.
Correct, it uses just numbers without any specific information about a colorspace being involved. Decoding and encoding sRGB happen during (texture) read and write stages.
Quite right! I think if the values were linearized (~gamma 0.5) lerp might be mostly ok though, right?
And what about doing rgb->hsv, then lerp, then hsv->rgb? I'm unclear whether that also needs linearization, or whether the gamma can maybe just be done to the 'v' component before lerping?
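For the gamma part, the version I have in mind is roughly this (a sketch, using the common gamma-2.2 approximation rather than the exact piecewise sRGB curve):

```typescript
type RGB = [number, number, number]; // sRGB-encoded values in 0..1

const toLinear = (c: number): number => Math.pow(c, 2.2);   // decode sRGB
const toSRGB = (c: number): number => Math.pow(c, 1 / 2.2); // re-encode

// Lerp in linear light, then re-encode for display.
function mixLinearLight(a: RGB, b: RGB, t: number): RGB {
  return a.map((ca, i) =>
    toSRGB(toLinear(ca) + (toLinear(b[i]) - toLinear(ca)) * t)
  ) as RGB;
}

// Naive sRGB lerp of red and blue gives a muddy (0.5, 0, 0.5) midpoint;
// this gives roughly (0.73, 0, 0.73) instead.
console.log(mixLinearLight([1, 0, 0], [0, 0, 1], 0.5));
```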
Color is a surprisingly deep and fascinating topic, that's for sure! :)
Perceptual colors -- both sRGB and HSB -- are nonlinear, so you can't expect linear combinations to produce meaningful results (they often "interpolate through mud").
If you just want optical phenomena, you can convert to linear luminance -- WebGL and other modern graphics APIs actually do this internally when you load or render textures, so all shaders are handling optically-linear data, which is why the shader-produced images in the post look better than the JavaScript gradients.
Legacy OpenGL APIs used to assume sRGB, so you had to specify GL_LUMINANCE for non-color 'intensity' maps (which couldn't be blitted to FBOs, e.g.).
Modern OpenGL assumes linear color, so instead you have to specify sRGB on texture load to direct the driver to do colorspace conversion (e.g. GL_SRGB8 for typical RRGGBB byte triples).
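In WebGL2 terms, a minimal sketch (assuming an already-loaded `imageEl`; the only change from a plain RGBA8 upload is the internal format):

```typescript
const gl = (document.querySelector("canvas") as HTMLCanvasElement)
  .getContext("webgl2")!;
const imageEl = document.querySelector("img") as HTMLImageElement; // assumed loaded

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// SRGB8_ALPHA8 tells the driver the bytes are sRGB-encoded, so every
// texture fetch in the shader receives linearized values automatically.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.SRGB8_ALPHA8,
              gl.RGBA, gl.UNSIGNED_BYTE, imageEl);
```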
Mixing of colors in an "objective" way like blur (lens focus) is a physical phenomenon, and should be done in linear color space.
Subjective things, like color similarity and perception of brightness should be evaluated in perceptual color spaces. This includes sRGB (it's not very good at it, but it's trying).
Gradients are weirdly in the middle. Smoothness and matching of colors are very subjective, but color interpolation is mathematically dubious in most perceptual color spaces, because √((a+b)/2) ≠ (√a + √b)/2
(1,0,0) and (0,0,1) are each twice as bright, in terms of photons, as (0.5,0,0.5).
If you quickly apply gamma=2 so the midpoint is (0.707,0,0.707) your gradient will look much better. Although other commenters suggested mixing in more complicated colour spaces.
Yea as a gardener I enjoyed the themes but the images really threw me off. They don’t add anything. I always thought that stock images in blogs seemed unnecessary but at least they didn’t have glaring errors in them.
Because it works, probably. Blogs (and blog making tutorials) get selected by reach, not content, and images add to that, like arrows and stupid face close ups do on youtube. What you see is a cost-driven transformation of the concept.
Browser makers should straight up remove the JS API for interacting with history. There are legitimate uses for it, but the malicious actors far outweigh the good ones at this point. Just remove it.
That's a biased thing to say, since you're never going to notice the times when the history api is being used appropriately. Just as often I find myself raging when a webpage doesn't rewrite history at times when it should. Good taste is hard to come by.
The difference is that uBlock Origin is an extension you intentionally trust and install, while the JS APIs we're talking about are something any website (untrusted) can use.
To be fair, uBlock Origin has always been a special case. It's so good and so important and so trusted that it should have access to browser internals that normal extensions can't access.
Honestly, uBlock Origin shouldn't be an extension to begin with, it should be a literally built in feature of all browsers. Only reason it's not is we can't trust ad companies to maintain an ad blocker.
Perhaps users should be given an option to opt out of such APIs on a per-site basis (enabled by default). That way, users can intervene when the APIs are abused, while fair use remains transparent.
An advertising company controls the user agent everyone uses to access the internet, and wants to shove more ads into your eyeballs. uBlock exists as long as they allow it. Anyone who disagrees with this works for them or owns shares in the company.
So UBO isn't doomed, just UBO on Chrome. While that's significant given Chrome's market share, I and everyone else on the planet have the option to use something else, and will continue to do so.
Ah, I see what you mean. The canonicalization is done server-side, whereas redirects after processing forms could be done in JavaScript from an onclick or onsubmit handler.
I mean, without `history.pushState()` and `window.onpopstate` things wouldn't be as nice. Ok, I guess one could do about everything with `location.hash = ...` and `window.onhashchange`, like in the before times. But the server will not get the hash part of the URL, so a link to such a page can't be server-side rendered and has to fetch the actual page content in JavaScript, evaluating the hash.

When I browse things like image search it is really handy that you can use the browser's back button to close an opened image without losing any dynamic state of the page, and that the x button on the page will close the image and remove the history entry just the same way, so a later back won't re-open that image.
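Roughly this pattern, sketched (the element id is hypothetical):

```typescript
const viewer = document.getElementById("viewer")!; // hypothetical image overlay

function openImage(id: string): void {
  viewer.hidden = false;
  history.pushState({ image: id }, "", `#image-${id}`);
}

// The page's x button: drop the entry we created, so a later
// back won't re-open the image.
function closeImage(): void {
  history.back();
}

// Back button (or closeImage via history.back): hide the overlay,
// keeping the rest of the page's dynamic state intact.
window.onpopstate = (e: PopStateEvent) => {
  if (!e.state?.image) viewer.hidden = true;
};
```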
For me the back button wasn't hijacked.
But I am for disallowing the use of `history.go()` or any kind of navigation inside of `onpopstate`, `onhashchange`, `onbeforeunload` or similar or from a timer started from one of those.
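i.e. patterns like this one, which is all it takes to trap the back button today (a sketch of the abuse, not something to ship):

```typescript
// Back-button trap: push a shadow entry on load, then re-push it
// every time the user tries to leave via history navigation.
history.pushState(null, "", location.href);
window.onpopstate = () => {
  history.pushState(null, "", location.href); // navigation inside onpopstate
};
```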
Like I said: I recognize there are legitimate uses. But unfortunately, they are majorly outnumbered by people doing things like overwriting my history so that when I hit "back", it stays on the site instead of going back to my search engine. I would love to live in the world where malicious dark patterns didn't exist, and we could have nice things. But we don't, and so I would rather not have the functionality at all.
In Firefox, you can prevent this by setting `browser.navigation.requireUserInteraction` to true via about:config. I've been told that it breaks some stuff, but to date I haven't noticed any downsides.
It's already egregious when a site adds history pushState entries for just clicking through a gallery or something, but wow adding them just for scrolling down on a page is simply bizarre, especially on a page about usability.
I actually quite like the emojis they put in the output; it helps strike the balance of providing enough context while also giving a clear visual indicator for the actual error message.
They aren't going overboard on it, they just put a warning emoji in front of the error message.
Windows key + dot, then type in "warning" (or, if it was among the last ones you used, use the arrow keys), and hit Enter to insert it.
If you use an OS other than Windows, I'm sure there are similar flows available if you search for them. And since it's just Unicode, I'm sure there are numpad-based keybinds available too.
Emoji selector is fast and works perfectly fine over SSH, it's no different to any other input method that needs to use characters beyond 7-bit ASCII.
grep is a bit more iffy. UNIX command line tools seem to be a bit of a crapshoot in how, or if, they support Unicode, especially if you switch between different systems like Linux, BSD, Cygwin etc. You might need a bit of experimenting with the LANG variable to get it to work (e.g. Git Bash on Windows needs LANG=C.UTF16 to match an emoji). I've also had cases where grep or sed works, but awk doesn't, or vice versa. On the whole it works a lot better nowadays than it used to, though, and that's a win for non-English users of the command line as well as emoji fans.
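For comparison, matching the same character from a JS/TS runtime, where strings are Unicode throughout, is unambiguous (a sketch):

```typescript
const line = "⚠ warning: unused variable `x`";
console.log(/\u{26A0}/u.test(line));  // true: match U+26A0 by code point
console.log(/\p{Emoji}/u.test(line)); // true: Unicode property escape (ES2018+)
```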
Didn't you ever grep a text document written in a language other than English? No processing of CSV files in different charsets? Not even encountered a file with a non-English character in the name? Not to mention the folks who deliberately set LANG so that their compiler and everything else will give them localized error messages. This stuff was all much worse 25 years ago, even 15 years ago. Like them or not, I do think emojis have helped drive forward much better Unicode handling across the whole stack, including the command line.
> Didn't you ever grep a text document written in a language other than English?
Yes, but that was originally for humans, not for machine processing.
> Not to mention the folks who deliberately set LANG so that their compiler and everything else will give them localized error messages.
The horror! Who even works on translations for compiler error messages?!?
It makes absolutely no sense!
Next they'll want to localize programming language keywords. I wonder how well that will work at this current project of mine that has people native to 3 countries, none English-speaking ...
How old? I'm old enough that my first Linux experience included a stack of floppy disks and recompiling a 0.9x kernel in late high school, and my first C++ program was done with Borland C++ 3.1. It never occurred to me to look for a localized UI for either of them.
It's not a "what I think" thing, these were your literal words:
> How do you grep for it
And then how badly or well this works will depend on your build of grep and your environment variables, as the other user noted. I did not consider this, because I'd expect grep to just work with Unicode symbols like this when my stdin is set to UTF-8, which I'd further expect to always be the case in 2025, but it appears that's not an expectation one can reasonably have in the *nix world.
It was and continues to be unclear to me why you'd want to grep for the warning emoji though, since according to the article these are inserted somewhere deep in the console-visual explanations. They do not replace the slug denoting the compiler message type at the start of these, which as you said, can (still) be found by just grepping for "warning".
Oh but in the real world you vpn into a server that privately tailscales to some boxes that are hard to reach inside a factory and no one has physically touched them since 2018 at best ...