He says he is not planning to bomb any data centers, but he's also said that if he were planning to bomb data centers, he'd lie about it in public, so...
The author and most of his associates generally qualify as techno-utopians, and speak routinely of the "glorious transhumanist future".
Smarter-than-human AGI really is different from all previous technologies, in much the same way that Homo sapiens is different from all preceding life on Earth.
* A plan exists
* The plan itself predicts non-obvious results in smaller systems, and those tests have passed (prediction written *before* running test)
* A bunch of smart people have looked at it and said, "Yes, this looks plausible"
The closest thing to a plan is RLHF, which has failed every toy problem it's been thrown at and made everyone in the field say "even if this worked in toy problems, it wouldn't generalize".
I have yet to see a large statically typed program that didn't -- somewhere -- run into the limits of static typing and contain a set of workarounds, using void* or its linguistic equivalent. That's code a dynamic language doesn't need.
The only code you can be sure isn't buggy is code that doesn't exist.
void* is usually not a symptom of the limits of static typing, but of the limits of the [type system] design or the human brain. You can think of it as "ok, I give up. Anything can be passed here, proceed at your own risk, the compiler will not save you here, errors will show up at runtime". Even memory-safe Rust does not do without such unsafe blocks. In dynamically typed languages, though, that is everywhere.
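A minimal C++ sketch of that escape hatch (the callback and context names here are invented for illustration): once a value goes through void*, the compiler checks nothing, and a mismatch only surfaces at runtime.

```cpp
#include <cstdio>

// A callback API that erases the payload type: past this point the compiler
// can no longer check what "ctx" really points to.
typedef void (*callback_t)(void *ctx);

static void print_count(void *ctx) {
    // We *assume* ctx points at an int. If the caller passed something else,
    // this still compiles and only misbehaves at runtime.
    int *count = static_cast<int *>(ctx);
    std::printf("count = %d\n", *count);
}

static void invoke(callback_t cb, void *ctx) { cb(ctx); }

int main() {
    int count = 42;
    invoke(print_count, &count);   // fine
    double wrong = 3.14;
    invoke(print_count, &wrong);   // also compiles; the type error shows up only at runtime
}
```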
I have said this before: the safety benefits of static typing show up once you are working with data structures, not just simple variables. Imagine you have an external endpoint or library call that is specified to return a single object and does exactly that. Some time after release you are the maintenance programmer responsible for implementing spec changes:
* The object returned no longer has member/property x; it is now obtained by other means;
* The endpoint now returns a list of such objects.
How sure are you that tests in a dynamic language cover these cases? My experience is that tests very rarely get designed to anticipate data changes, because the data drives the test design. Which is more likely for a test: a) to check whether the returned object contains keys x, y and z, or b) to check whether the returned object is_list() (see appendix)?
Static typing covers such cases (see the sketch below). Static typing is not something that magically stops you from shooting yourself in the foot, but it is nevertheless a safety tool that CAN be used. It is of course a burden if one does not intend to use it, and that is the core of the debate.
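A hedged sketch of how a compiler surfaces both spec changes, written as a typed client wrapper in C++ (Item, fetch_item, and old_caller are invented names, not anything from the thread):

```cpp
#include <string>
#include <vector>

// Old spec: a single object with a member x.
//   struct Item { std::string x; std::string y; };
//   Item fetch_item();
//
// New spec: x is gone, and the endpoint returns a list of such objects.
struct Item { std::string y; };
std::vector<Item> fetch_item() { return { Item{"example"} }; }

std::string old_caller() {
    // Code written against the old spec stops compiling, so the compiler hands
    // the maintenance programmer a list of every call site that needs updating:
    //
    //   return fetch_item().x;
    //   // error: no member named 'x' in 'std::vector<Item>'
    //
    return fetch_item().front().y;  // updated call site
}
```

A dynamically typed version of the same wrapper keeps running until the missing key or the unexpected list is actually touched.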
Fun thing: in the second case, if your code manages to convert the input list to a map and assigns one returned object to a key that happens to coincide with the removed property, and map access looks syntactically the same as property access (a very specific set of assumptions, though), the bug can butterfly quite deep into the code before manifesting :)
I zoomed the image in and out a bunch -- no change. When I zoomed in enough, I could see the checkerboarding in A, but the overall color stayed the same as B.
This technique may not work if you've got a weird DPI that's causing scaling to something like 150%. In that case you'd need to set your browser's zoom level to 66.666%, which is probably not one of the levels it supports. My display scales by 200%, so I just needed to zoom out to 50%, which worked in every browser I tried.
Another option is to save the image from the webpage to disk and then open that image in a basic image editor (making sure the editor is not zoomed in at all). I'm not sure how feasible this is if you're on a phone though.
What would have been ideal is if the author of the article had included srcset alternatives for these images to cater for some of the more common high-DPI devices. It would then have just worked automatically for most people and caused a lot less confusion.
I also had trouble calling either one "more even". The first one has greater division at the darker end, and the second at the lighter, but they roughly balance out.
The extreme for me was figure 12. A and B are so similar I can't see the line between them, but C (the "corrected" square) is a completely different shade.
I'm viewing on a data projector. That's probably the reason. Still, it makes me skeptical that there's anything display-agnostic you can do for gamma.
The major point the article makes is not about any particular display. It is about image processing algorithms, such as color blending, anti-aliasing, resizing, etc.
All these algorithms assume they are performing math on linear-scale measurements of physical light. However, most image data is not encoded as linear-scale samples of light intensity; it is gamma encoded instead.
What the article gets slightly wrong, though, is that images are not gamma encoded to deal with the human eye's non-linear response to intensity. Instead, it's to deal with the non-linear response of CRT displays to linear amounts of voltage, as produced by camera sensors. The gamma encoding adjusts the image data so that a display will correctly produce linear scales of light intensity, matching the physical light measured from a scene.
You are right to be skeptical: gamma encoding can't really deal with the broad variety of different displays. However, it is still the case that most images are gamma encoded with roughly gamma 2.2, while image processing algorithms, on the other hand, assume gamma 1.0 and misbehave on data that is gamma 2.2.
It is, of course, still the case that, by chance, human visual response is roughly the inverse of gamma 2.2. But bringing this up while trying to make a point about performing operations on linear-gamma data is somewhat distracting.
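To make the misbehaviour concrete, here is a hedged C++ sketch of blending two pixels, using the rough gamma-2.2 power curve discussed above rather than the exact sRGB transfer function:

```cpp
#include <cmath>
#include <cstdio>

// Rough gamma-2.2 decode/encode (real sRGB uses a slightly different
// piecewise curve, but 2.2 is close enough to show the point).
double decode(double v) { return std::pow(v, 2.2); }       // encoded -> linear light
double encode(double v) { return std::pow(v, 1.0 / 2.2); } // linear light -> encoded

int main() {
    double black = 0.0, white = 1.0;  // gamma-encoded pixel values

    // Naive blend: math done directly on gamma-encoded data, which is what
    // most image-processing code ends up doing.
    double naive = (black + white) / 2.0;                           // 0.5

    // Physically correct blend: decode, average linear light, re-encode.
    double correct = encode((decode(black) + decode(white)) / 2.0); // ~0.73

    std::printf("naive = %.3f, linear-correct = %.3f\n", naive, correct);
}
```

A 50% black/white checkerboard physically averages to the second value, which is why it doesn't match a flat 0.5 grey.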
If the resolution you're sending into the projector matches the projector's native/optimal/recommended resolution (at least in the long dimension; it will letterbox the other dimension), and you set the digital keystone to neutral/zero, you will achieve the 1:1 pixel mapping required to comply with the author's statement that "These examples will only work if your [browser|projector|etc]* doesn’t do any rescaling on the images below."
* Author only said browser, but actually everything in the chain matters. If you're not at a 1:1 pixel mapping you're resampling, and resampling breaks the checkerboard example. Digital keystone (but not optical keystone with tilting lenses) included.
Last time I manually packed structures I was doing GPU programming. I forget the details, but the CPU and GPU had different alignment requirements so anything other than a manually packed structure broke in weird ways.
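For a concrete flavour of what that manual packing looks like, here is a hedged C++ sketch using std140-style GPU alignment rules (where a vec3 occupies a 16-byte slot); the struct and field names are invented, and the exact rules depend on the graphics API and layout qualifier in use:

```cpp
#include <cstdint>

// CPU-side mirror of a GPU uniform/constant buffer. Under std140-style rules
// a vec3 occupies a 16-byte slot, so the struct is padded explicitly to keep
// the CPU and GPU layouts byte-for-byte identical.
struct alignas(16) LightParams {
    float position[3];   // offset 0
    float _pad0;         // explicit padding to the next 16-byte boundary
    float color[3];      // offset 16
    float intensity;     // offset 28 -- packed into the vec3's padding slot
};

static_assert(sizeof(LightParams) == 32,
              "layout drifted; CPU and GPU would disagree on member offsets");
```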
Lots of users pick guessable ones unless you have extensive password rules to stop them, and those rules are a huge pain to your users.
Lots of users have terrible password hygiene, leaving them all sorts of places they shouldn't.
Enough users are going to forget their passwords that you need a recovery mechanism, which means an attacker needs to break either the password or the recovery. The most common recovery method is email, but Google is often the email provider.