The issue is that QR's alphanumeric segments are uppercase only, and while browsers will automatically lowercase the protocol and domain name, you'll either have to make all your paths uppercase or lowercase incoming paths automatically. On top of that, when someone scans the code they will likely be presented with an uppercase URL (if it doesn't automatically open in a browser), which could give pause to anyone who doesn't already know that uppercase domains are equivalent to lowercase ones.
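As a rough sketch of that constraint (my own illustration, not taken from any QR library): alphanumeric mode only covers digits, uppercase letters, space and $ % * + - . / :, so the scheme and host can be uppercased safely, but the path and query have to already fit that character set.

// Hedged sketch: uppercase the case-insensitive parts of a URL and check
// whether the rest survives QR alphanumeric mode's 45-character set.
const QR_ALNUM = /^[0-9A-Z $%*+\-.\/:]*$/;

function toQrAlphanumericUrl(url) {
  const u = new URL(url);
  // Scheme and host are case-insensitive, so uppercasing them is safe.
  const prefix = (u.protocol + '//' + u.host).toUpperCase();
  const rest = u.pathname + u.search + u.hash;
  // Lowercase letters, '?', '=', '&', '#' etc. are not representable as-is.
  return QR_ALNUM.test(rest) ? prefix + rest : null;
}

console.log(toQrAlphanumericUrl('https://example.com/HELLO')); // "HTTPS://EXAMPLE.COM/HELLO"
console.log(toQrAlphanumericUrl('https://example.com/hello')); // null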
Ideally QR codes would have had a segment to encode URIs more efficiently (73-82 characters depending on how the implementation decided to handle the "unreserved marks"), but that ship has long sailed.
Many QR code readers will auto-lowercase URLs that are encoded in alphanumeric encoding. The rest will recognize uppercase URLs just fine. Alphanumeric encoding was basically made for URLs.
A significant downside to the <picture> element, and alternative image formats in general, is that when most users want to download an image they expect a format they already know how to work with. To most users a .avif or .webp is an annoyance, because they reasonably expect most of their tools to be unable to open it.
It's disappointing that browser vendors haven't picked up on this and offered a "Save as PNG/JPEG/GIF" option when downloading images, but for now, if it seems reasonable that users would want to download an image you're displaying, you should probably stick to the legacy formats.
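For reference, here is the kind of fallback chain under discussion, built through the DOM as a minimal sketch (the file names are made up). A browser that supports AVIF or WebP will fetch that format, and "Save image as" will typically hand the user that file, so the trailing JPEG only helps browsers that can't decode the newer formats. That is exactly why the advice above is to stick to legacy formats when downloads matter.

// Equivalent to a <picture> element with AVIF and WebP sources and a JPEG fallback.
const picture = document.createElement('picture');

const avif = document.createElement('source');
avif.type = 'image/avif';
avif.srcset = 'photo.avif';   // hypothetical asset

const webp = document.createElement('source');
webp.type = 'image/webp';
webp.srcset = 'photo.webp';   // hypothetical asset

const img = document.createElement('img');
img.src = 'photo.jpg';        // the legacy format most tools can open
img.alt = 'Example photo';

picture.append(avif, webp, img);
document.body.appendChild(picture);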
Google search results do this weird trick. When you hover over a link, the line at the bottom of your browser window shows the actual URL. But if you do "copy link URL" on it, you get a Google tracker URL in your clipboard.
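The usual mechanism behind that trick looks roughly like the sketch below (my own illustration; the selector and tracker URL are made up, and the real implementation on google.com may differ): the href holds the real destination while you hover, and is swapped for a redirect URL the moment you press a mouse button, so both left-click navigation and "copy link URL" go through the tracker.

const link = document.querySelector('a.result');   // hypothetical result link
const destination = link.href;                      // what the status bar shows on hover
const tracker = 'https://tracker.example/redirect?url=' + encodeURIComponent(destination);

// mousedown fires before the context menu opens, so "copy link URL" sees the tracker.
link.addEventListener('mousedown', () => { link.href = tracker; });
// Restore the real URL when the pointer leaves, keeping the hover preview honest.
link.addEventListener('mouseleave', () => { link.href = destination; });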
Couldn't one do the same thing to make users get JPEGs when they try to save a WebP? How bad would it be?
It could be used, but this really seems like websites trying to "fix" browser UX. In cases like this, where the problem is generic, it seems best for the browser to provide the UX it thinks is best for users (possibly with preferences that let the user decide globally without having to configure every single site).
> A significant downside to the <picture> element, and alternative image formats in general, is that when most users want to download an image they expect a format they already know how to work with. To most users a .avif or .webp is an annoyance, because they reasonably expect most of their tools to be unable to open it.
Certainly not the case with WebP, which was announced by Google 12 years ago. On a recent version of macOS, Preview, the Mac’s default image and PDF viewer, can open WebP and AVIF files, making it easy for Mac users to convert to another format if they wish. Third-party graphics apps have also supported WebP for years now.
AVIF support isn’t as widespread yet but that will quickly change now.
BTW, iOS defaults to saving photographs in HEIC, which the average consumer has never heard of.
It's definitely worth mentioning JSONP, which worked by setting up a function in the global scope and using JavaScript to insert a new script tag that would hopefully call that function with the data. It was the ultimate trust exercise, as your target data vendor could execute any JavaScript it desired. Despite the name, a JSONP response could of course contain non-JSON data, like functions or class definitions.
TLDR of JSONP for those who are fortunate enough not to have dealt with it: you’d make an API call with
var script = document.createElement('script')
script.src = 'http://api.example.com/foo?bar=baz&callback=myFunction'
document.head.appendChild(script)
and then the server would (hopefully) return a JavaScript response, wrapping the JSON in the (global!) function of your choosing:
myFunction({...JSON here...})
In addition to the risk of a malicious API server being able to execute whatever code it wanted on your page, this also caused architectural headaches: the callback function had to be on `window` so that the JSONP response would have access to it when it loaded. Beyond the immediately obvious problems with globals, you also had to think very carefully about how to structure things so that the callback knew what it was supposed to do when called. (Woe betide you if some important state could change and the response didn’t have enough context to tell whether it was still relevant.)
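A minimal sketch of the consuming side (the function name is just an example): the callback has to live on `window` so the injected script can find it, and cleaning it up again is your problem.

// The injected <script> simply calls myFunction(...) in the page's top-level scope.
window.myFunction = function (data) {
  console.log('JSONP payload:', data);   // do whatever the page needs with the data
  delete window.myFunction;              // tidy up the global once it has fired
};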
HTTP has some strange rules about serving stale responses from cache, so usually you want to add must-revalidate to your Cache-Control header. That ensures the browser revalidates once the cached response goes stale.
Personally I've opted for "stale only" caching, so everything is served with Cache-Control: max-age=0,must-revalidate and a Last-Modified header, and the browser always makes corresponding If-Modified-Since requests. This means significantly more requests per page, even if the responses are mostly 304 Not Modified, but getting to avoid all forms of cache busting makes development a lot nicer.
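A minimal Node.js sketch of that setup, assuming a static ./public directory and ignoring error handling and path sanitisation:

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const file = './public' + req.url;        // hypothetical static root
  const mtime = fs.statSync(file).mtime;
  mtime.setMilliseconds(0);                 // HTTP dates only have 1-second precision

  res.setHeader('Cache-Control', 'max-age=0,must-revalidate');
  res.setHeader('Last-Modified', mtime.toUTCString());

  const since = req.headers['if-modified-since'];
  if (since && new Date(since) >= mtime) {
    res.statusCode = 304;                   // the cached copy is still good
    res.end();
    return;
  }
  res.end(fs.readFileSync(file));
}).listen(8080);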
It's definitely cheap to execute; the problem lies in the network overhead. With sub-resources (CSS, JS, images) you can go from 1 request per page to 10 or 100, which is still negligible for fast connections (10 Mbps+, HTTP/2) and servers with low per-request overhead, but the worst case is high-latency HTTP/1.x connections where each request really matters.
However if you are serving clients with highly restricted bandwidth you're probably going to want extremely cacheable resources (public, immutable) and perhaps even a completely different site architecture.
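That end of the spectrum might look like this sketch, assuming content-hashed file names (the naming scheme here is made up):

const http = require('http');

http.createServer((req, res) => {
  // A name like /app.3f2a9c1d.js changes whenever its contents do, so the
  // response can be cached for a year and never revalidated.
  if (/\.[0-9a-f]{8}\.(js|css|png)$/.test(req.url)) {
    res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
  }
  res.end('...');                           // asset body elided
}).listen(8080);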
If we permit the fairly recent QOI format[0] we can produce a 1x1 transparent pixel in just 23 bytes (14 byte header, 1 byte for QOI_OP_INDEX, 8 byte end marker).
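Spelled out as a sketch (my reconstruction from the published QOI spec, not taken from a reference encoder):

// 23-byte QOI file for a single fully transparent pixel.
const qoi = new Uint8Array([
  0x71, 0x6f, 0x69, 0x66,   // magic "qoif"
  0x00, 0x00, 0x00, 0x01,   // width  = 1 (big-endian uint32)
  0x00, 0x00, 0x00, 0x01,   // height = 1 (big-endian uint32)
  0x04,                     // channels = 4 (RGBA)
  0x00,                     // colorspace = sRGB with linear alpha
  0x00,                     // QOI_OP_INDEX 0: the zero-initialised table entry (0,0,0,0)
  0x00, 0x00, 0x00, 0x00,   // end marker: seven 0x00 bytes...
  0x00, 0x00, 0x00, 0x01    // ...followed by a single 0x01
]);
console.log(qoi.length);    // 23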
[EDIT] I realized that we actually run into one of QOI's drawbacks if we were to encode the 103 byte png in the article, as we actually need to repeat the pixel 65535 times, so we'd have floor(65535/62)=1057 QOI_OP_RUN bytes followed by another QOI_OP_RUN to repeat the last pixel. Here it's pretty clear that the QOI spec missed out on special handling of repeated QOI_OP_RUN operators, as long repetitions could have been handled in far fewer bytes.
The whole brands aspect has really confused me. If you search for "nature" on Unsplash right now, the first result is a picture of a person prominently holding a product in the most blatant product-placement way possible. So far these are basically working as banner ads; no sensible user would ever want to use a sponsored photo for their work.
I'm more afraid of how they'll modify their existing products to manipulate users into paying for "Premium" Getty stock photos over the free Unsplash ones :/
I think the parent argued that both serve a purpose. When you're dealing with a large selection of fonts a small pangram is more useful to help you narrow the list down - until it becomes small enough that you can switch to one of these font proofs to evaluate the final few fonts.
Websites like Google Fonts should absolutely use one of these font proofs for the dedicated font pages - or when simply comparing two fonts, but use a shorter pangram when comparing several fonts at once.
I think there is a general problem of fonts being treated as if they were all equivalent, when in reality there are fonts that can only sanely be used as display fonts, and body text fonts which, admittedly, you might use for both. The computer has lumped them all together in a way a typesetter never would. So, yes, different approaches for different uses.
As to choosing a font, having had a typographer, a graphic designer and a book dealer as three of my brothers, it is a job I can hardly find the courage to do. I also regularly wish others shared my self-doubt and left it to a professional, or at least just followed conventional wisdom. A font you notice is a bad font.
These visualizations are very satisfying and provide some wonderful insight.
A few minor bugs:
- The gray area is always too small, so a lot of the canvas never gets cleared.
- Perhaps this is intended, or maybe it's caused by the demo running at 144 Hz, but the drone is always swept away by the wind for me. It doesn't even stand a chance of fighting it.
Same issue at 120 Hz: the drone had no chance against the wind. It confused me until I read your comment.
Works fine after setting refresh rate to 60 Hz. (Btw, amazing how bad scrolling looks at 60 Hz after getting used to 120 Hz! Never noticed it before nor did I think it could matter...)
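For what it's worth, a hedged guess at the cause, assuming the demo advances its simulation by a fixed step per requestAnimationFrame callback: at 120 or 144 Hz the callback fires roughly twice as often as at 60 Hz, so per-frame forces like the wind effectively double. Scaling by elapsed time makes the motion refresh-rate independent, roughly like this:

let position = 0;
const velocity = 30;                // units per second, made-up value
let last = performance.now();

function frame(now) {
  const dt = (now - last) / 1000;   // seconds since the previous frame
  last = now;
  position += velocity * dt;        // same displacement per second at 60, 120 or 144 Hz
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);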
Having access to Cliqz' crawler would help reduce that dependence.
To be clear, though, DDG does have a crawler of its own, and has historically used Yandex data as well (though I don't know if they still do). IIRC they also used Yahoo, but now that's basically indistinguishable from Bing.