csande17's comments

The article mentions that pinning is often used by mobile apps and IoT devices. In those contexts, you have an easy out-of-band mechanism to deliver the cert: baking it into the app/firmware binary. (The user generally receives this from the App Store, or in a cardboard box from a retailer, so no initial TLS connection to your server is required.)

If you're pinning the leaf certificate this way, really the only benefit I see of using a WebPKI cert is if you want to reuse the same API endpoint for a web app. Otherwise you're mostly getting a bunch of restrictions and downsides (information leaks from Certificate Transparency logs, revocation drama, etc.) that don't make sense if the cert is hard-coded in the client.
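
For illustration, here's a minimal sketch of that kind of leaf pinning using only Python's standard library (the pinned digest is a hypothetical placeholder that would be baked in at build time):

    import hashlib
    import socket
    import ssl

    # Hypothetical placeholder: the SHA-256 of our server's DER-encoded
    # leaf certificate, baked into the app binary at build time.
    PINNED_SHA256 = "0f1e2d..."  # hex digest

    def connect_pinned(host, port=443):
        # Skip WebPKI chain validation entirely; trust comes from the pin.
        # (check_hostname must be disabled before setting CERT_NONE.)
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)  # DER bytes of the leaf cert
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            sock.close()
            raise ssl.SSLError("certificate does not match the baked-in pin")
        return sock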


Distros have historically been very concerned with "real development"!

Debian, for example, packages all of the tools and libraries needed to build any package in Debian, so users can easily modify and recompile any package. Because there are a lot of packages in Debian, it's become a great, stable, vetted source for general-purpose compilers and libraries.
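
For instance, fetching and rebuilding a package from source is only a few commands (a sketch assuming deb-src entries are enabled in your apt sources; coreutils is just an arbitrary example):

    apt-get source coreutils          # fetch the package's source tree
    sudo apt-get build-dep coreutils  # install everything needed to build it
    cd coreutils-*/
    dpkg-buildpackage -us -uc         # rebuild the .deb, unsigned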

Rust really is an outlier here -- its marketing has managed to walk a delicate tightrope to be considered "stable" and "mature" enough to use for important projects like Linux, while also still being new and fast-moving enough that it's unreasonable to expect those projects to use anything but the most recent bleeding-edge nightly build. And that will create problems, if only for distros trying to build the kernel binaries that they ship to their users.


That's still user-focused. I actively avoid Debian as a development distro because things are so out of date and so customised. Arch is a much nicer development experience because, for the most part, they just take the up-to-date upstream projects and build them without fiddling with a bunch of stuff. (OTOH, if I'm standing up a box to run critical network services, Debian is strongly preferable.)


If I'm writing new software, I'm necessarily developing something that's not yet a package in any distro, so I don't necessarily want to be using distro tools to build it.

I also strongly disagree with the characterization that it's "easy" to modify and recompile "any" package in a given distro - typically, someone would prefer to modify the upstream source and build it (which may not be possible with the distro's supplied tools) and use the modified version. Distributions, in my experience, are quite bad about shipping software that's "easy" for users to modify.

It's a gross mischaracterization of the ecosystem to suggest that many Rust projects require a "bleeding-edge nightly" to build. Kernel modules have a moderate list of unstable features that are required, but many (all?) of them have already been stabilized or are on the path to stabilization, so you don't need a "bleeding edge" nightly.

In my opinion the lagging nature of distros illustrates one of the fundamental problems of relying on them for developing software, but hey, that's an ideological point.


> Kernel modules have a moderate list of unstable features that are required, but many (all?) of them have already been stabilized or are on the path to stabilization, so you don't need a "bleeding edge" nightly.

https://github.com/Rust-for-Linux/linux/issues/2 lists the "unstable features" required by the Rust for Linux codebase. It's a long list!

One of the features in the "Required" section was "added as unstable in 1.81", a version released three weeks ago. Presumably that means you need a nightly build that's newer than (or at least close to) the release of Rust 1.81, which seems pretty bleeding-edge to me.

I sure hope none of those "paths to stabilization" involve making any changes to the unstable features, because then release versions of the Linux kernel would be stuck pinning random old nightly builds of the Rust compiler. That seems even worse than depending on bleeding-edge ones.
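
(For context, pinning a toolchain in a Cargo project is typically done with rustup's rust-toolchain.toml; a minimal sketch, with an arbitrary example date:)

    [toolchain]
    channel = "nightly-2024-09-05"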


https://purgecss.com/ does this, kind of -- it used to be recommended by Tailwind back when Tailwind shipped a giant zip-bomb stylesheet of every possible property/value combination by default. I don't think it does the more complicated browser-like analysis you mention, though; it might just check whether class names appear in your HTML using a regex search.
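
Conceptually, that naive approach looks something like this (a toy sketch, not PurgeCSS's actual implementation; it ignores media queries and other nesting):

    import re

    def purge_css(css, html):
        # Collect every token in the HTML that could be a class name.
        used = set(re.findall(r"[\w-]+", html))
        kept = []
        # Naively treat the stylesheet as a flat list of "selector { body }" rules.
        for selector, body in re.findall(r"([^{}]+)\{([^}]*)\}", css):
            classes = re.findall(r"\.([\w-]+)", selector)
            # Keep the rule if it uses no classes, or any of them appear in the HTML.
            if not classes or any(c in used for c in classes):
                kept.append(selector.strip() + " { " + body.strip() + " }")
        return "\n".join(kept)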

The AMP WordPress plugin also does something like this IIRC (to try and fit stylesheets into AMP's size limit) but the tooling for it might be written in PHP.


How do you remove the unused CSS?

https://purifycss.online/

Above is a nice online version of PurifyCSS. But it seems to just minify the CSS, and doesn't actually delete the unused rules.


I wonder if SVG filters would let you do the "manual" approach without JavaScript. IIRC they're hardware-accelerated by default in Chrome, and they usually still work in browsers that disable WebGL for security/privacy reasons.


The folks at Wix experimented with this (although I couldn't get their demo working): https://twitter.com/YDaniv/status/1820558358648435020

They say it works in Chromium, is buggy in Firefox, and is slow in Safari.


The article suggests Apple is using its buying power to push component prices down, so that it can keep more of the profit on its expensive smartphones for itself.


I appreciate you and the other commenter for correcting me, thank you. That makes much more sense.


Also, the homomorphic encryption is a requirement for third-party caller ID providers, not Apple themselves. Apple's first-party "Contact Photos" caller ID feature operates primarily on the "trust Apple" security model AFAIK.


Yeah, the if statements are definitely a little weird-looking to me. It seems like the goal is to allow for more traditional infix syntax while still defining most of the language using the macro system; it's unfortunate if their macro system can't handle more traditional-looking "if (x) block; else block" conditional statements.


It would be pretty hard for the attacker to precisely arrange a hundred tiny sprinkles on the surface of a pill to exactly match a known-good pattern. (At least compared to just throwing a bunch of assorted sprinkles on the pill randomly and taking a photo of the result, which is what legitimate manufacturers would be doing.)


Yeah, this is one common claim about sprinkles - that the pattern can't be reproduced. Is that really true? Manually, sure, probably, perhaps. But if sprinkle signing is common enough, or the attacker has enough budget - and they do - then sprinkle matching deserves a machine. A sprinkles printer.

And if you have a standard algorithm that converts a sprinkle picture or three into a hash, then you have a precise target for the machine to benchmark against.
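
A toy sketch of what such a standard algorithm might look like (entirely hypothetical; it quantizes detected sprinkle positions and colors so that two photos of the same pill produce the same digest):

    import hashlib

    def sprinkle_hash(sprinkles, grid=32):
        # `sprinkles` is a list of (x, y, color) tuples with x and y
        # normalized to [0, 1); snapping to a coarse grid keeps small
        # measurement noise from changing the digest.
        cells = sorted({(int(x * grid), int(y * grid), color)
                        for x, y, color in sprinkles})
        encoded = "|".join("%d,%d,%s" % c for c in cells)
        return hashlib.sha256(encoded.encode()).hexdigest()

    # Example: two detected sprinkles
    print(sprinkle_hash([(0.12, 0.80, "red"), (0.55, 0.33, "blue")]))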


> (in b4 it's Actually Indians.)

Bad news:

https://gizmodo.com/amazon-reportedly-ditches-just-walk-out-...

> Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.


Lol when people are cheaper than AI...

Must be a soul-crushing job, though.


If Cloudflare really is serious about this for R2, that's going to be difficult for them to communicate in a trustworthy way. Their other products (in particular their flagship DDoS mitigation service) definitely seem to operate on a "free, but if your site is large enough you'll get an email from salespeople threatening to disconnect you unless you pay $$$" basis, using the boilerplate "don't overburden our servers" clause in their ToS.


> don't overburden our servers

Does this effectively mean DDoS protection that's free until you actually use it?


Speaking as a bystander, a more charitable option might be: "DDoS protection is free for targets which fit a generic risk profile and are targeted randomly or only for small-scale spite."

P.S.: In contrast, imagine a website dedicated to journalistic content about how {dictator} is a crook and a parasite that has made {country} weak and a joke to the rest of the world. If there's a regular nation-state-level DDoS from nodes in {country}, Cloudflare might say: "Hey, uh, your situation is not normal anymore, and it's not exactly your fault, but we are a business that needs to recoup costs..."


> Hey, uh, your situation is not normal anymore, and it's not exactly your fault, but we are a business that needs to recoup costs...

In exactly none of the posts about these debacles did Cloudflare say anything of the sort. It's always comic-book-villain levels of communication.


That's basically true for everything. Do you think home insurance will pay out if you burn down your house every year? Or car insurance if you trash your car a few times a year? I see DDoS protection as a kind of insurance for an event that happens once in a while; if it happens all the time, one definitely needs something more than a free service.


Yeah, but presumably they aren't the ones DDoSing their own site.

Warning: Insurance is not going to pay if YOU burn down your home, even a little bit!


"Insurance companies hate this one weird trick."

That reminds me of an NPR piece [0] discussing how a worrying number of people seem to be getting wrong/illegal tax advice from "influencers" on TikTok etc. I'm sure a similar thing with insurance fraud either has happened or will happen eventually.

[0] https://www.npr.org/transcripts/1197958760


"Unmetered Mitigation: DDoS Protection Without Limits"

https://blog.cloudflare.com/unmetered-mitigation/

"Cloudflare mitigates record-breaking 71 million request-per-second DDoS attack"

https://blog.cloudflare.com/cloudflare-mitigates-record-brea...

"Mitigating a 754 Million PPS DDoS Attack Automatically"

https://blog.cloudflare.com/mitigating-a-754-million-pps-ddo...


You've posted these links without much context, but yes, I do think Cloudflare does a pretty good job of backing up the specific claim that they'll absorb a large, random DDoS attack for you. (Although it doesn't seem like the attack in your last link could even have been attributed to any specific customer in the first place.)

Where things start to get shaky is if your site uses a lot of bandwidth for legitimate traffic, or otherwise uses the service in an unusual way. Personally I see it like an old shared hosting plan that will probably let you use some burst capacity if you get Slashdotted, but operates under a vague shared understanding that the service is only for "normal websites". (Which includes a lot of policies and content guidelines that only become problems if you show up on someone's "sort by usage descending" dashboard.)

I think publishing a specific amount of bandwidth that customers are allowed to consume would go a long way to putting R2 in the former category. Maybe that number is your maximum object size multiplied by your GET request limit, maybe it's your current total network capacity, maybe it's eight octillion zottabits per second.


That's Cloudflare's CTO.

