It should. It has some issues rendering some sites (Google's AI overview acted funny, but if you still use Google you aren't their intended audience anyway), but otherwise it works fine on content-focused sites. YouTube works, IIRC; I don't know about adblock, though (there's uBlock legacy, but that hasn't been updated in a while).
From my understanding, the code is MIT, but the model isn't? What constitutes the "Software" anyway? Aren't resources like images, sounds and the like exempt from it (hence covered by ordinary copyright unless separately licensed)? If so, by the same token, an ML model is not part of the "Software" either. By the way, the same prohibition is repeated on the Hugging Face model card.
That's one thing; the other is that you find out you were optimizing for the wrong thing, and now it takes more time and effort to re-optimize for the right thing.
The reason is floating-point precision error, sure, but that check is not going to solve the problem.
Take the difference of two numbers with large exponents, where the result should be algebraically zero but isn't quite zero numerically? This check fails to catch it. Take the difference of two numbers with very small exponents, where the result is not actually algebraically zero? This check says it's zero.
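For example, with a fixed cutoff like 1e-9 (a minimal sketch in Go; the constant and names are just illustrations standing in for the check being discussed):

    package main

    import (
        "fmt"
        "math"
    )

    const eps = 1e-9 // hypothetical fixed cutoff, standing in for "the check"

    func looksZero(x float64) bool { return math.Abs(x) < eps }

    func main() {
        // Large exponents: (a + c) - a - c is algebraically zero, but c is
        // smaller than one ULP of a, so it is absorbed and the residue is -1.
        a, c := 1e20, 1.0
        large := (a + c) - a - c
        fmt.Println(large, looksZero(large)) // -1 false: the check misses an algebraic zero

        // Small exponents: the difference is genuinely nonzero, yet far below
        // the fixed cutoff, so the check wrongly reports zero.
        x, y := 1e-12, 3e-12
        small := x - y
        fmt.Println(small, looksZero(small)) // ~-2e-12 true: false positive
    }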
Yeah, at the least you'll need an understanding of ULPs[0] before you can write code that's safe in this way. And understanding ULPs means understanding that no single constant is going to be applicable across the FLT or DBL range.
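Roughly, a ULP-based comparison looks something like this (sketch only, in Go with float64; ulpDistance and nearlyEqual are just illustrative names, and real code would also have to handle opposite signs, NaN and infinities):

    package main

    import (
        "fmt"
        "math"
    )

    // ulpDistance reports how many representable float64 values lie between
    // x and y. Sketch only: assumes finite inputs with the same sign.
    func ulpDistance(x, y float64) uint64 {
        ix, iy := math.Float64bits(x), math.Float64bits(y)
        if ix > iy {
            return ix - iy
        }
        return iy - ix
    }

    // nearlyEqual treats values within maxUlps representable steps as equal,
    // so the tolerance scales with the magnitude of the operands instead of
    // being one fixed constant for the whole range.
    func nearlyEqual(x, y float64, maxUlps uint64) bool {
        return ulpDistance(x, y) <= maxUlps
    }

    func main() {
        a, b := 0.1, 0.2
        x := a + b // 0.30000000000000004...
        y := 0.3   // 0.29999999999999999...
        fmt.Println(x == y)               // false: off by a rounding error
        fmt.Println(ulpDistance(x, y))    // 1: they are adjacent doubles
        fmt.Println(nearlyEqual(x, y, 4)) // true
    }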
You can add the guideline, but then people will just skip the "I asked" part and post the answer straight away. Apart from the obvious LLM-esque structure of most of those bot answers, how could you tell when someone has reworked an answer enough that it reads like a genuine human one?
The point still stands. The human body isn't going to change. That's why an insulin pump can afford all kinds of rigorous engineering, while web-facing infrastructure needs to be able to adapt to change quickly.
> That's why an insulin pump can afford all kinds of rigorous engineering, while web-facing infrastructure needs to be able to adapt to change quickly.
The only reason we have a web in the first place is rigorous engineering. The whole thing was meant to be decentralized; if you're going to purposefully centralize a critical feature, you are not going to get away with 'oh, we need to adapt to changes quickly, so let's abandon rigor'.
That's just irresponsible. In that case we'd be better off without CF. And I don't see CF arguing this; in fact, I'm pretty sure CF would be more than happy to expend the extra cycles, so maybe stop trying to make them look bad?
The best thing about gccgo is that it is not burdened with the weirdness of golang's calling convention, so the FFI overhead is basically the same as calling an extern function from C/C++. Take a look at [0] and see how bad golang's cgo call latency is compared to C. gccgo is not listed there, but from my own testing it's about the same as C/C++.
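To give a feel for what that kind of benchmark measures, here's a rough cgo micro-benchmark sketch (my own illustration, not code from [0]; needs a C toolchain and cgo enabled). The C function does almost nothing, so the timing is dominated by the cost of crossing the Go/C boundary:

    package main

    /*
    static int add(int a, int b) { return a + b; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    //go:noinline
    func goAdd(a, b int) int { return a + b }

    func main() {
        const n = 1_000_000

        start := time.Now()
        for i := 0; i < n; i++ {
            C.add(C.int(i), C.int(1)) // each call crosses the cgo boundary
        }
        fmt.Println("cgo call:", time.Since(start)/n, "per call")

        sum := 0
        start = time.Now()
        for i := 0; i < n; i++ {
            sum += goAdd(i, 1) // plain Go call for comparison
        }
        fmt.Println("Go call: ", time.Since(start)/n, "per call; checksum", sum)
    }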
Isn't that horribly out of date? More recent benchmarks elsewhere, run after some Go improvements, show Go's C FFI having drastically lower overhead, by at least an order of magnitude, IIUC.