tbh I get you; but one has to realize this has nothing to do with that company and everything to do with the current nature of the technology business, where "the winner takes it all".
What I'm saying is that instead of hating on Cloudflare, one can look up how a DNS server works and start getting into DDoS mitigation; but even after a couple of months, anybody would still have only scratched the surface of it.
I don't think it's Cloudflare's "goal" to centralize the internet, nor is it to set up captchas everywhere; but it's definitely frustrating.
It's every internet company's goal to centralize the part of the internet that aligns with what they do; Cloudflare's part happens to be most of the internet, since they provide low-level infrastructure services.
CRDTs are also crazy slow due to their architecture; even the best algorithms out there are costly by design, so adding homomorphic encryption is even more of a challenge. Though it really is impressive, I'm curious whether this can be usable at all.
edit: to bring some "proof" of my claim, from this very page: `To calculate the new map, the server must go through and merge every single key. After that, it needs to transfer the full map to each peer — because remember, as far as it knows, the entire map is different.`
(And even these optimizations are nascent. It can still get so much better.)
The section you quoted describes an effect of homomorphic encryption alone.
There is the problem that both CRDTs and encryption add some overhead, and the overhead is additive when used together. But I can't tell if that is the point you are trying to make.
Yep. Author here - that article is out of date now. I should really do a followup. Performance of CRDTs has improved again through a new grab bag of tricks. I’ve also been told the beta of automerge 3 uses a lot of the optimisations in that post, and it’s now much faster as a result.
A CRDT library should be able to handle millions of changes per second. If it's the bottleneck, something somewhere has gone wrong.
The overhead is usually multiplicative per-item. Let's say you're doing N things. CRDTs make that O(Nk) for some scaling factor k, and adding encryption makes it O(Nkj) for some scaling factor j.
Give or take some multiplicative log (or worse) factors depending on the implementation.
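Back-of-the-envelope, with invented factors (these are illustrative numbers, not measurements of any real library):

```python
# Illustrative only: made-up scaling factors, not benchmarks of any CRDT or
# encryption implementation.
N = 1_000_000        # number of edits
k = 4                # hypothetical per-edit CRDT metadata/merge overhead factor
j = 50               # hypothetical per-edit overhead factor from encryption

plain_cost    = N          # baseline work, O(N)
crdt_cost     = N * k      # O(N*k)
crdt_enc_cost = N * k * j  # O(N*k*j): the factors multiply, they don't just add

print(plain_cost, crdt_cost, crdt_enc_cost)  # 1000000 4000000 200000000
```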
> CRDTs are also crazy slow due to their architecture
You must back up your extraordinary claim with some extraordinary evidence. There is nothing inherently slow in CRDTs.
Also, applying changes is hardly on anyone's hot path.
The only instances where I've seen anyone complaining about CRDT performance turned out to come from very naive, overly chatty implementations that spammed individual changes down the wire. If you come up with code that requires a full HTTPS connection to send a single character, the problem is not the algorithm.
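To make "chatty" concrete, here's a toy sketch (the class, thresholds, and transport are all made up for illustration): ops accumulate locally and go out as one payload, so the transport cost stops dominating the CRDT itself.

```python
import json
import time

# Hypothetical peer-side buffer that batches pending ops and flushes them
# periodically, instead of opening a connection per keystroke.
class BatchingSync:
    def __init__(self, send, flush_interval=0.05, max_batch=512):
        self.send = send                  # callable that ships one payload over the wire
        self.flush_interval = flush_interval
        self.max_batch = max_batch
        self.pending = []
        self.last_flush = time.monotonic()

    def push(self, op):
        self.pending.append(op)
        if len(self.pending) >= self.max_batch or \
           time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self.pending:
            self.send(json.dumps(self.pending))  # one payload carrying many ops
            self.pending.clear()
        self.last_flush = time.monotonic()

sync = BatchingSync(send=print)
for i, ch in enumerate("hello world"):
    sync.push({"insert": ch, "pos": i})
sync.flush()  # everything goes out in one (or a few) messages, not one per character
```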
Is it the CRDT that's slow there, or is the problem that they've made it one party's job to update everybody?
By having a server in the mix it feels like we're forcing a hub/spoke model on something that wants to be a partial mesh. Not surprising that the hub is stressed out.
The whole point of Conflict-free Replicated Data Types is that you don't need an authoritative server. You're thinking of Operational Transform which does require an authority.
While it is true that CRDTs don't require an authoritative server, the hub and spoke model (which could also be thought of as having a well-known always-online super peer) is more efficient and provides a better user experience. In practice most products that are built with CRDTs today use this model.
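To make the "no authoritative server" point concrete, here's a minimal state-based CRDT sketch (a grow-only counter): any two peers can merge each other's state directly and converge to the same result, so a hub is an optimization for availability and fan-out rather than a requirement.

```python
# Minimal sketch of a state-based CRDT: a grow-only counter (G-Counter).
# Each peer tracks its own increment count; merge takes the per-peer maximum.
# Merging is commutative and associative, so no authoritative server is needed.
def g_counter_merge(a, b):
    return {peer: max(a.get(peer, 0), b.get(peer, 0)) for peer in a.keys() | b.keys()}

alice = {"alice": 3}   # alice incremented 3 times
bob   = {"bob": 5}     # bob incremented 5 times
carol = {"carol": 1}

# Any peer can merge with any other, in any order:
state1 = g_counter_merge(g_counter_merge(alice, bob), carol)
state2 = g_counter_merge(carol, g_counter_merge(bob, alice))
assert state1 == state2
print(sum(state1.values()))  # 9, the converged counter value
```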
> CRDTs are also crazy slow due to their architecture
What kinds of CRDTs specifically are you referring to? On its own this statement sounds far too broad to be meaningful. It's like saying "nested for loops are crazy slow".
I did have some bad experiences with OTEL, and I have a lot of freedom on deployment; I had never heard of SigNoz, I will definitely check it out. SigNoz works with OTEL, I suppose?
I wonder if there are any other adapters for trace ingest besides OTEL?
The Jaeger collector, perhaps, but then you'd have to use the Jaeger UI. SigNoz has a much nicer UI that feels more integrated, but last I checked it had annoying bugs, like not keeping the time selection when navigating between screens.
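For what it's worth, SigNoz ingests OTLP natively (it bundles an OpenTelemetry Collector), so the standard OTel SDK exporters should work unchanged. A minimal Python sketch; the endpoint, port, and service name are placeholders for whatever your deployment actually uses:

```python
# Point the standard OpenTelemetry Python SDK at an OTLP endpoint.
# SigNoz's bundled collector listens on gRPC port 4317 in a default setup;
# check your own deployment, the endpoint below is an assumption.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "my-service"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("do-work"):
    pass  # anything traced here ends up in whatever OTLP backend is listening
```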
This is akin to re-implementing the complete language.
Writing programming languages before AI was a bit of a daunting task; now it's way easier to grasp the first principles and dive in. It would still take time to get something production-ready, but that's definitely something that could happen.
The majority of the innovation here is in building enough rails (specifically around lowering Python's language features to native code) so that LLM codegen can help you transform any Python code into equivalent native code (C++ and Rust in our case).
Yes, but presumably dealing with just a couple of systems like this has to be a quick matter. These things are probably off right now, so it's just a matter of finding them, shooting the people guarding them and either destroying or taking them.
Formatting is like the dot on the i; there are 200 other small details that are just completely off-putting to me:
- naming conventions (AIs are lazy and tend to use generic names with no meaning), such as "Glass" instead of "GlassProduct";
- error management conventions.
But the most troublesome thing to me is that it just "pisses" out code and gives no afterthought to the problem it is solving or the person it is talking to.
The number of times I have to repeat myself just to get a stubborn answer with no discussion is alarming. It does not benefit my well-being, and it is annoying to work with except in a handful of exploratory cases.
I believe LLMs are actually the biggest data heist ever organized. We believe that those models will get better at doing their jobs, but the reality is that we are just giving away code, knowledge, and ideas at scale, correcting the model for free, and paying to be allowed to do so. And when we look at the 37% minimum hallucination rate, we can more easily understand that the actual thought comes from the human using it.
I'm not comfortable having to argue with a machine and having to explain to it what I'm doing, how, and why, just to get it to spam me with things I have to correct afterwards anyway.
The worst part is, all that data is the best insight into everything. How many people ask for X? How much time did they spend trying to do X? What were they trying to achieve? Who are their customers? Etc.