Hacker News new | past | comments | ask | show | jobs | submit | 6r17's comments login

I wonder if AI could help you translate those, or write a translator? I don't know if the language is too complex, though.

tbh I get you; but one has to realize this has nothing to do with that company and everything to do with the current nature of technological business, where "everyone wins it all".

What I'm saying is that instead of hating on Cloudflare, one can look up how a DNS server works and start getting into DDoS mitigation; but even after a couple of months, anybody would still have just scratched the surface of it.

I don't think it's Cloudflare's "goal" to centralize the internet, nor is it to set up captchas everywhere; but it's definitely frustrating.


It's every internet company's goal to centralize the part of the internet that aligns with what they do; Cloudflare's part happens to be most of the internet, since they provide low-level infrastructure services.

CRDTs are also crazy slow due to their architecture; even the best algorithms out there are costly by design, so adding homomorphic encryption is even more of a challenge. Though it really is impressive; I'm curious if this can be usable at all.

Edit, to bring some "proof" of my claim, from this very page: `To calculate the new map, the server must go through and merge every single key. After that, it needs to transfer the full map to each peer — because remember, as far as it knows, the entire map is different.`
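To make the quoted behavior concrete, here is a minimal plain-Python sketch of a last-writer-wins map merge (no encryption, entirely hypothetical names); note that the merge walks every key, regardless of how small the actual change was:

```python
def merge_lww(server_map, peer_map):
    # Naive last-writer-wins merge: the server walks every key the peer
    # holds, so cost scales with the size of the map rather than the size
    # of the actual change -- and under homomorphic encryption the server
    # cannot even tell which entries changed.
    merged = dict(server_map)
    for key, (val, ts) in peer_map.items():
        if key not in merged or merged[key][1] < ts:
            merged[key] = (val, ts)
    return merged

server = {"a": ("1", 1), "b": ("2", 2)}
peer   = {"b": ("3", 3), "c": ("4", 1)}
print(merge_lww(server, peer))
# {'a': ('1', 1), 'b': ('3', 3), 'c': ('4', 1)}
```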


CRDTs are not inherently “crazy slow”. Researchers just don’t succumb to the appeal of premature optimization.

See: https://josephg.com/blog/crdts-go-brrr/

(And even these optimizations are nascent. It can still get so much better.)

The section you quoted describes an effect of homomorphic encryption alone.

There is the problem that both CRDTs and encryption add some overhead, and the overhead is additive when used together. But I can't tell if that is the point you are trying to make.


Yep. Author here - that article is out of date now. I should really do a followup. Performance of CRDTs has improved again through a new grab bag of tricks. I’ve also been told the beta of automerge 3 uses a lot of the optimisations in that post, and it’s now much faster as a result.

A CRDT library should be able to handle millions of changes per second. If it's the bottleneck, something somewhere has gone wrong.


Please do! There are too few current articles about CRDTs around!

The article was a great read, many thanks! Looking forward to the next!

> additive

The overhead is usually multiplicative per-item. Let's say you're doing N things. CRDTs make that O(Nk) for some scaling factor k, and adding encryption makes it O(Nkj) for some scaling factor j.

Give or take some multiplicative log (or worse) factors depending on the implementation.
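Plugging in some entirely illustrative numbers shows how the factors compound multiplicatively rather than add:

```python
# Entirely made-up scaling factors, just to make "multiplicative" concrete.
N = 1_000_000  # operations
k = 4          # assumed per-op CRDT overhead factor
j = 50         # assumed per-op homomorphic-encryption overhead factor

additive       = N * k + N * j  # what the cost would be if overheads merely added
multiplicative = N * k * j      # cost when encryption wraps every CRDT operation

print(multiplicative // N)  # per-op slowdown factor: k * j = 200
```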


> CRDTs are also crazy slow due to their architecture ;

You must back up your extraordinary claim with some extraordinary evidence. There is nothing inherently slow in CRDTs.

Also, applying changes is hardly on anyone's hot path.

The only instance where I saw anyone complaining about CRDT performance, it turned out to come from very naive implementations that spammed changes over an overly chatty protocol. If you come up with code that requires a full HTTPS connection to send a single character down the wire, the problem is not the algorithm.


Is it the CRDT that's slow there, or is the problem that they've made it one party's job to update everybody?

By having a server in the mix it feels like we're forcing a hub/spoke model on something that wants to be a partial mesh. Not surprising that the hub is stressed out.


The whole point of Conflict-free Replicated Data Types is that you don't need an authoritative server. You're thinking of Operational Transform which does require an authority.
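As a concrete sketch of why no authority is needed, here is a minimal grow-only counter (G-Counter), the textbook state-based CRDT: merge is commutative, associative, and idempotent, so any two peers can sync directly in any order.

```python
def merge(a, b):
    # Element-wise max of per-replica counts. Commutative, associative,
    # and idempotent -- so replicas can exchange state peer-to-peer
    # with no authoritative server deciding the outcome.
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def value(counter):
    # The counter's value is the sum of every replica's contribution.
    return sum(counter.values())

r1 = {"alice": 3, "bob": 1}   # replica 1's view
r2 = {"bob": 2, "carol": 5}   # replica 2's view

assert merge(r1, r2) == merge(r2, r1)             # order doesn't matter
assert merge(r1, merge(r1, r2)) == merge(r1, r2)  # re-merging is a no-op
print(value(merge(r1, r2)))  # 10
```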

While it is true that CRDTs don't require an authoritative server, the hub and spoke model (which could also be thought of as having a well-known always-online super peer) is more efficient and provides a better user experience. In practice most products that are built with CRDTs today use this model.

I may have worded that poorly but my intent was to suggest that it would work better if there wasn't a server.

I didn't know the name for the serverful alternative though, so thanks for that.


> CRDTs are also crazy slow due to their architecture

What kinds of CRDTs specifically are you referring to? On its own this statement sounds far too broad to be meaningful. It's like saying "nested for loops are crazy slow".


> CRDTs are also crazy slow

compared to what? c'mon


I did have some bad experiences with OTEL, and I have a lot of freedom on deployment. I had never read about SigNoz; I will definitely check it out. SigNoz works with OTEL, I suppose?

I wonder if there are any other adapters for trace ingest instead of OTEL?


There are a few: I've played with https://uptrace.dev and https://openobserve.ai/ . OpenObserve is a single binary, so it's easy to set up.


Jaeger collector, perhaps, but then you'd have to use the Jaeger UI. SigNoz has a much nicer UI that feels more integrated, but last I checked it had annoying bugs, like not keeping the time selection when I navigated between screens.

I should definitely look up the tech more; I lazily commented, as SigNoz clearly states it ingests more than 50 different sources.

Yep, SigNoz is OpenTelemetry native. You can instrument your application with OpenTelemetry and send telemetry data directly to SigNoz.

so you don't have to look it up every time you need to change the DNS configuration.


This is akin to re-implementing the complete language.

Writing programming languages before AI was a bit of a daunting task; now it's way easier to grasp good first principles and dive through. It would still take time to get something production-ready, but that's definitely something that could happen.


Spot on!

The majority of the innovation here is in building enough rails (specifically around lowering Python's language features to native code) so that LLM codegen can help you transform any Python code into equivalent native code (C++ and Rust in our case).


Defensive measures are also enjoyable, as they give an increased tactical field; to put it another way, they increase the luck area.


Yes, but presumably dealing with just a couple of systems like this has to be a quick matter. These things are probably off right now, so it's just a matter of finding them, shooting the people guarding them and either destroying or taking them.


First time I've read about interning, and this is really cool; kudos for the article and the usage. Thanks for sharing!
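For anyone else new to the idea, here's a small Python sketch of string interning with `sys.intern`; the helper function is only there to build strings at runtime, defeating CPython's compile-time constant folding:

```python
import sys

def make(s, n):
    # Build the string at runtime so CPython cannot constant-fold it
    # into a single shared literal at compile time.
    return "-".join([s] * n)

a = sys.intern(make("token", 3))
b = sys.intern(make("token", 3))
assert a is b   # interning returns one shared object for equal strings

c = make("token", 3)
d = make("token", 3)
assert c == d
print(c is d)   # typically False in CPython: equal but distinct objects
```

Equality checks on interned strings degrade to cheap pointer comparisons, and repeated values stop costing memory twice.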


Formatting is like the dot on the i; there are 200 other small details that are just completely off-putting to me:

- naming conventions (AIs are lazy and tend to use generic names with no meaning, such as "Glass" instead of "GlassProduct")
- error-management conventions

But the most troublesome to me is that it is just "pissing" out code, with no afterthought about the problem it is solving or the person it is talking to.

The number of times I have to repeat myself just to get a stubborn answer with no discussion is alarming. It does not benefit my well-being and is annoying to work with, except for a bunch of exploratory cases.

I believe LLMs are actually the biggest organized data heist. We believe that those models will get better at solving their jobs, but the reality is that we are just giving away code, knowledge, and ideas at scale, correcting the model for free, and paying to be allowed to do so. And when we look at the 37% minimum hallucination rate, we can more easily understand that the actual thought comes from the human using it.

I'm not comfortable having to argue with a machine and have to explain to it what I'm doing, how, and why - just to get it to spam me with things I have to correct afterwards anyway.

The worst is, all that data is the best insight on everything. How many people ask for X? How much time did they spend trying to do X? What were they trying to achieve? Who are their customers? Etc.


I have implemented OTEL over numerous projects to retrieve traces. It's just a total pain and I'd 500% skip it for anything else.

