I think that has a bigger impact on writes than on reads, but it certainly means there is some gap from optimal.
To me a 4 KB read seems anachronistic from a modern application perspective, though I gather 4 KB pages are still common in many file systems. That doesn't mean the majority of reads are 4 KB random in a real-world scenario, though.
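For context, here is a minimal sketch of what such a synthetic 4 KB random-read benchmark actually does (the file path, iteration count, and use of the POSIX-only os.pread are all placeholder assumptions, not anything from the discussion):

    import os
    import random

    # Issue 4 KiB reads at random aligned offsets -- the access pattern
    # most synthetic benchmarks model. Assumes a large pre-existing file.
    BLOCK = 4096
    path = "testfile.bin"
    size = os.path.getsize(path)

    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(10_000):
            offset = random.randrange(0, size - BLOCK)
            offset -= offset % BLOCK        # align to a 4 KiB boundary
            os.pread(fd, BLOCK, offset)     # one small random read
    finally:
        os.close(fd)

Whether that pattern resembles a given real workload is exactly the question being debated above.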
So if WhatsApp had an outage, but you needed to communicate to someone, you wouldn't be able to? Don't you have contacts saved locally, and other message apps available?
In most of Asia, Latin America, Africa, and about half of Europe?
You’d be pretty stuck. I guess SMS might work, but it wouldn’t for most businesses (they use the WhatsApp business functionality; there is no SMS fallback behind it).
Most people don’t even use text anymore. China has its own apps, but everyone else uses WhatsApp exclusively at this point.
Brazil has had several occasions where a judge punished WhatsApp by blocking it nationwide, and every time that happened, Telegram gained hundreds of thousands of new users.
Really? Please indicate your source for the claim that "most people don't even use text anymore," because I have never once in my life been asked about WhatsApp, yet I have implemented a few dozen SMS integrations, even after all the annoying rule changes where you have to ask "mother may I" and submit a stool sample to send an SMS message from something other than a phone.
Rare for people who don't deal with encoding and decoding, maybe.
To be clear, the codec implements the compression (or other encoding) algorithm. So when talking about codecs, we mean the implementation; when talking about the algorithm, we mean the encoding standard that the encoder or decoder implements.
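A small illustration of that distinction, using Python's zlib purely as an example of one codec (the choice of library and data here is mine, not something from the thread): DEFLATE is the algorithm/standard, while zlib, zlib-ng, miniz, etc. are codecs that implement it, and any conforming decoder can read another implementation's output.

    import zlib

    # zlib here is one codec (implementation) of the DEFLATE algorithm (the standard).
    # Any other conforming DEFLATE decoder could decompress this output.
    data = b"the same bytes, encoded per the DEFLATE standard " * 100
    compressed = zlib.compress(data, level=6)   # encoder side of the codec
    restored = zlib.decompress(compressed)      # decoder side of the codec
    assert restored == data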
The vaccines were all made for early variants. Once the Omicron variant came along, it had so many changes from the original strain that the effect of the vaccines was essentially unproven.
The vaccines were definitely useful and had a big impact, but unfortunately coronaviruses change too quickly.
> Modern computer systems are complex systems — and complex systems are characterized by their non-linear nature, which means that observed changes in an output are not proportional to the change in the input. This concept is also known in chaos theory as the butterfly effect,
This isn't quite right. Linear systems can also be complex, and linear dynamic systems can also exhibit the butterfly effect.
That is why the butterfly effect is so interesting.
Of course non-linear systems can produce a large change in output from a small change in input, because they allow step changes and many other non-linear behaviours; a tiny numerical illustration of that sensitivity is sketched below.
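The logistic map is the standard textbook example of this (my choice of parameters and iteration count here, not anything from the article): two starting points differing by one part in a billion end up nowhere near each other after a few dozen iterations.

    # Logistic map x_{n+1} = r * x_n * (1 - x_n), a classic non-linear system.
    # Two starting points differing by 1e-9 diverge completely.
    r = 3.9
    a, b = 0.200000000, 0.200000001
    for n in range(60):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
    print(abs(a - b))   # on the order of the attractor size, not 1e-9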
The S3 API allows requests to read a byte range of the file (sorry, object). So you could have multiple connections, each reading a different byte range, and the ranges would then need to be written to the target local file using a random-access pattern.
You can spawn multiple connections to S3 to retrieve chunks of a file in parallel, but each of these connections is capped at 80 MB/s, and the aggregate of these connections, while operating on a single file to a single EC2 instance, is capped at 1.6 GB/s.
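A rough sketch of that pattern with boto3 (the bucket, key, chunk size, and worker count are placeholders, and real code would want retries and error handling):

    from concurrent.futures import ThreadPoolExecutor
    import boto3

    # Hypothetical bucket/key/chunk size; tune worker count against the
    # per-connection and per-instance throughput caps mentioned above.
    BUCKET, KEY, LOCAL = "my-bucket", "big-object.bin", "big-object.bin"
    CHUNK = 64 * 1024 * 1024  # 64 MiB per ranged GET

    s3 = boto3.client("s3")
    size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]

    def fetch(offset):
        # Ranged GET: each connection reads its own byte range of the object.
        end = min(offset + CHUNK, size) - 1
        body = s3.get_object(Bucket=BUCKET, Key=KEY,
                             Range=f"bytes={offset}-{end}")["Body"].read()
        # Write the range at the matching offset in the local file (random access).
        with open(LOCAL, "r+b") as f:
            f.seek(offset)
            f.write(body)

    # Pre-size the local file, then fill the ranges in parallel.
    with open(LOCAL, "wb") as f:
        f.truncate(size)
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(fetch, range(0, size, CHUNK)))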
I believe that NetWare had NET SEND before Microsoft had any networking at all, but maybe I’m wrong. Certainly NT had a NetWare-compatible stack, but that was well after NetWare blazed the trail.