I think it's a very interesting design choice. I haven't read the code, so maybe you already thought of this, but one idea that comes to mind is that instead of calling realloc with new_size, you call it with f(new_size), where f is some function that rounds up to discrete steps. This should ensure good asymptotics, since realloc can then realize that the requested allocation size is identical to the current one and nothing needs to be done.
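Roughly what I have in mind, as a minimal sketch (the names and the power-of-two stepping are made up, not taken from your actual code):

    #include <stdlib.h>

    /* Round n up to the next power of two (with a small minimum), so that
     * many different logical sizes map onto the same allocation size. */
    static size_t round_up(size_t n) {
        size_t step = 16;
        while (step < n)
            step *= 2;
        return step;
    }

    typedef struct {
        char  *data;
        size_t len;   /* bytes in use */
    } buf_t;

    /* Grow the buffer to new_len bytes. realloc is called with the rounded
     * size, so most calls request the same size as the current block and
     * the allocator can return immediately. */
    static int buf_grow(buf_t *b, size_t new_len) {
        char *p = realloc(b->data, round_up(new_len));
        if (p == NULL)
            return -1;
        b->data = p;
        b->len  = new_len;
        return 0;
    }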
However, one possible issue is if someone pushes and pops repeatedly right at a boundary where f increases in value. To address that you would have to use more advanced techniques and, I think, "cheat" by inspecting internal structures of the allocator.
Edit: malloc_usable_size could be used for this purpose I think.
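Something like this, assuming glibc (malloc_usable_size is non-portable, and this is just a sketch):

    #include <malloc.h>   /* malloc_usable_size (glibc) */
    #include <stdlib.h>

    /* Grow an allocation only when the block the allocator actually handed
     * out is no longer big enough. Pushing and popping around a boundary of
     * f then stays inside the existing block instead of realloc'ing on
     * every size change. */
    static void *grow_if_needed(void *p, size_t needed) {
        if (p != NULL && malloc_usable_size(p) >= needed)
            return p;                  /* current block already covers it */
        return realloc(p, needed);     /* still worth rounding `needed` up */
    }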
> You have to be of the right genetic origin because of the source data
Can you explain what this means? After thinking it over, the most plausible reading to me is that they think the results will not generalize to origins other than the ones they have data for?
I think I would make a color triangle, except I would remove blue from it, and instead have the corners be three different colors in the red-green part of the spectrum. I'd guess 530, 545, and 560 nm.
I think they could have used another alternative if they had been given a generous timeline to switch over. The lack of one seems to indicate that the goal was to sabotage them rather than to have an amicable separation.
This doesn't match what I experienced growing up in the 90's. What I saw was girls not wanting to involve themselves with computing because it was seen as nerdy and boring and for weird people. Only when people started realizing how much FAANGs paid and how nice the benefits were did they start feeling they wanted to get in on that. Those are just my personal observations though.
True enough for the 90's, but the 90's came long after they were pushed out in the 60's & 70's. That's an entire generation of separation. It's an entirely different world.
HTML tables are very different from Excel tables. Fundamentally, HTML tables were used because rows are a good way to split up the page vertically (you could have one row for the header, one for the content, and one for the footer). And columns are good for splitting it up horizontally (you could have one column for the side panel, one for the content, and the last one for an obnoxious blinking ad).
While saying "hardware issue not my fault not my problem" is a valid stance, I'm thinking that if you hear it again and again from your users, maybe you should consider if you can do more. Verify the file was written correctly is a low hanging fruit. Other possibilities is run some s.m.a.r.t. check and show warning, or adding redundancy to recover from partial failure.
I think the failure mode that is happening for users/devs here is bit rot. It's not that the device won't report back the same bytes even if you disable whatever caching is happening; it's that after some amount of time T it will report the wrong bytes. Some file systems have "scrubs" they run to automatically find these errors and sometimes attempt to repair them (ZFS can do this).
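The scrub idea boils down to roughly this (a sketch with a stand-in FNV-1a checksum, not how ZFS actually stores or repairs anything): record a checksum when the data is written, re-read everything periodically, and compare.

    #include <stdint.h>
    #include <stdio.h>

    #define FNV_OFFSET 0xcbf29ce484222325ULL
    #define FNV_PRIME  0x100000001b3ULL

    /* Incremental FNV-1a, standing in for the per-block checksum a real
     * file system records at write time. */
    static uint64_t fnv1a_update(uint64_t h, const unsigned char *p, size_t len) {
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= FNV_PRIME;
        }
        return h;
    }

    /* Re-read a file and compare against the checksum recorded when it was
     * written. A real scrub would try to repair from a redundant copy on a
     * mismatch; this sketch only reports it. */
    static int scrub_file(const char *path, uint64_t recorded) {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;
        unsigned char buf[4096];
        uint64_t h = FNV_OFFSET;
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            h = fnv1a_update(h, buf, n);
        fclose(f);
        return h == recorded ? 0 : 1;   /* 1 = bit rot detected */
    }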
I'm the author of HashBackup. IMO, silent bitrot is not really a thing. I say this because every disk sector written has an extensive ECC recorded with it, so the idea that a bit can flip in a sector and you get bad data without an I/O error seems extremely unlikely. Yes, you could have buggy OS disk drivers, drive controllers, or user-level programs that ignore disk errors. And yes, you could have a bit flip on magnetic media causing an I/O error because the data doesn't match the ECC.
I believe that using non-ECC RAM is a potential cause of silent disk errors. If you read a sector without error and then a cosmic ray flips a bit in the RAM containing that sector, you now have a bad copy of the sector with no error indication. Even if the backup software does a hash of the bad data and records it with the data, it's too late: the hash is of bad data. If you are lucky and the hash is created before the RAM bit flip, at least the hash won't match the bad data, so if you try to restore the file, you'll get an error at restore time. It's impossible to recover the correct data, but at least you'll know that.
The good news is that if you back up the bad data again, it will be read correctly and be different from the previous backup. The bad news is that most backup software skips files based on metadata such as ctime and mtime, so until the file changes, it won't be re-saved.
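To illustrate, the skip test is typically something like this (just a sketch, not HashBackup's actual logic): if the recorded size and mtime match, the contents are never read again, so data that was saved bad stays bad in the backup.

    #include <stdbool.h>
    #include <sys/stat.h>

    /* Typical incremental-backup skip test: if size and mtime match what
     * was recorded last time, the file's contents are never read again,
     * so a bad copy in the backup is never replaced until the file changes. */
    static bool needs_backup(const char *path, const struct stat *recorded) {
        struct stat now;
        if (stat(path, &now) != 0)
            return true;                  /* can't stat it: play it safe */
        return now.st_size  != recorded->st_size ||
               now.st_mtime != recorded->st_mtime;
    }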
We are so dependent on computers these days, it's a real shame that all computers don't come standard with ECC RAM. The real reason for that is that server manufacturers want to charge higher prices to data centers for "real" servers with ECC.
I would expect that to activate the immune system. "the unique components of mitochondria, when exposed, reveal their prokaryotic history and are recognized as foreign by innate immune receptors triggering an inflammatory response." https://pmc.ncbi.nlm.nih.gov/articles/PMC6218307/
Maybe if you suppress the immune system, introduce working mitochondria, and then stop taking the immunosuppressants, any mitochondria that are still outside cells get cleaned up and the ones that got absorbed are shielded and can do their job.