And if you time travel back to the 90s, this is what Amiga owners with 1 MB of RAM said about PC/Windows users needing 8, 16 or 32 MB of RAM to paint a few icons on the monitor.
But no one listened then, because RAM was cheap and you should not stand in the way of "progress".
So here we are, needing gigabytes to paint a single pixel. Congratulations to everyone who chose bloat, you won.
For some people, when you are not taking over the whole machine (as you would in demos and games), replacing the OS with something that gives you memory protection and virtual memory, uses all the RAM for caches, talks IPv6 and so on is kind of neat. It will be a somewhat slow Unix box, but it is still that same machine doing both kinds of tasks for you.
It does. You make certain claims in your text, and the parent is asking how to test your alternate theory against the perceived reality, to see which of the two is true.
I understand why you consider his question relevant. At the same time, it is worth making a clear distinction: OP does not formulate an alternative empirical explanation of physical reality, but rather a philosophical reflection on the consequences of the simulation assumption itself. In this context, the question of experimental testability is generally meaningful, but it misses the point here because it presupposes a scientific hypothesis that OP does not even propose. His objection would be justified if OP were to claim truth in the scientific sense — but he does not.
If you are writing a HelloWorld-webscale daemon from scratch, then counting added lines is probably "ok", but on an existing large project like Linux (for instance), you would be well off keeping the people who have managed to retain functionality while removing lines. Old projects have a tendency to accumulate a lot of old cruft that tends to stick around (Chesterton's fence and all that), but someone clever enough to rewrite and remove old useless code is a net win for you. So I agree: if you fire some percentage based on most lines committed, you either have a very recent from-scratch project or the measurement is stupid.
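For what it is worth, that measurement is also trivial to compute, which is probably part of its appeal. A rough sketch in Python of what "most lines committed" amounts to (assumes a local git checkout; the script is just for illustration, not something any company is known to actually use):

    # Tally lines added/removed per author from `git log --numstat`.
    # Binary files (reported as "-") are skipped.
    import subprocess
    from collections import defaultdict

    def lines_per_author(repo="."):
        out = subprocess.run(
            ["git", "-C", repo, "log", "--numstat", "--format=--%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        added, removed = defaultdict(int), defaultdict(int)
        author = None
        for line in out.splitlines():
            if line.startswith("--"):
                author = line[2:]
            elif line and author is not None:
                parts = line.split("\t")
                if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                    added[author] += int(parts[0])
                    removed[author] += int(parts[1])
        return added, removed

    if __name__ == "__main__":
        added, removed = lines_per_author()
        for name in sorted(added, key=added.get, reverse=True):
            print(f"{name}: +{added[name]} -{removed[name]}")

Note that this happily ranks whoever vendored a third-party library or reformatted whitespace above the person who deleted ten thousand lines of dead code, which is exactly the problem.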
How can we make programs this slow in 2025, when people have SSD and NVMe drives, 12, 24 or 64 threads in their CPUs, and memory that is lightning fast, and still we wait for crap to load?
How is it that no one pushes back on applications being this bad in the first place? We get to read articles on how someone made something 10x faster, but seldom anyone complaining about it being 1/10th of a decent speed to begin with.
I think it was MS Access 2.0 that had some text on the back about being up to 100x faster than the previous version, which to me reads as "the old one was crap as hell", but some marketing person thought it was a super quote to put on the packaging. Perhaps it works, perhaps not.
I think I would not. When you move from 160x200 with a few colors to 4K with 16M colors, it places new demands on the graphics which I don't think this solution would meet. It could, but it would probably require tons of people to make it happen.
Let's take the pilot walking at the start of Raid over Moscow: it was super well animated and designed around the sprite limitations a C64 has, but I am not so sure it would upscale into a convincing walking pilot, since some of the oddly placed pixels might grow into something vastly different.
> Currently with an SSD, when there’s a power cut, there’s about a 20% chance my router will require me to walk downstairs and plug in a keyboard, type “fsck” manually and press y at all the prompts.
> I’d settle for a default “boot anyway, press y for all fsck questions” mode on boot. I just don’t want to have to physically touch the thing.
Look up where fsck is run in /etc/rc and add the -y there.
If you can get an Edgerouter Lite 3, it will run fine(*) on that: serial console, three gigabit ports, fanless, not-x86, and probably available cheap if you look at used hardware sites.
(*) As far as its hardware goes, that is. It will not be winning any speed competitions, but cheap SBCs never will, do they?
Then again, the statement "TCP is outside the global lock" is very generalized. So many parts got out of the kernel lock in pieces (IP input, routing lookups, device packet handling) that it is hard to talk about it as one singular thing where you just flip a switch to make it MP-performant.
You could make the filesystem code MP and the disk device drivers MP, and then still run on an IDE disk, which forces all IO to be serialized one at a time, first-come-first-served, at which point all that work was for 'nothing'.
The same goes for networking: there are many, many layers and places that all need code changes before MP processing actually improves performance. You add fine-grained locks (which reduce performance at this stage), then prove that those fine-grained locks are sufficient for ALL use cases and all the layering violations that could possibly happen; only then can you unlock that single layer, and move on to the next if nothing acts up on any machine.
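To make the "one layer at a time" point concrete, here is a toy sketch (Python threading, made-up layer names, not how any real kernel is structured). Stage one is the single giant lock; stage two splits it per layer, so two CPUs can be in different layers at once, at the cost of more lock traffic per packet and a much bigger correctness argument:

    import threading

    # Stage 1: one big lock. Correct by construction, but only one CPU
    # can be anywhere in the stack at a time.
    giant_lock = threading.Lock()

    def handle_packet_giant(pkt):
        with giant_lock:
            device_input(pkt)
            ip_input(pkt)
            tcp_input(pkt)

    # Stage 2: one lock per layer. Two CPUs can now be in different
    # layers at once, but every packet pays three lock/unlock pairs,
    # and you still have to prove that no code path ever needs two
    # layers' state held at the same time.
    device_lock = threading.Lock()
    ip_lock = threading.Lock()
    tcp_lock = threading.Lock()

    def handle_packet_per_layer(pkt):
        with device_lock:
            device_input(pkt)
        with ip_lock:
            ip_input(pkt)
        with tcp_lock:
            tcp_input(pkt)

    # Placeholder layer implementations so the sketch runs at all.
    def device_input(pkt): pkt["dev"] = True
    def ip_input(pkt): pkt["ip"] = True
    def tcp_input(pkt): pkt["tcp"] = True

    if __name__ == "__main__":
        pkt = {}
        handle_packet_per_layer(pkt)
        print(pkt)

And that only shows the mechanics; the hard part described above is proving that stage two is actually correct everywhere before you ship it.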