For a lot of what I do, the CPU on an older laptop is just fine. So, I too use really old laptops.
That said, I am upgrading laptops that sold with 2 - 4 GB of RAM to 16 GB, or at least 8 GB. Often I am replacing an old HDD (maybe as slow as 5400 RPM) with a faster SSD. That SSD also juices the battery life. So, what I am using is a lot nicer than what the original owner had to deal with.
Many laptops today will not age as well; in ten years, that soldered-on 8 GB is really going to be a bummer.
I use Linux a fair bit, which makes a difference. Linux is fantastic on older hardware and the software itself is totally up to date. I use old Mac laptops a fair bit. The version of macOS they support would be unusable, as would the applications that still run on it. The same is becoming true of Windows.
That soldered ram is indeed a killer, and the transition can be quite sudden. We have a 2014 macbook air with 4gb of soldered ram, and my kid was fine using it for office and browsing, until at one point he suddenly wasn’t. The ram footprint of that software together with macOS updates passed some threshold where it just became too slow, so now he’s using one of my other old laptops, a windows machine with an even slower cpu, but with 8 gb ram, and that one is still ok, for now.
Apple’s talk about sustainability is just talk as long as they bake in poison pills like soldered ram and soldered storage. Those machines are unupgradable and unrepairable.
The thin and light market is almost entirely soldered ram now, so now I just price in the 16 gb upgrade as the base config price when looking at those.
The real scandal though, is that you need more than 4GB of ram to do some office and browsing. I mean, you update the software and it just becomes worse?
Honestly I feel bad for most people who ended up with a 4gb macbook, and Apple should feel some shame for continuing to sell them for so long, not terribly unlike the Vista debacle. 4gb was an awful choice already in 2014, and since 16gb was (I think) the max and still kinda mid, that was the obvious choice; 8GB would have still been fine for an average user. I have no idea if 64gb is the obvious choice now, but the upgrade is so obscenely expensive that I'll be holding onto my intel thing for a while longer. Not as long as I can, but until the cost to value ratio becomes more compelling, or something like a cellular modem comes around.
It's about price discrimination. If you offer several versions of a product to discover the maximum each consumer is willing to pay, then the cheapest versions need to be meaningfully worse to drive those who can pay to the more expensive versions.
True, but at a certain point you're just selling a shitty product with no longevity and justifying it with behavioral manipulation for profit. Same with the 16 GB iPad or the meager iCloud drive storage tiers. They were fine I guess when they first came out, whatever, but the legitimacy of that tier shouldn't have lasted more than a small number of years.
They finally catered to devs in the last few years but it’s already too late for many of us because macOS is garbage now. The ARM transition makes it worse. I use Linux now.
No idea what you mean. The M1 MacBook Air is the best machine Apple has produced in years and finally convinced me to update my old personal laptop. The battery life is insane, despite the performance being better than any of the high-end Windows laptops I have used for work during the past decade. Plus it was correctly priced at launch, something very rare for Apple, which usually wildly overprices everything.
Ya, I'm not sure where people are coming up with this extreme opinion. If they go to Linux, maybe they're just trying to follow a path that's been restricted through layers of system protection or something. One of my only gripes with the new MacBook Pros is that the RAM feels insultingly expensive, and to go past 32 GB you have to upgrade the CPU (all in, it's over $1000 to get there).
Yeah it’s top-tier hardware, but the software limits it. My main issue has been with Docker, which has been discussed to death here on HN in various threads.
Very much disagree, there's not a chance in hell I'd choose linux over macos and imo—with a few exceptions—it's better than ever, and gradually becoming a more approachable platform to build things for.
Although I'm fine with linux for servers, when something breaks it can be a massive pain in the ass to track down why, until you've spent years day-in, day-out troubleshooting particular software to the point where you know immediately what the issue is. This just doesn't happen at all for me personally on macOS in everyday use.
I was listening to a Python podcast a few months ago where they were hating on macOS/Macs, and it really resonated with me. They were saying it was becoming a common sentiment to move away from Macs, and this is after the throttling issues seen in Intel Macs.
Anyway they’re just opinions. I can list the reasons macOS makes my life harder, or doesn’t do things it used to do, for example display scaling on 1440p displays, but ultimately it doesn’t matter, because it works for you!
What do you mean by display scaling on 1440p displays? I'm using a 2560x1600 display atm but it's just running at native resolution.
I'm not here to vehemently defend macOS, and ultimately it is just my opinion and personal experience, but I see a lot of "macOS isn't what it used to be" or complaints about some specific gripe. They do happen rarely, but I've used other OSes and don't see how they'd be more compelling. What's on the list for you?
So you won’t be able to scale the display, meaning 2x all UI elements. You used to be able to, but Apple removed that feature in the last couple of years. There was a sort of hacky workaround you could do on x86 Macs, but it doesn’t work on ARM Macs. It’s just dumb, man. All I can fathom is they want to sell more “5K” displays.
I find it harder than ever to just do stuff. SIP makes things harder, and I get that it improves security, but software engineers don’t care for stuff that just gets in our way. Gatekeeper used to be just a mild inconvenience.
If I want to downgrade a Mac that’s on a beta version of macOS, I have to have another Mac, connect that Mac to it, and run software to “restore” it. I can’t just plug in a flash drive with the macOS version I want.
Night Light doesn’t work on DisplayLink displays. No real reason for that.
It’s 2023 and I still can’t adjust the brightness or volume of external displays (sometimes it works with something like Lunar). This is shit that Windows and Linux have been doing for 15+ years.
Not a lot of people know this but DisplayPort Daisy Chaining doesn’t work on macOS. Not supported. Never will be.
Screen recording doesn’t include audio. I’m sure there’s a way around it but come on.
Yeah it seems decent enough for non-devs but iCloud has always been buggy for me and everyone I know that uses it. I don’t have a Mac anymore but I do have an iPhone.
Yeah, I'm so glad I paid top dollar for the 8 GB version of the OG X1 Carbon back in 2012 - and got the i7 with the biggest SSD option, 256 GB.
I also have a newer Nano for work, but prefer the keyboard of the old. If I had picked the 4GB model then, it would have been pretty useless today.
Today I wouldn't buy a personal laptop with less than 32GB of memory. The X1 Nano was hard locked at 16GB, and I guess I didn't quite anticipate how much I'd be using Docker. :(
(It's super lightweight though, to the point where it's impossible to know if I've packed the laptop in my bag by weight alone. Hopefully it'll see an upgrade soon.)
Compute power has never been an issue for me, but I generally try to max out the RAM as much as my budget allows. This is why I still have a 2010 MacBook that is more than fine. It started with 4 GB and was barely usable for heavier tasks even when it came out, but after dropping 16 GB into it, it has been smooth sailing for over a decade now.
That said, my daily runner is now a T420 (2011-ish) in near-mint condition that I bought for $100; it came with 6 GB of RAM. I swapped the HDD for an SSD and just threw Linux Mint on here - I disable the swap file so that it doesn't hammer the drive. So 6 GB of RAM is all it can work with, but that has never been an issue.
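(If you want to do the same: "sudo swapoff -a" disables swap for the running session, and commenting out the swap entry in /etc/fstab keeps it off across reboots. That's the usual approach on Mint and most other distros, though the details can vary by setup.)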
But the original HDD was running Windows 7, and oh my gosh the bloat they had installed! This poor thing was in pain with that junk on board! It is amazing how many times an old laptop has come my way, unusable because of how the OS treats the system and how users keep dumping more onto it than is reasonable. But swap the disk drive and the OS and you would never recognize some of these machines.
I once came across some netbook, a Toshiba Presario (?) - a 1.4 GHz Core Duo with 3 GB of RAM running Windows Vista. The poor thing! But do the old one-two on it and it was a brilliant little machine for plucking away on; it could easily get 6-7 hours on battery as well.
> I swapped the HDD for an SSD and just threw Linux Mint on here - I disable the swap file so that it doesn't hammer the drive.
If that's for the longevity of the drive, I don't think that's really even necessary on a personal device, based on my limited experience. Unless you're aiming for a lifespan measured in decades or would expect to be swapping heavily if you had swap.
I have a basic consumer-grade Samsung 850 EVO SSD that I've been using daily in my laptop since 2015 or 2016 or so. I've had zero problems with write performance so far, and the SSD reports a wear leveling count of ~140, which seems to indicate the (average?) number of writes per block so far. That value normalizes to a SMART value of 93 out of 100. While SMART may not be much of a reliable indicator of anything, if the number of blocks written is anywhere near realistic, that value seems to make approximate sense if TLC NAND lifetime expectancy is rated at ~1000 writes per block.
Samsung also seems to have a warranty of up to 5 years or 150 TB written for this SSD (depends on drive capacity). The drive reports a total of 23.7 TB written so far.
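For the curious, the sanity check is just division. A rough sketch (the 250 GB capacity here is purely an assumed figure for illustration, since it isn't stated above; the ~1000 P/E cycles is the TLC ballpark already mentioned):

    # Back-of-the-envelope wear check. ASSUMPTION: a 250 GB drive
    # (not stated in the post) and ~1000 P/E cycles for TLC NAND.
    capacity_gb = 250          # ASSUMED drive capacity
    written_tb = 23.7          # total bytes written, from SMART
    rated_cycles = 1000        # rough TLC endurance per block

    full_drive_writes = written_tb * 1000 / capacity_gb   # ~95
    life_used = full_drive_writes / rated_cycles          # ~9.5%
    print(f"~{full_drive_writes:.0f} average writes per block, "
          f"~{life_used:.0%} of rated endurance used")

Wear leveling spreads writes unevenly, so a reported wear count somewhat above the average (~140 vs. ~95 here) is in the right ballpark.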
I haven't been doing a lot of data-heavy work on the device but I haven't really been particularly careful with the SSD either. I've got a swap that's seen some actual use (8 GB RAM), and I also hibernate semi-regularly. I've been trying to keep at least ~15 percent of the capacity free to help with wear leveling but that's about the only conserving I've been doing.
Of course it might be that my drive is just waiting to suddenly start failing writes but I'm not really expecting that.
And your disabling swap might be for some other reason, of course.
But if this is par for the course for consumer-grade SSDs in general, I wouldn't really be worried about the effects of swapping on life spans unless there's some particularly heavy hammering planned.
> Unless you're aiming for a lifespan measured in decades or would expect to be swapping heavily if you had swap.
This one. My VERTEX3 is running fine, though it's not TLC, of course.
And with auto-leveling all you really need (if you care that much) is to.. increase the swap size, so there would be less evictions from the swap => less writes.
But anyway, even modern TLC drives would be fine for a decade if this is just a machine used for a couple of hours every day. You need something like a constant 3 MB/s, 24/7, to even get close to their rated TBW.
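Spelled out (the 3 MB/s is just a hypothetical constant rate; compare the result to whatever TBW your particular drive is rated for):

    # The arithmetic behind the claim: a constant 3 MB/s, 24/7.
    rate_mb_s = 3
    per_day_gb = rate_mb_s * 3600 * 24 / 1000     # 259.2 GB per day
    per_3yr_tb = per_day_gb * 365 * 3 / 1000      # ~284 TB in 3 years
    print(f"{per_day_gb:.0f} GB/day, ~{per_3yr_tb:.0f} TB over 3 years")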
> And with auto-leveling all you really need (if you care that much) is to.. increase the swap size, so there would be less evictions from the swap => less writes.
How does that work out? To where would data be evicted from swap?
Increased swap size might of course help with wear leveling if that means there's just more unallocated space on the device.
> But anyway, even modern TLC drives would be fine for a decade if this is just a machine used for a couple of hours every day. You need something like a constant 3 MB/s, 24/7, to even get close to their rated TBW.
Mine hasn't even been just a couple of hours every day. Sure, during a regular work week it might not be used a whole lot, but I've used it for entire days e.g. when I was doing my master's full-time.
Again, not a whole lot of memory-intensive work, but not intentionally conservative for a general-purpose laptop either. Running just about any kind of a game often means some of the memory of the umpteen browser processes gets swapped out.
This is actually an interesting question, but it's quite obvious once you know how it works.
Consider this scenario:
Some app loads data (it goes to RAM); some time later this data is moved to the swap. Now the app tries to access it, and there are two ways this can go:
- If there is no free RAM (e.g. other apps have locked their physical memory), the access goes through the swap.
- If there are free memory blocks, the OS copies the accessed data back to RAM and the app works fast... but until the data is modified there is no need to mark the copy in the swap as stale, so the OS can drop the RAM copy at any point and redirect the app back to the swap.
Now consider another app being pressured into the swap at the same time:
If there is not enough swap space, then the first app, which has [a valid, synced] copy of its data in both RAM and swap, is just left with the copy in RAM, and the swap space is freed to accommodate the data of the second app. If turntables (sic), the process repeats, just with the two apps trading places.
But if there is enough swap space available, the second app's data is just written to the spare swap blocks, and both apps' data can be read from the swap at any time.
Quite simple.
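If it helps, here's a toy model of that scenario in Python - emphatically a sketch of the idea, not how any real kernel's VMM is implemented. Two apps with read-only 4-page working sets alternate on RAM that only fits one set; we count physical writes to the swap device for a small swap vs. one big enough to hold both sets:

    # Toy model only -- illustrates the mechanism described above, not any
    # real kernel. Pages are read-only, so a copy kept in swap never goes
    # stale and can satisfy a later swap-out for free.
    RAM_SLOTS = 4

    def run(swap_slots, rounds=100):
        sets = [{f"A{i}" for i in range(4)}, {f"B{i}" for i in range(4)}]
        ram, swap = set(), set()  # pages resident in RAM / valid in swap
        writes = 0                # physical page writes to the swap device

        def touch(page, working_set):
            nonlocal writes
            if page in ram:
                return
            if len(ram) >= RAM_SLOTS:
                # evict a RAM page the running app doesn't need right now
                victim = next(p for p in ram if p not in working_set)
                ram.remove(victim)
                if victim not in swap:       # no valid copy kept in swap
                    if len(swap) >= swap_slots:
                        # swap full: drop a duplicate, i.e. the copy of a
                        # page that is (or is about to be) resident in RAM
                        dup = next(p for p in swap if p in ram or p == page)
                        swap.remove(dup)
                    swap.add(victim)
                    writes += 1              # the page had to be rewritten
            ram.add(page)  # swap the page in; its swap copy, if any, stays

        for _ in range(rounds):
            for ws in sets:
                for page in sorted(ws):
                    touch(page, ws)
        return writes

    print(run(swap_slots=4))  # small swap: copies keep getting dropped,
                              # so pages are rewritten every single round
    print(run(swap_slots=8))  # both sets fit: 8 writes total, a one-time cost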
> Increased swap size might of course help with wear leveling if that means there's just more unallocated space on the device.
This is a thing too, ofc, but the main point is having less 'overwrites' per usable space.
> Mine hasn't even been just a couple of hours every day.
It's even 'worse' than that. I've seen a server with 850/860 Evos and LUKS (so the worst case: cheap drives and all writes are new); even after 3 years they were only at 50% 'SSD health'.
TLC is the worst thing that has happened to SSDs as a technology, but people forget what 3 MB/s * 3600 * 24 is: a whopping 259 GB a day, and you need to write ~259 GB a day for 3 years to totally deplete the [official] write endurance (YssdMV).
Right. So, when swapping pages back into RAM, a typical OS will not actually remove the pages from swap despite them now being in RAM as well? So if the memory needs to be freed again for another purpose before the pages in memory have been modified in the interim, the OS can just drop the pages from RAM without having to write them into swap again?
And thus thrashing causes more swap writes when there's limited swap space.
That's the only scenario I can think of. I didn't think about pages not being removed from swap when swapping in.
> I've seen a server with 850/860 Evos
Well, that's desktop-grade hardware in a server. Might work, but if it doesn't, you get what you asked for.
> and LUKS (so the worst case: cheap drives and all writes are new)
How does LUKS affect that?
> people forget what 3 MB/s * 3600 * 24 is: a whopping 259 GB a day, and you need to write ~259 GB a day for 3 years to totally deplete the [official] write endurance (YssdMV).
Right, this was my point. That's not an impossible amount of writes but it is a lot more than almost anybody does on a laptop/desktop.
> OS will not actually remove the pages from swap despite them now being in RAM as well
Now the hard part: I don't know for sure, because I'm not that versed in OS VMM internals, but it makes sense even if my mental model is from 1993. This is the basics, though, and I doubt it works some other way. I would be happy if someone more familiar would chime in (with the reason why or why not), but in my experience this is what's happening.
> Well, that's desktop-grade hardware in a server. Might work, but if it doesn't, you get what you asked for.
Extremely often, and it works not just 'quite well' but well enough. Similar servers (literally the same, just without LUKS) were fine, with > 90% 'SSD health'.
> How does LUKS affect that?
Every write is different, because the bytes on the storage are already encrypted. I.e. you write the same bytes to the same block on the FS, but the underlying, encrypted bytes are not the same => new write.
If the OS didn't keep the pages around in swap after they were swapped back into RAM, I don't see how having more space in swap could reduce writes to it.
> Every write is different, because the bytes on the storage are already encrypted. I.e. you write the same bytes to the same block on the FS, but the underlying, encrypted bytes are not the same => new write.
How are they not the same as the previous encrypted version of the same bytes if the encryption key stays the same?
Are you sure TRIM/discard just wasn't enabled on the LUKS?
> If the OS didn't keep the pages around in swap after they were swapped back into RAM
Well, why shouldn't it? Don't forget, while the RAM can be prepared (zeroed) fast, the on-disk swap can't be prepared that fast (compared to RAM, ofc), and you're already under some memory and disk constraints (that's how you got to be using the swap in the first place), so adding another workload for cleaning up the swap is... not a good thing.
If you only did 'zero on allocate', then sooner or later you would be in a position where you would need to stall the whole system until enough pages in the swap are available. And neither the users nor the programs like that.
> I don't see how having more space in swap could reduce writes to it.
Swap is just 'slow memory' part of your overall virtual memory allocation of the whole OS. If you have a small swap then you would be pressured to evict the data from it more often. If you have a big swap then less pressure => less evictions => less overwriting the same LBAs assigned to the swap file/partition => less writes overall.
> How are they not the same as the previous encrypted version of the same bytes if the encryption key stays the same?
Even if you write 00000000 to the filesystem block (a typical situation would be deleting a file and on the flash/SMR drives you would just call TRIM on those bytes^W LBAs) it's not 00000000 down there, it's some ciphertext.
> Are you sure TRIM/discard just wasn't enabled on the LUKS?
See above, no such thing as TRIM on an encrypted storage device.
Yeah, maybe it should. I don't know how exactly that's typically implemented, hence the question mark.
> Don't forget, while the RAM can be prepared (zeroed) fast, the on-disk swap can't be prepared that fast (compared to RAM, ofc), and you're already under some memory and disk constraints (that's how you got to be using the swap in the first place), so adding another workload for cleaning up the swap is... not a good thing.
I doubt any of that would typically involve the OS specifically zeroing anything apart from just marking the corresponding parts of the swap space as free. And that's probably in the in-memory data structures keeping track of pages in swap. No need to zero any of the page contents, just mark those areas of swap as unallocated.
An OS might want to keep the pages around for the sake of having them still around, but unless otherwise shown, I kind of doubt there's a great cost to the deallocation itself.
> If you have a small swap then you would be pressured to evict the data from it more often.
I get that if indeed the OS keeps pages available (and in the books) in swap after swapping them back into RAM, having a larger swap can reduce writes in case those same pages end up getting swapped back out again before they've been modified. It may be that's how it works.
I'm not sure I follow the logic if that's not the case. I don't understand why the OS would "evict" pages from swap just because swap is getting full -- there's nowhere to evict them to except RAM.
If you've got thing A currently in swap and you're needing to "evict" it to make space in the swap for thing B, that means you're wanting to get rid of B in RAM. That means you're already needing more space in RAM for some third thing C, so why would you solve that by swapping pages of thing A from swap into RAM?
My understanding is that swapping pages back in would be initiated by needing those pages back in RAM, so that's not really a case of eviction.
Anyway, since it doesn't seem like either of us knows for an actual fact how it works in any particular OS, I doubt speculation will lead to anything better.
I believe what you've seen as a phenomenon, I'm just not sure the explanation of that phenomenon makes quite enough sense to me.
> See above, no such thing as TRIM on an encrypted storage device.
TRIM may have security implications for encrypted drives, e.g. in terms of plausible deniability for forensics, that's true, and I didn't think of that. Though I'm not sure if that's what you meant.
No TRIM - no info about which blocks could be safely and fully reused - more overall wear for the SSD.
> I doubt any of that would typically involve the OS specifically zeroing anything apart from just marking the corresponding parts of the swap space as free
I'm not sure about the swap but memory nowadays is definitely zeroed before being allocated[0]; not only is it safer from a security standpoint, it's way more stable. Imagine a buffer-overrun bug that wouldn't manifest itself if the PC was freshly booted (i.e. all memory is zeroes), but that, after a couple of hours, writes 0x10 blocks but reads 0x100 and executes them? With all the garbage that was left there by previous processes...
Obviously[1], zeroing the swap is an expensive and questionable practice; if you are swapping out from RAM, you overwrite the corresponding parts anyway...
Guess I was over(under?)-thinking the process.
> Anyway, since it doesn't seem like either of us knows for an actual fact how it works in any particular OS, I doubt speculation will lead to anything better.
Of course, but I do remember how things worked back in the day (when memory was constantly constrained because 16 MB isn't enough), and you would definitely see processes A and B swapping in and out, without any C, because you just used A and B, occasionally switching between them.
> No TRIM - no info about which blocks could be safely and fully reused - more overall wear for the SSD.
It's definitely possible to enable TRIM on an encrypted device, at least on LUKS. It requires specific support from the LUKS layer, though; that support has been there for the last decade or so, but it still isn't enabled by default.
I originally hadn't remembered the security implications or that it's not enabled by default; I just remembered it was possible, so "LUKS, therefore no TRIM" didn't seem to make sense. But yeah, it's not enabled by default.
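(For the record, and from memory rather than from the docs: the relevant switch is --allow-discards when opening the device with cryptsetup, or the "discard" option in the /etc/crypttab entry - e.g. a line like "cryptdata UUID=... none luks,discard", where the name and UUID are placeholders. The security caveat is that TRIM reveals which blocks of the encrypted device are unused.)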
> I'm not sure about the swap but memory nowadays is definitely zeroed before being allocated
Yeah, good point about switching between processes. Not sure it applies to swap, though, and I doubt that's zeroed, exactly because of what you said.
> Of course, but I do remember how things worked back in the day (when memory was constantly constrained because 16 MB isn't enough), and you would definitely see processes A and B swapping in and out, without any C, because you just used A and B, occasionally switching between them.
Yes, absolutely, that can happen. But if it's just between A and B without any third C, the entire need to move stuff between RAM and swap is initiated by the need to get A's pages into RAM in the first place (because A is trying to access them). Not by needing more space in swap for B's pages.
Having to write B's pages into swap is caused by memory pressure and the need to fit A's pages into constrained RAM rather than A's pages being "evicted" from swap being caused by a need to get B's pages into swap. You're already needing to get A out of the swap and into the RAM -- that's the initiating reason for the whole deal.
Once the OS switches between the processes again and some of B's pages need to be accessed in RAM, the OS may then need to write A's pages into swap again in order to make room for B's pages in RAM. But again it's due to memory pressure.
I don't see how swap size affects the amount of writes required in that scenario.
Unless, of course, the OS actually also keeps A's originally swapped copy of a page still around in swap after reading it back into RAM. In that case having a large enough swap to fully contain all the dirty pages of both A and B might reduce the amount of swap writes. If A, in the original scenario, only reads data from the accessed pages between the switches and doesn't modify them in the meantime, and the OS has kept a copy of A's pages around in swap even after swapping the pages back into RAM, then of course when B runs again and its pages need to be read from swap back into RAM, the OS might be able to skip physically writing A's unmodified pages into swap again because their copies are already/still there.
If, in that scenario, the swap is not large enough to contain both A's and B's dirty pages, the OS would have had to drop the copies of A's pages from the swap to make room for B's, thus causing the need to physically write A's pages into swap again when switching back.
That's exactly what I was speculating about. Maybe that's what you meant by swap evictions?
I don't know if that's how OSes do it, though. It's possible but it would require some additional tracking of pages and their statuses.
If the OS does do that, then the size of the swap may potentially affect the amount of writes required to swap over time. Although it's only beneficial for pages that aren't modified in RAM between the swaps in and out.
If there is no such mechanism of keeping copies of pages around in swap even after reading them back into RAM, then I don't see how the size of the swap affects the amount of reads and writes.
You could buy them with either crappy panels or good high-resolution panels. Obviously most companies opted for the cheaper, crappier version; that's why most second-hand ThinkPads have them.
But you can often buy an LCD upgrade kit for cheap on Amazon/AliExpress.
I have a T420s that really struggles with video, e.g. YouTube. Watching any H.265 via VLC or mplayer is impossible. Also the battery is in a bad state and I haven't been able to find a good replacement that isn't equally poor or worse right from the start. It actually has two batteries, one sitting in what used to be the CD tray. Still, usage time is 3 hours tops.
It had been my daily driver for years, running Linux, but I had to replace it with something newer.
How is your T420 doing with all this? Did you update the CPU? Does it still have the stock battery?
It is still running a 2.4 GHz 2nd-gen i5. The battery is in decent condition, but because it was a second-hand machine, I am not sure what its life was like before I got it.
I do find it odd that it is struggling with video content; it doesn't seem to really push this thing to the limit even at 1080p, but it does drive the CPUs a fair bit. It is fun looking at the state of video decoding on Intel chips during that time period.
2nd gen is still fairly CPU-heavy. On 3rd gen the load is cut in half. On 4th gen it almost looks like your system is idle running that stuff. I have a Dell Optiplex Mini that runs a 2 GHz 4th-gen i5 and you can barely tell the CPU is doing anything due to the media acceleration. Cool to see in action.
On AliExpress, there is a USB-C adapter for the T420 power cable that will let it charge from a 65 W power bank (maybe even 20 W). No need to search for battery replacements anymore, and you can also throw away the original power brick.
Yeah, I replaced the battery on my t540p but it's still sub 1 hour life.
Replaced the hard drive and upgraded the RAM, so now it works pretty well as a desktop PC running Win 10.
It was running super slow for ages; fixed after I banished Google Drive backup. Maybe a compatibility issue with the newest Google software thrashing constantly on older OS/hardware.
I have a T420 in my drawer I've been considering doing this with. Years ago, before it was old, I used to dual-boot it with Mint and the power management wasn't great. How's your experience been? Mind you, that was with the original spinning drive. What spec SSD did you install? Sorry for all the questions.
The only thing that has aged, other than the HDD, which can be replaced, is the video card, which you can't upgrade. 4K monitors have become common and you need something fairly recent to drive them. 4K video too: my laptop struggles with certain 4K HEVC iPhone videos.
Actually my ZBook has an upgradable graphics card, but I can only upgrade it to a better model of the same age, so that's basically useless for compatibility with newer drivers. Nvidia and Nouveau are going to end support for cards from 2014 sooner or later.
I've been having a hard time getting Linux to work on older hardware ever since the majority of distros dropped 32-bit builds.
Then again, "older" for me means stuff from the Pentium through the Pentium 4 era. Seeing as "older" today means Sandy Bridge and the like, the moral of this little tale is I am a fucking old man angry at the kids on my lawn.
The x86-64 architecture was announced in 1999, with the first processor shipping in 2003. Comparatively, the 8088, introduced in 1979, was barely older at the time than the 64-bit Opteron is now.
Not as long as i386 devices still exist that are superior in at least one way to any amd64 or arm alternative. And what "superior" means depends on the use case. Some Mega Ryzen Uber Speed CPU with a Googol FLOPS that can't connect to anything is worthless to me.
I won't take any distro seriously that doesn't support at least i686, amd64, arm and arm64.
Compiling for four architectures can't be too much to ask. Shit software that can't be written portably enough to run on more than amd64 must be kicked off the repo. We need this pressure on developers if we're to maintain a modicum of code quality.
> Shit software that can't be written portably enough to run on more than amd64 must be kicked off the repo.
Convince me, why should I spend my time supporting i686? Just to accommodate a handful of people still running 32bit hardware?
Calling other people's work "shit" just because they don't spend their free time supporting the wishes (not needs!) of 0.1% of their users is rude and extremely entitled.
FWIW, I agree with you. But I think there is an argument to be made: the OpenBSD team has indicated that cross-architecture ports help them find bugs that otherwise might not be noticed if they were just targeting the usual suspects.