A lot has changed in the 25 years since gigabit wired ethernet was rolled out, even as wired ethernet itself stagnated due to greed.
Got powerlines? Well then you can get gbit+ to a few outlets in your house.
Got old CATV cables? Then you can use them at multiple gbit with MoCA.
Got old phone lines? Then it's possible to run ethernet over them with SPE and maybe get a gbit.
And frankly, just calling someone who wires houses and getting a quote will tell you if it's true. The vast majority of houses aren't that hard, even old ones. Attic drops through the walls, cables below in the crawlspace, behind the baseboards. Hell, just about every house in the USA had cable/dish at one point, and all they did was nail it to the soffit and punch it right through the walls.
Most people don't need a drop every 6 feet: one near the TV, one in a study, maybe a couple in a closet/ceiling/etc. Then those drops get used to put a little PoE 8-port switch in place and drive an AP, TV, whatever.
> Got old phone lines? Then it's possible to run ethernet over them with SPE and maybe get a gbit.
Depending on the age of the house, there's a chance the phone lines are 4-pair, and you can probably run 1G over 4-pair wire: if it's 4-pair it's probably at least cat3, and quality cat3 that isn't a max-length run through dense conduit is likely to do gigE just fine. If it's only two-pair, you can still run 100, but you'll want to either run a managed switch that you can force to 100M or find an unmanaged switch that can't do 1G. Otherwise you're likely to negotiate to 1G, which will fail because of the missing pairs.
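If the managed-switch route isn't available, you can also pin things from the computer side by limiting what the NIC advertises during autonegotiation. Here's a rough Linux sketch using the legacy SIOCETHTOOL ioctl; the interface name is a placeholder, and in practice the ethtool -s one-liner does the same job:

    // Sketch: keep autoneg on but only advertise 100BASE-TX full duplex, so the
    // link never tries to bring up 1000BASE-T over wiring with missing pairs.
    // Uses the legacy ETHTOOL_GSET/SSET interface; needs root to apply.
    #include <cstring>
    #include <cstdio>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <unistd.h>

    int main() {
        const char* ifname = "eth0";                 // placeholder interface name
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { std::perror("socket"); return 1; }

        ifreq ifr{};
        ethtool_cmd ecmd{};
        std::strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = reinterpret_cast<char*>(&ecmd);

        ecmd.cmd = ETHTOOL_GSET;                     // read current link settings
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { std::perror("ETHTOOL_GSET"); return 1; }

        ecmd.cmd = ETHTOOL_SSET;                     // write them back, trimmed down
        ecmd.autoneg = AUTONEG_ENABLE;
        ecmd.advertising = ADVERTISED_100baseT_Full; // drop the 1000BASE-T modes
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { std::perror("ETHTOOL_SSET"); return 1; }

        close(fd);
        return 0;
    }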
Gigabit ethernet "requires" 4 pairs of no-less-than cat5. The 100mbps standard that won the race -- 100BASE-TX -- also "requires" no-less-than cat5, but only 2 pairs of it.
Either may "work" with cat3, but that's by no means a certainty. The twists are simply not very twisty with cat3 compared to any of its successors...and this does make a difference.
But at least: If gigabit is flaky over a given span of whatever wire, then the connection can be forced to be not-gigabit by eliminating the brown and blue pairs. Neither end will get stuck trying to make a 1000BASE-T connection with only the orange and green pairs being contiguous.
I think I even still have a couple of factory-made cat5-ish patch cords kicking around that feature only 2 pairs; the grey patch cord that came with the OG Xbox is one such contrivance. Putting one of these in at either end brings the link down to no more than 100BASE-TX without any additional work.
(Scare quotes intentional, but it may be worth trying if the wire is already there.)
(Disclaimers: I've made many thousands of terminations of cat3 -- it's nice and fast to work with using things like 66 blocks. I've also spent waaaaay too much time trying to troubleshoot Ethernet networks that had been made with in-situ wiring that wasn't quite cutting the mustard.)
> Neither end will get stuck trying to make a 1000BASE-T connection with only the orange and green pairs being contiguous.
They can get stuck, because negotiation happens on the two original pairs (at 1Mbps), and to-spec negotiation advertises the NIC capabilities and selects the best mutually supported option. Advertising fewer capabilities for retries is not within the spec, but obviously helps a lot with wiring problems.
The key thing with the ethernet wiring requirements is that most of the specs are for 100m of cabling with the bulk of that in a dense conduit with all the other cables running ethernet or similar. Most houses don't have 100m of cabling, and if you're reusing phone cabling, it's almost certainly low density, so you get a lot of margin from that. I wouldn't pull new cat3 for anything (and largely, nobody has since the 90s; my current house was built in 2001, it has cat5e for ethernet and cat5e in blue sheaths for phone), but wire in the wall is worth trying.
TIL that they can get stuck in no-man's-land with 2 pairs. That seems stupidly incompatible, and it isn't something I've witnessed myself, but it makes sense that it can happen.
My intent wasn't to dissuade anyone from trying to make existing cat 3 wire work (which I've never encountered in any home, but I've not been everywhere), but to try to set reasonable expectations and offer some workarounds.
If a person has a house that is still full of old 2- or 4-pair wire, and that wire is actually cat3, and is actually home-run (or at least features aspects that can be usefully intercepted), then they should absolutely give it a fair shot.
I agree that, as a practical matter, the specifications are more guidelines than anything else.
I've also gone beyond 100 meters with fast ethernet (when that was still the most commonly-encountered) and achieved proven-good results: The customer understood the problem very well and wanted to try it, so we did try it, and it was reliable for years and years (until that building got destroyed in a flood).
If the wiring is already present and convenient, then there's no downside other than some time and some small materials cost to giving it a go. Decent-enough termination tools are cheap these days. :)
(Most of the cat3 I've run has been for controls and voice, not data. Think stuff like jails, with passive, analog intercom stations in every cell, and doors from Southern Steel that operate on relay logic...because that was the style at the time when it was constructed. Cat3, punch blocks, and a sea of cross-connect wire still provides a flexible way to deal with that kind of thing in an existing and rather-impervious building -- especially when that building's infrastructure already terminates on 25-pair Amphenols. I'll do it again if I have to, but IP has been the way forward even in that stodgy slow-moving space for a good bit now.)
Can confirm on the gigabit, because I've got my gigabit internet running over old phone line right now. I'm not sure exactly how long the run is, but it goes from this floor's electrical room, where the ONT is housed, into a closet in my apartment, where it's then spliced into CAT-5 to reach the router. I really didn't expect it to work, but speed tests report that I'm getting 900+ Mbps.
This is a mixed bag. As someone who worked in the storage industry for ~10 years, there are a lot of poorly defined behaviors that are vendor/model specific, and I can see how it's easier to just pick a particular model, test it, and declare it the blessed version, having done similar stuff myself.
E.g., SMART attributes, mode sense/caching behaviors, etc., which can all be used in conjunction with RAID to determine when a disk should be replaced, or the user warned about possible impending doom, down to simple things like how one sets the cache WT/WB and flushes the caches (range-based flushing is a thing, doesn't always work, etc.) for persistence.
OTOH, much of this is just 'product maturity' because it is possible to have a blessed set of SMART/etc attributes that are understood a certain way and test to see if they exist/behave as expected and warn the user with something like "this drive doesn't appear to report corrected read errors in a way that our predictive failure algorithm can use". Or "This drive appears to be a model that doesn't persist data with FUA when the caches are set to write back, putting your data at risk during a power failure, would you still like to enable writeback?"
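To make that concrete, here's a throwaway sketch of the sort of check I mean, leaning on smartmontools rather than talking to the drive directly. The attribute IDs and the "warn on any nonzero raw value" rule are illustrative, not a vetted predictive-failure model, and the device path is a placeholder:

    // Shell out to smartctl and flag a few commonly watched attributes:
    // 5 = Reallocated_Sector_Ct, 197 = Current_Pending_Sector, 198 = Offline_Uncorrectable.
    #include <cstdio>
    #include <iostream>
    #include <string>

    int main(int argc, char** argv) {
        std::string dev = (argc > 1) ? argv[1] : "/dev/sda";   // placeholder device
        std::string cmd = "smartctl -A " + dev;
        FILE* pipe = popen(cmd.c_str(), "r");
        if (!pipe) { std::perror("popen"); return 1; }

        char line[512];
        while (std::fgets(line, sizeof line, pipe)) {
            int id = 0; char name[64]; long raw = -1;
            // Attribute rows look like:
            // ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
            if (std::sscanf(line, "%d %63s %*s %*d %*d %*d %*s %*s %*s %ld",
                            &id, name, &raw) == 3) {
                if ((id == 5 || id == 197 || id == 198) && raw > 0)
                    std::cout << "warning: " << name << " raw=" << raw
                              << " -- consider replacing this drive\n";
            }
        }
        pclose(pipe);
        return 0;
    }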
And these days, with the HD vendors obfuscating shingled drives or even mixing/matching the behavior in differing zones, it's probably even worse.
So initially I wanted to give you a knee-jerk response about how Synology could have gone with a warning rather than an outright ban. Then I read the article...
It seems that this was never an outright ban, but non-blessed drives either generated a warning or they had reduced functionality. What TFA fails to mention is what this "reduced functionality" is.
If it's something like RAID rebuilds taking longer because other drives might not have the requisite SMART attributes or some other required function, that's one thing. But halving the drive speed just because it's not a Synology drive is another. This knowledge would put me in a better position to know whether I should judge them harshly or not.
I think it's totally fair to raise a warning that a particular drive has not been tested/validated and therefore certain guarantees cannot be met. I can fully respect how challenging it must be to validate your product against a basically infinite combinatorial collection of hardware parts. I learned long ago that just because a part fits does not mean it works.
I don't know the details of the warnings either, but from the original articles it sounds like they had moved to a QVL that didn't include 3rd-party devices, only their rebranded ones. Which is possibly because they got Seagate/WD/etc. to tweak something in the firmware, which isn't unheard of for large vendors. And it is somewhat fair: qualifying drive persistence is probably some ugly unit test that takes hours to run and requires being able to pull power on the drive at certain points. So the warning ends up being the equivalent of "we don't know if this drive works, lots of them don't, so we are going to disable this aggressive cache algorithm to assure your data is persisted," and that kills the performance vs. the qualified drive. But because some non-technical PM gets involved, the warning shown to the user is "This drive isn't qualified."
The other take, though, was that it was just a $ grab: rebranding and charging more for drives that were functionally the same. Which, for logical people, made sense, because otherwise why not say why their drives were better? But sometimes the lawyers get involved, and saying "our rebranded drives are the only ones on the market that work right when we do X, Y, Z" is frowned on.
Hard to really know without some engineer actually clarifying.
No, it was a pretty complete ban. From a reputable reviewer[0]:
> New Installations Blocked for Non-Verified Drives
> As discussed in our NASCompares coverage and testing videos, attempting to initialise the DS925+ with hard drives that are not on the 2025 series compatibility list will block you from even starting DSM installation.
and
> Expanding Existing Storage Pools with Unverified Drives is Blocked
> Another key limitation to note is that you cannot expand an existing storage pool using unverified drives — even if your system was initialized using fully supported drives.
and
> To test RAID recovery, one of the three IronWolf drives in the migrated SHR array was removed, placing the system into a degraded state. We then inserted a fresh 4TB Seagate IronWolf drive.
> Result: DSM detected the new drive but refused to initiate RAID rebuild, citing unsupported media.
You could pull all of your drives from an older Synology and put them in the new device, but you couldn't add drives to the volume or replace crashed drives. And if you were starting with a brand new NAS, you couldn't even initialize it when using 3rd party drives.
I'm OK with a warning notice. I'm not even remotely OK with this.
By the way, their official drive compatibility list for the DS923+[1] shows dozens of supported 3rd-party drives. The same guide for the DS925+[2], an incremental hardware update, shows 0. So if you bought a bunch of drives off their official support list, they're useless in newer models. Apparently a Seagate IronWolf was perfectly fine in 2023 and a complete dud in 2025.
Oh, and Synology only sells HDDs up to 16TB in size[3], and they only have up to 12TB drives (for $270) in stock today. That price will get you a 16TB IronWolf Pro off Amazon. If you have cash to spend, you can buy a 28TB IronWolf Pro there, which is 2.3x bigger than the largest Synology you can order from the first-party store today.
And because they have gotten away with it for so long, everyone else has been copying them (or, well, I guess this has been going on for decades in various forms).
E.g., Netflix, which has decided to pop up a 'we noticed there are people who don't live with you using your account, click here to pay us another $9/month' every time it starts on my TV, presumably because my underage child, who legally lives with me, uses it on her phone when she is away at school for 5 months a year.
And then when someone clicked the default pay-us button, I was unable to figure out how to remove the charge without actually calling and telling them I was canceling after 20+ years. (The whole extra-member thing wasn't showing up in the web UI, no idea why; maybe it's because of the TV clicking process.)
It's worse. The day I discovered that std::array is explicitly not range/bounds checked by default, I really wanted to write some angry letters to the committee members.
Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around? I promptly went into my standard library and reversed that decision, because if I'm going to the trouble to use a C++ array class, it better damn well give me a tiny bit of additional protection. The .at() call should have been the version that reverted to C array behavior without the bounds checking.
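For the curious, the change amounts to something like this. This is a sketch of the idea, not the actual vendor-header patch, and the unchecked() escape hatch is just a name I made up:

    #include <array>
    #include <cstddef>
    #include <stdexcept>

    // A drop-in array whose operator[] behaves like .at(), with an opt-in
    // escape hatch for the rare hot loop that has already proven its indices.
    template <typename T, std::size_t N>
    struct checked_array : std::array<T, N> {
        T& operator[](std::size_t i) {
            return std::array<T, N>::at(i);          // throws std::out_of_range
        }
        const T& operator[](std::size_t i) const {
            return std::array<T, N>::at(i);
        }
        T& unchecked(std::size_t i) {                // the old raw-C behavior
            return std::array<T, N>::operator[](i);
        }
    };

    int main() {
        checked_array<int, 4> a{};
        a[2] = 7;        // fine
        try {
            a[9] = 1;    // out of range: throws instead of scribbling past the end
        } catch (const std::out_of_range&) {
            return 0;
        }
        return 1;
    }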
And it's these kinds of decisions repeated over and over. I get it's a committee, and some of the decisions won't be the best, but by 2011 everyone had already been complaining about memory safety issues for 15+ years; was there really not enough will on the committee to recognize that a big reason for using C++ over C was the ability of the language to protect against some of the sharper edges of C?
>Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around?
Because the point was not to make an array type that's safe by default, but rather to make an array type that behaves like an object, and can be returned, copied, etc. I mean, I agree with you, I think operator[]() should range-check by default, but you're simply misunderstanding the rationale for the class.
Which goes to the GP's point, which is that security and robustness are not on the radar.
And my point in providing a concrete example was that a decision was made to prioritize unsafe behavior in a known problematic area, when they could just as well have made a half dozen other decisions which would have solved a long-standing problem rather than just perpetuating it with some new syntactic sugar.
I didn't dispute that, I was simply addressing the point about std::array. The class is not meant to be "arrays, but as good as they could possibly be". It's "arrays, but as first-class objects instead of weird language constructs".
That said, making std::array::operator[]() range-checking would have been worse, because it would have been the only overload that did that. Could they have, in the same version, made all the overloads range-checking? Maybe, I don't know.
Good news! Contracts were approved for C++26, so they should be in compilers by like 2031, and then you can configure arrays and vectors to abort on out-of-bounds errors instead of corrupting your program.
Let no one accuse the committee of being unresponsive.
Which is probably exactly why this was added. The cheap way to tell if someone is using a 3rd-party UI toolkit is usually to start tweaking the system theming and see if the application follows scaling/color changes correctly.
In this case some subset of Apple-provided apps weren't following the theme, and they fixed it by adding a private CSS property.
Vs. some other OS vendor that likely removed most of the theme controls so they didn't have to keep fixing a huge pile of half-baked, abandoned toolkits scattered across their product portfolio.
Austerity has repeatedly been shown to kill economic growth and trigger recessions/depressions. Yet for whatever reason there is one party in Congress whose entire economic policy is basically austerity measures. Repeatedly, the ax man comes in and cuts spending that generates multiples of economic activity in return. This happens all the time: cut $1 here and it costs $4 over there. Infrastructure is frequently in this category: delay maintenance until it can't be repaired, or simply fail to invest until a road/bridge is gridlocked 18 hours a day, or the train is too scary to ride. The current admin is doing this 10x because they aren't being smart about what they cut, just doing it for political points.
Is there government waste? Sure, but that requires micro tweaks, aka instead of hiring more TSA agents maybe decide they shouldn't be randomly selecting every 3rd PreCheck user for additional screening/etc, or maybe decide that investing in even slower scanners isn't the right choice. Plus, having seen some of the contracts the city/state gives out in TX, I find the idea that private industry is more efficient laughable. In some cases they are basically contracting out, for millions of dollars a year, a job that could be handled by one or two actual government employees paid less than $100k a year.
A lot of current conservative grousing about the deficit is priming the electorate for destroying social security. That's why they always highlight it as the main cause of the deficit, even though it has a totally separate income stream from the rest of the budget.
Republicans have done the exact opposite of balancing the budget at least since Reagan (inclusive) non-stop, zero R presidents have even done as well as democrats at pursuing a balanced budget since then, so they're not serious about it, including and perhaps especially Trump—but they really, really want to get their hands on social security, it's so much money out of the hands of rich people that they just can't stand it.
Expect this to get much worse soon. Even left-leaning youth will probably start to agree with the idea of destroying it out of spite, even if they know it won't actually help them. At some point we'll have to admit it's not easy to explain why youth need to subsidize things for an older generation that enjoys housing, healthcare, and stocks when they themselves have not had the disposable income for any of that. Scott Galloway has a TED talk where he calls it generational theft: https://www.youtube.com/watch?v=qEJ4hkpQW8E
No party's platform is based on ending or destroying Social Security, especially not the GOP, which gets a broad base of support from retirees. That being said, I've never had much faith in it being around for myself (though I'm 30 years from eligibility). It's possible that productivity boosts from automation and AI could over time make SS actually sustainable in spite of the overwhelming boomer population to support.
The federal tax revenue as percentage of GDP is remarkably stable regardless of widely varying income tax rates over the decades (ranging 15-20% since the 1940s): https://fred.stlouisfed.org/graph/?g=ockN. This holds up no matter which regime (D or R) is dominant, because the economy reorganizes itself around incentives or disincentives created by various tax policies.
Republicans, including some of the current crop, have been making noise about putting the money in the stock market, one way or another, for a long time, and if anyone’s gonna actually do it it’s these guys.
A reminder to everyone: Social Security is NOT a retirement plan, it is an insurance plan. When your 401(k) is cratered by the stock market or your pension goes the way of Enron, SS is supposed to be there to hopefully keep you from being dumped in the gutter. Tying SS to the stock market would not be a smart move.
You're drawing a semantic distinction that doesn't really exist. Insurance for retirement, retirement funding -- it's all the same thing.
A 100% equities (or 100% Enron) portfolio is not the only or best option available to the SSA. And the SS portfolio can't be panic-sold by the individual retiree in a market downswing. Using equities to achieve some additional upside for SS in one way or another is plausibly a reasonable idea.
> It's possible that productivity boosts from automation and AI could overtime make SS actually sustainable
Social Security is funded entirely by payroll tax deductions and invested only in treasury bonds. Bonds which have had very low yields in this century. Guess what you collect less of as automation and AI ramp up? Payroll taxes.
> This happens all the time cut a $1 here and it costs $4 over there
The thing that makes these policies work, of course, is the $1 is cut from a billionaire's taxes, and the $4 is paid by the rest of us. Voters seem to like this policy, for reasons that are beyond me.
"Austerity" lays the political groundwork for tax cuts for the rich. Austerity never affects military or police budgets though, we're happy to increase the debt to finance those.
Well at least the billionaire class got their massive tax cut, that's what is important. Buying food may get more difficult for the rest of us, but at least Jeff Bezos can buy another megayacht, and I'm happy for him about that.
Me too. Since he migrated his wealth from the place he earned it for thirty years to a tax haven in Florida the realities are different. With warming tropical waters, he will need to spend more money on a yacht that can handle the bigger storms. It's not a problem you or I need to worry about, but he does, and that's probably why he is a billionaire and I'm not.
Gonna shill Venstar's ColorTouch line despite not having any financial relationship with them, only because I don't see anyone else mentioning them and I've had good luck with them for the past ~10(?) years in multiple houses. They are AFAIK the only "smart" wifi thermostats that both provide a standalone local JSON API (https://developer.venstar.com/documentation/) and have a pretty wide ecosystem of devices and supporting infra that covers as much if not more than what some of the larger players do.
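For a flavor of how simple the local API is, here's roughly what pulling the current state looks like with libcurl. The /query/info endpoint is the one I remember from their docs (check the link above), and the IP address is obviously a placeholder for your own unit:

    // Minimal fetch of the thermostat's status JSON (build with -lcurl).
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
        static_cast<std::string*>(userp)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        const std::string url = "http://192.168.1.50/query/info";  // placeholder IP
        std::string body;

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

        CURLcode rc = curl_easy_perform(curl);
        if (rc == CURLE_OK)
            std::cout << body << "\n";   // JSON with current temps, setpoints, mode, etc.
        else
            std::cerr << "request failed: " << curl_easy_strerror(rc) << "\n";

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }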
They are pretty niche, so much so that it took me maybe a year of half-hearted searching before I found a smart thermostat that wasn't dependent on an internet connection. And of course there is a Home Assistant integration/etc.: https://www.home-assistant.io/integrations/venstar/
Read some reviews before you buy them, or you might be surprised by the resistive touch on some models (which works fine with the UI), audible click when the relays go on/off (this is actually a big advantage in failure modes), and other features some people might not like.
Batteries are just an extension of the thinking that PV or wind are 'cheap'. There is a slight U shape in the price, where an initial fraction lowers the cost, but beyond that it's just additional cost overhead. Cost-shifting some of the excess solar into the evening to reduce the peak load there is fine, but then you're still paying for a pile of excess backup capacity to sit around idle for those days when the sun doesn't shine, and adding more batteries beyond a certain point is the same. They just sit around idle most of the time, adding to the cost. I've posted napkin math here for how much a W of solar or reliable PV actually costs and been downvoted, but the math is easy when you stop believing that a W from PV/wind is the same as a W from your local gas/coal/hydro/nuke plant.
All those gas plants and batteries sitting around idle soon start dominating the cost structure because the price of their produced watts starts to go exponential.
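Since I keep referring to napkin math, here's the shape of it with made-up round numbers. Every constant below is an assumption to plug your own figures into, not a sourced price; the only point is that nameplate $/W and "reliable" $/W diverge quickly:

    // Illustrative only: nameplate $/W vs. "firm" $/W once capacity factor
    // and firming storage are accounted for. All inputs are assumptions.
    #include <cstdio>

    int main() {
        const double pv_dollars_per_watt  = 1.00;  // assumed installed PV cost per nameplate W
        const double pv_capacity_factor   = 0.25;  // assumed average output as fraction of nameplate
        const double battery_per_watthour = 0.20;  // assumed storage cost, $/Wh
        const double firming_hours        = 12.0;  // assumed hours of storage per delivered W

        const double per_average_watt  = pv_dollars_per_watt / pv_capacity_factor;
        const double per_reliable_watt = per_average_watt + battery_per_watthour * firming_hours;

        std::printf("cost per average delivered W : $%.2f\n", per_average_watt);
        std::printf("cost per 'firm' W (+%g h storage): $%.2f\n", firming_hours, per_reliable_watt);
        return 0;
    }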
> All those gas plants and batteries sitting around idle soon start dominating the cost structure because the price of their produced watts starts to go exponential.
When they're barely used, price per watt stops mattering. It's just a fixed cost.
If the fixed cost of an idle gas plant dominates, that's a good thing. An idle gas plant doesn't cost that much, so if the final power price is that plus 50% I'm pretty happy.
If batteries are expensive for long rare gaps then don't use batteries for those gaps. Easy.
And there is _NOTHING_ worse than being locked out of an account because, without asking, they reverse the password and second-factor authentication while you're traveling and don't have access to a phone/etc.
Never mind that pretty much all services treat the second factor as more secure than my 20-character random password saved in a local password safe. And those second factors are, let's see, plain text over SMS, plain text over the internet to an email address, etc., etc., etc.