
Run Ethernet for everything you care about; problem solved.

If only wired Ethernet were considered standard wiring like coax and power outlets, and installed during construction.



And make sure a low-voltage electrician installs it. Half my ethernet runs are daisy-chained. Some of it is stapled to the studs inside the wall. Of the 5 drops in my condo, only 3 lead back to the IDF. Fixing it is going to be damn near impossible.


This is what happens when electricians treat cat5e/cat6 UTP like a fancy new fat POTS cable. I bet it's spliced together with these connectors too.

https://www.ebay.com/itm/splice-connector-ideal-85-925-jelly...


They didn't even do that. They double-punched the RJ45 jacks. They used proper Cat5e everywhere (cables, jacks, and punch blocks), but the wiring was done badly enough that I can't really use it.


I recommend just running conduit to everything during your next renovation; that way you can easily upgrade down the road. I wish conduit had been used for all of the coax lines in my house.


20 years ago, I heard people online saying they were running their cat5e cable in conduits for ease of upgrading. Reasonable enough, after the 10-megabit to 100-megabit to 1-gigabit upgrades of the preceding decade.

Not once, since then, have I heard of any of them pulling new cables.


I still want a mainframe in my basement and HDMI ports in my walls for thin-client laptops...


There is a pretty good chance that 20-30 meters of cat5e UTP in a house will test successfully for 2.5GBaseT today; not that even one percent of consumers will have such a switch.


Officially 2.5G can run up to 100m on Cat 5e:

* https://en.wikipedia.org/wiki/Ethernet_over_twisted_pair#Var...

So I wouldn't be surprised if 5GigE, and maybe even 10, could be done on shorter runs.
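For reference, here is a sketch (from memory of IEEE 802.3bz/802.3an, so treat it as an approximation) of which cable grade each speed officially targets; as noted above, shorter runs of lesser cable often work in practice:

```python
# Official cable grade and reach targets per NBASE-T/10GBASE-T speed.
# Figures recalled from the IEEE 802.3bz/802.3an standards; real-world
# runs of lesser cable frequently link up fine at shorter lengths.
REQUIRED_CABLE = {
    "2.5GBASE-T": ("Cat 5e", 100),   # meters
    "5GBASE-T":   ("Cat 6",  100),
    "10GBASE-T":  ("Cat 6a", 100),   # Cat 6 is only specced to ~55 m
}

for speed, (cable, meters) in REQUIRED_CABLE.items():
    print(f"{speed}: {cable} up to {meters} m")
```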


Yeah, I wish. I wanted to do this in a townhouse where we have RJ-45 ports in every room, but for some reason they aren't actually internally connected to each other. Only, I found the walls are all completely filled with insulation, and running any new wiring requires pulling some of that out first and then putting it back in after; otherwise the nice noise barrier I currently enjoy from the walls shared with my neighbors suddenly goes away. This means that instead of making one hole upstairs, dropping a weighted cable, and making a second hole downstairs, I now need to tear out entire sections of wall.


They make small threaded fiberglass rod sections. You can assemble them together (even in tiny spaces) and push them through wall cavities until you get to your destination. Then you use the rod to pull the new wire (or a piece of string) backwards. I've done this to pull through walls with insulation quite often. I've even used them to fish through all sorts of crazy places. It's fast and easy once you get the hang of it.


Electricians have tools and methods to run wires in walls with insulation without tearing them out. You should call one!


I thought it was. My house was built in 2003 and has CAT-5 to every room with RJ45 jacks on the walls. When I sold my previous house to a builder, he planned to open the walls and run CAT5 also, because "everyone expects that these days."

I guess newer houses might be expecting everything to be WiFi now, though.


In 2021, if you’re running network in the walls, run fiber to switches with SFP adapters. We maxed out twisted pair copper years ago (you can squeeze more over shorter runs, but that hits diminishing returns quickly) and home internet speeds are going above 1gbps. If you install in-wall cat6 it’ll probably be obsolete by the end of the decade.


> If you install in-wall cat6 it’ll probably be obsolete by the end of the decade.

No. Most devices will still work just fine on Cat 5e for another 10-20 years, running at 1gbps. Cat 6a, running at 10gbps, will be fine for residential for another 25-30 years.

Putting fiber in the walls is an expensive overkill. If you are worried about future proofing, just install conduit with cat 6a.


100 Mbps ethernet is unimaginable luxury for people struggling with congested 2.4 GHz networks. Let’s not let the perfect be the enemy of the good.

The vast majority of homes, at least in the US, are nowhere near 1 Gbps internet service, and of those, many are coping with poor WiFi.


Imagine calling a country where the majority of homes are nowhere near 1 Gbps a first-world country in 2021...

P.S. US citizens need to acknowledge that the reason they don't have such a luxury is a lack of competition in the ISP space. Make that issue nonpartisan, discuss it with everyone who cares, and find out why other countries are in a better state. What are the structural differences? Are there laws preventing competition? Etc.


> If you install in-wall cat6 it’ll probably be obsolete by the end of the decade.

Would it? "Regular" Cat5e has been around for 20 years or so now, and even at "just" 100mbps to 1 gigabit, would be a ridiculously huge upgrade for any home I've ever been inside of, and is still reliably faster than any WiFi device invented thus far.

Unless you have some really esoteric requirements, most homes could just run CAT6a to every room without thinking about it, and sleep soundly knowing they'll reliably get 2 to 10 gigabit (depending on distance) into each room for the next 30 years or so.


Cat6a supports 10gbps. That should be good enough for the foreseeable future. Fiber is significantly more difficult to pull through the wall since it is more fragile.


Fiber is not significantly harder than Cat6, and the turns can be quite a bit tighter. But terminating can be very expensive; the best price I have from a commercial fiber tech is $100/end.


Yes, if you ignore the difficulty of terminating fiber, the more expensive switches, the fact no motherboard, laptop or TV accepts fiber, and the lack of PoE there's really no reason not to use fiber.


Fiber it is!


If you're doing a basic project where it makes economic sense and are willing to teach yourself, it takes a ~$950 fusion splicer and about $300-400 of hand tools/supplies to terminate single mode these days. It's not super hard to learn as an amateur if you can watch some YouTube videos and look at reference documents. This particular model, which can be found for $900-950 from China, is popular for FTTH last-mile work:

https://toolboom.com/en/fusion-splicer-kit-signalfire-ai-9/

It's not something I'd use to splice a very important cable carrying long haul DWDM circuits, but more than good enough for its purpose.
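A quick back-of-envelope on the "makes economic sense" part, using the figures above (the splicer and tool prices are rough, and the $100/end commercial rate is the one quoted elsewhere in this thread):

```python
# When does owning a fusion splicer beat paying a commercial tech
# per termination? Prices are the rough figures from this thread.
SPLICER_COST = 950          # used/imported fusion splicer
TOOLS_COST = 350            # midpoint of the $300-400 tools/supplies
PER_END_COMMERCIAL = 100    # quoted commercial rate per termination

def break_even_ends(splicer=SPLICER_COST, tools=TOOLS_COST,
                    per_end=PER_END_COMMERCIAL):
    """Number of terminations after which DIY is cheaper."""
    upfront = splicer + tools
    return -(-upfront // per_end)   # ceiling division

print(break_even_ends())  # 13 ends, i.e. roughly 6-7 duplex runs
```

So a house with more than a handful of duplex fiber runs is already near the break-even point, ignoring your own labor.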

As to whether a house needs fiber to each room? I'm not really sure. At the loop lengths involved, recent cat6 cable has a high chance of working successfully at 2.5GBaseT and 5GBaseT speeds, even if it doesn't qualify and test successfully for 10GBaseT. If you have a really high-end 802.11ax 4x4 dual-band AP with a 2.5 or 5GBaseT interface on it, and the switch to support it, in real-world use it's unlikely you'll ever get much beyond 1000BaseT speeds to it with real wifi traffic.


The turns in fiber can be tighter than in Cat 6, really? Granted my experience pulling any kind of cable is most of two decades stale by this point, and my one experience with installing fiber considerably older still - I'm still surprised to hear fiber could be easier to pull.


G.657.B3 ultra-bend-loss-insensitive fiber can take a remarkable amount of abuse; it's designed for difficult FTTH installs.

https://www.youtube.com/watch?v=UBt00CVvMBA


Thanks for linking the video. That is wild to me - if I'd tried that staple gun trick with Cat 6 on a worksite, my boss would've kicked my ass all the way back to the office, and rightly so. I'd have never dreamed of seeing fiber that could hold up to it!


At a certain point (generally 6 Gbps), your network is faster than internal SATA, so unless you're using pure NVMe storage, your network is faster than your storage, which means the network isn't limiting you. That happens well before we max out the speeds that twisted-pair copper can accomplish. Even with pure NVMe, the PCIe link to a drive as far as I know still maxes out under 100 Gbps, and no actual storage devices support reads or writes at full PCIe speed anyway. I mean, not consumer grade at least.
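The comparison above can be sanity-checked with nominal line rates (real throughput is lower on both sides; SATA III loses ~20% to 8b/10b encoding):

```python
# Which side is the bottleneck: a SATA III disk or the network link?
# Nominal line rates only; usable throughput is lower on both sides.
SATA3_GBPS = 6.0   # SATA III line rate (~4.8 Gbps usable after 8b/10b)
LINKS_GBPS = {"1GbE": 1.0, "2.5GbE": 2.5, "5GbE": 5.0, "10GbE": 10.0}

for name, gbps in LINKS_GBPS.items():
    bottleneck = "network" if gbps < SATA3_GBPS else "disk"
    print(f"{name}: bottleneck is the {bottleneck}")
```

Only at 10GbE does the disk become the slower side, which is the point being made: most homes won't feel copper's ceiling for a long time.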


Fiber can solidly hit 10x the installation price of CAT6a/7, between the more expensive cabling, Ethernet conversion at the room terminals (OK, maybe you have one computer with a PCI-E fiber adapter? Nothing else does), and the networking switch in a closet/basement (price a switch with more than 4 SFP+ fiber ports; they approach five figures, so you'll probably have to convert back to Ethernet at the source as well).

And the benefit is tenuous. CAT6a/7 can hit 10Gbps, as long as the run length isn't insane. Even the 11th gen Intel NUCs ship with 2.5Gbps ethernet LAN ports, on-board; outfitting your endpoint devices to breach 1Gbps is far cheaper, especially considering most won't ever breach 1Gbps due to hardware limitations (PS5/Xbox? Ikea Tradfri Gateway?).

Even in the "local network upload/download" case; you've got a server, and you want 40Gbps to that server. Building a file server capable of sustained 40Gbps transfer rates is... insane. It's not easy, nor cheap. It requires multiple PCI-E attached NVME drives in RAID-0, on the latest-gen TR/EPYC platform (for their PCI-E lane count; maybe Xeon is good enough nowadays as well). In 2021, this is still in the realm of "something Linus Tech Tips does as a showcase, with all the parts donated by advertisers, and it still sucks to get going because Linux itself becomes the bottleneck". Remember: a Samsung 980 Pro NVME Gen4 ($200/TB) can sustain somewhere around 6Gbps read; you'd need 6-8 of them in a single RAID-0. And, realistically, you'd want 12-16 of them in RAID 0+1. A server capable of this is easily in the mid-five-figures.
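The drive-count claim above is simple arithmetic, using the ~6 Gbps sustained-read figure quoted for the 980 Pro:

```python
import math

# How many NVMe drives sustaining ~6 Gbps each does a 40 Gbps
# target need? (~6 Gbps sustained read is the figure quoted above
# for a Samsung 980 Pro; real sustained rates vary with workload.)
TARGET_GBPS = 40
PER_DRIVE_GBPS = 6

drives = math.ceil(TARGET_GBPS / PER_DRIVE_GBPS)
print(drives)        # 7 drives striped in RAID-0
print(drives * 2)    # 14 once mirrored in RAID 0+1
```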

(and, fun fact, even after you build a server capable of this; Windows Explorer literally cannot transfer files that fast. you have to use a third party program.)

If you're a millionaire outfitting your mansion, then sure, maybe fiber makes sense (due to both upfront cost and the length of the cable runs, where sustaining 10Gbps on CAT6a/7 is more tenuous). But I think the assertion that Cat6a/7 will be "obsolete" by 2030 is pretty crazy. Yes, technology will continue to get cheaper and more accessible, and I do think we'll see more fiber providers in tier 1 and 2 US metro areas offer wider 2Gbps and 5Gbps connections, but CAT6a/7 is perfectly capable of saturating those. Just ask yourself: do you really predict that the PlayStation 6, maybe in 2028, will have a duplex fiber port on the back instead of Ethernet? It's 2021, and Microsoft's Xbox download servers can't even deliver game data at gigabit speeds; they rarely breach 250-500Mbps.

Given the niche that fiber lives in, even the position that "it's just dual-channel light, one up, one down, nothing can travel faster than light, it's the perfect future-proof tech" is tenuous. Who's to say that, in the next twenty years, a consumer standard for fiber is developed which runs quadplex (2 up, 2 down)? Or simplex (because it's "good enough")? Or the connectors are totally different (which would be the easiest to switch, because it may not need new cable runs; maybe)?

Oh, also: PoE! PoE is freakin' fantastic for prosumer setups, and it's only available on copper. You can run copper to areas around your house where you want security cameras or other smart devices, and not have to worry about also running power.


I agree with pretty much 100% of what you've said there, but fiber doesn't need to have the mystique of being really expensive. Not for houses, but for commercial use: if you spend the money one time to buy a fusion splicer, a good tool kit, and some basic consumables, two-strand singlemode is actually 1/3rd the price per meter of cat6, due to it being so cheap to manufacture and the cost of copper being high right now. Done correctly, you have a guaranteed hassle-free upgrade path as far as 100GbE and 400GbE on the same fiber, patch cables, patch panel, etc.

But for residential use, one of the primary reasons to run an ethernet connection to different places in the house is for an 802.11ac/ax (or whatever next-generation) AP, so fiber doesn't really solve the problem, because you still need electrical power for the AP. Obviously one cable with 802.3af/at/bt PoE is a better idea than running fiber and powering each AP off AC power wherever it's mounted. That's aside from the fact that APs, except for very, very expensive enterprise ones, don't come with SFP/SFP+ ports, and are generally designed around the concept of being powered from the switch they're connected to anyways.

One of the reasons why I really strongly agree with your points is that in a residential environment it's going to be very, very difficult to push throughput through an 802.11ac/ax AP that gets anywhere near stressing a 2.5 or 5GBaseT connection in the future. I'd be fairly confident in saying that a house wired today with cat6a at sub-50-meter lengths, that tests OK for 5GBaseT, will probably be good for the next 25-30 years.


The big thing for me is, it's easy to say "oh, fiber is future-proofing". Alright, can't argue with that; just as it's impossible to predict the future and say fiber is the correct choice, it's also impossible for me to say that it isn't. But I strongly suspect it won't be necessary in our lifetimes.

The primary reason I suspect this is on both ends of the internet delivery spectrum:

First; I think the broad resource allocation focus over the next 10-15 years in the US will be getting "the bottom 80%" up to 100Mbps+ speeds; not getting the "top 20%" beyond 1Gbps. Many of the traditional ground-line companies who would be doing this work (Comcast, Spectrum, etc) are going to be experiencing pressure from emerging wireless technology that can meet these speeds, with beyond-adequate latency, at a fair price, and require far less infrastructure work (Verizon/AT&T/T-Mobile 5G, Starlink) (Starlink is a wild one; you're competing against the gravity well of the planet at that point; what can any of these companies who are "good at digging holes in the ground" do?).

Sitting in my new apartment here, I have AT&T home internet. Averages ~50/10 @ 25ms. I was told on the phone it would be 200 down. "Well, the lines in this building are so long, very old; we ran some tests, and we can sell you the 100 plan, but you probably won't get those speeds reliably, you'd be better off on the cheaper 50 plan." Ok, fine. Let me run a speedtest on my phone here, Verizon 5G: 125/50 @ 10ms. The cell companies can just put up a tower, cover hundreds of people with really freakin' good internet, and sell it as home internet; what are the cable companies supposed to do against that? Spend thousands of dollars re-tooling the wires in this old building to get "just as good" internet to six people, half of whom won't pay for it?

And the key thing there: these emerging wireless internet technologies won't breach gigabit for decades. It's difficult enough getting them to gigabit.

Part of the reason they won't is on the other end; we're hitting the point, very quickly, where Bill Gates' old misattributed "640K ought to be enough" quote is becoming true; just not 640K, more like "4K video". Would having 5Gbps internet, instead of 1Gbps (which I had just a few days ago), actually fundamentally change how I interface with content online? Not even close. Even 100Mbps doesn't; there's a point where internet just hits "yup, that's good enough". Cool, I can download Warzone in an hour instead of four hours; it's the same thing at the end of the day.
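To put the "hour instead of four hours" in numbers (the ~100 GB game size is my assumption, roughly a modern AAA install):

```python
# Download time for a large game at the speeds discussed.
# Game size is an assumed ~100 GB; speeds are line rates in Mbps.
GAME_GB = 100

for mbps in (100, 250, 1000):
    hours = GAME_GB * 8 * 1000 / mbps / 3600
    print(f"{mbps:>4} Mbps: {hours:.1f} h")
```

Going from 100 Mbps to 1 Gbps turns ~2.2 hours into ~13 minutes; nice, but it doesn't change what you do online.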

An argument could be made that continuing to push internet forward will open up more innovation in content delivery; whether that's game streaming, 8K video, actually-decent-quality 4K video, whatever. I think this is tenuous as well, because a big bottleneck for many content providers is networking costs on their end. So much money has been (rightly!) dumped into making our (mostly privatized) nationwide internet backbone "resilient" that it's gotten very expensive to egress data from most hosting providers (big cloud certainly, but even small cloud and colo providers). A high-quality 4K video stream can saturate a 100Mbps line; as an end user, that sounds great, I've got a 100Mbps line! But as a service provider, you multiply that 100Mbps by XXX,XXX users, and the numbers start looking really scary. That situation will not improve in the next 1-3 decades; the focus right now is on algorithms to get the same quality in lower bandwidth, not just pushing more bandwidth.
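The provider-side multiplication is worth seeing written out (the viewer counts are illustrative, not from anywhere in particular):

```python
# Aggregate egress for N concurrent viewers, each pulling a stream
# that saturates a 100 Mbps line. Viewer counts are illustrative.
STREAM_MBPS = 100

for viewers in (1_000, 100_000, 1_000_000):
    tbps = viewers * STREAM_MBPS / 1_000_000
    print(f"{viewers:>9,} viewers -> {tbps:g} Tbps of egress")
```

A hundred thousand concurrent viewers at that bitrate is 10 Tbps of sustained egress, which is why providers spend on codecs rather than raw bandwidth.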

Plus, applications like Game Streaming are both bandwidth intensive and latency intensive. So, double-edged sword, and one that the emerging wireless home internet technologies won't solve well. Having whole-home 40Gbps fiber or a 5Gbps uplink won't help you with Stadia.

Point being, I think that arguably for the rest of our lifetimes, the internet as a whole is going to enter a holding pattern while we catch everyone else up to acceptable speeds, improve the width of the backbone (not just the "depth", e.g. failovers and resiliency), which includes 10-100x'ing edge distribution, and improve underlying algorithms to reduce the size of content while maintaining quality. All of this will be prioritized above widespread 10Gbps to the home.



