How engineers at Digital Equipment Corp. saved Ethernet (ieee.org)
264 points by hasheddan on April 8, 2024 | 102 comments


> Mark’s idea didn’t replace Ethernet—and that was its brilliance. By allowing store-and-forward switching between existing CSMA/CD coax-based Ethernets, bridges allowed easy upgrades of existing LANs.

This reminds me of John Carmack's philosophy of great things coming from thinking locally and taking small steps.

> Carmack subscribes to the philosophy that small, incremental steps are the fastest route to meaningful and disruptive innovation. He compares this approach to the "magic of gradient descent" where small steps using local information result in the best outcomes. According to Carmack, this principle is proven by his own experience, and he has observed this in many of the smartest people in the world. He states, "Little tiny steps using local information winds up leading to all the best answers."


But link-layer bridges were not a tiny step; they were a fundamentally new type of device with many enabling technologies. The point of the article is that it was revolutionary, not incremental.


That last paragraph is literally the same basic sentence about "little steps" written three times in slightly different ways.

Did you get this quote from an AI-written SEO-spam site?


Ha ha, it came from Wikipedia so possibly.


As a barely relevant point,

Using google, I can't find references to the quote (or John Carmack) by searching 'Little tiny steps using local information winds up leading to all the best answers'.

Other search engines seem to do a bit better at retrieving the Wiki article.

What gives, is google really becoming worse?


Kagi search shows this HN post, a link to kids encyclopedia page on Carmack, and his Wikipedia page as the first three links.

I started using Kagi last month, and am really enjoying it - it's like Google at its prime.


> Using google, I can't find references to the quote (or John Carmack) by searching 'Little tiny steps using local information winds up leading to all the best answers'

Mr Carmack references "gradient descent" in interviews[1][2] and his own posts on X[3].

[1] https://dallasinnovates.com/exclusive-qa-john-carmacks-diffe...

[2] https://transcript.lol/read/youtube/@lexfridman/6522e9950331...

[3] https://twitter.com/ID_AA_Carmack/status/1773391295538442445



The Wikipedia article has a reference to where the quote came from. It’s a podcast, whose audio presumably has not been transcribed and indexed.

https://lexfridman.com/john-carmack/


It's been at its worst for like 6 years.


What a great story. The spanning tree algo is underappreciated: it allowed people who didn't understand networking to plug networks together the way you would plug extension power cables together,* making networking simple (or, alternatively, insanely broken, when people had 400 computers on a single LAN with a rat's nest of bridges and hubs... but unlike the extension cord case, nothing would catch literal fire).

* don’t try this at home or work!


Occasionally people don't understand how to plug extension power cables together, either, especially during times of high stress and low sleep.

Once upon a time, when I was in IT support, I got a call from someone in a satellite office across town saying that their computer wouldn't turn on. A new production had begun and everyone was a bit frantic, so this was an urgent request. After asking them to hold the power switch in for a few seconds and try to turn it on again, I asked them to make sure the power cable was secure and that the computer was plugged in. It was, of course, but the computer still wouldn't turn on, so it was time to jump on the bicycle and ride across town with a new power supply in tow, figuring it would be a quick fix.

When I arrive, I see that, indeed, the computer was plugged in to a power strip. And that power strip was plugged in to itself. From then on, I always made sure to ask, "Is the computer plugged into the wall?" Saved myself a few bicycle trips that way.


My audio/video buddy has a very similar story. A school called because their newish PA system wasn't working. Turns out the usual lady was out. The usual lady would rip the cord out of the wall every evening instead of pressing the power button. The stand-in announcer was just trying to press the on/off button like most normal people.

They ended up having to replace expensive equipment because the person wouldn't stop ripping the cord out of the wall to turn it off.


I think American houses don't have a power switch per outlet like the rest of the world. The only way some (most?) devices can be turned off is by pulling the cord.


No, it's not at all. Do you turn a computer or TV off by pulling the cord? No, you turn it off with a switch/button, or in the GUI if it has one. I'm from the US.

If you had a stereo system at your house you would not pull the cord to turn it off. Wow.


Some places/people have a concern (complex?) about standby or parasitic power draw.


European power outlets don't have switches either.


I was really hoping for some never before known knowledge of how to connect Extension Cables and Power Boards together!

But no, turns out someone didn't even manage the basics! LOL

There's a reason the IT Crowd have the running joke of 'have you tried turning it off and on again...'! :-)


In all fairness, the person had obviously been awake for over 24 hours and was on their 1,001st cup of coffee. And since earlier in the summer I had crashed the entire ticket-scanner network the night before the opening of the weekend festival we had put on, by creating a network loop between a couple of non-spanning-tree-speaking network devices, I didn't feel I was in a place to be snarky about it!


> was on their 1,001st cup of coffee

9 cups is a lot but shouldn't cause cognitive disorder.


There are 10 types of people


heh, with power strips that have very long cables, I've seen them plugged into themselves a few times as well.


Like tying a shoelace: the long grey cord goes around the backside of the desk, turns and comes around the front, and back into its own powerstrip.

..Wait why won't it turn on?


> people who didn’t understand networking

A couple of decades ago I witnessed a classic demonstration of Weinberg's Corollary¹ when a spanning tree misconfiguration on a freshly-installed BlackDiamond disabled LINX for hours, throwing half of Britain's ISP peering into chaos. The switch in question was evidently cursed: it'd been dropped on my foot earlier that week, thus disabling me for hours, and everyone else involved in that project was dead within two years.

__

[1] "an expert is a person who avoids the small errors while sweeping on to the grand fallacy"


everyone else involved in that project was dead within two years.

That's ... quite a legacy.


The error of a wise man is equal to the combined blunders of a thousand fools.


That's why the newer IEEE 802.1aq standard recommends Shortest Path Bridging (SPB) as a robust alternative to Spanning Tree Protocol (STP). Not only is it more robust, it is also more secure, with extra resilience against broadcast storms. It can also support multicast more easily and intuitively at the data link layer [1][2]. Compared to SPB, STP looks like an immature hack, and why it took so long to be replaced is beyond me.

[1] IEEE 802.1aq:

https://en.m.wikipedia.org/wiki/IEEE_802.1aq

[2] 802.1aq Shortest Path Bridging Design and Evolution: The Architect's Perspective:

https://ieeexplore.ieee.org/book/6381532


> That's why the newer IEEE 802.1aq standard recommends Shortest Path Bridging (SPB) as a robust alternative to Spanning Tree Protocol (STP).

This is a dead letter standard: most folks who would 'need' SPB are probably using BGP EVPN to reduce the Layer 2 'blast radius'.

It should also be noted that the IEEE was dragged kicking and screaming towards SPB: originally TRILL was proposed, but the IEEE rejected it, and so the IETF published:

* https://en.wikipedia.org/wiki/TRILL_(computing)

IEEE realized their mistake and published SPB, so now there are two L2 standards.

Not many folks use either, though, with anyone really needing 'large scale' stuff moving towards L3 solutions.


> but unlike the extension cord case, nothing would catch literal fire

Somewhere, out there, is a story of an overburdened network setup literally catching fire, and it’s hopefully making its way to us.


I saw one sorta catch fire.

But it wasn't over burdened.

It was in the kitchen, because that was also where the utility closet was, and our 100+ year old building had crappy/dirty power and frequent brown-outs.

So, after a few weeks of coffee, tea and other crap splashing around, plus our crap power, the thing started humming (60 Hz). Then one day, POP! A cap blew out and there was a little smoke.


Seen more than my fair share of magic smoke from lightning or near lightning strikes.

Also fried more than a few ports on devices when passive POE used to be more prevalent :p


C'mon HN, I'm counting on you...


Not recommended but these bad boys make a lot of things work, and cause a lot of damage! https://m.media-amazon.com/images/I/61A5WvzcgsL._AC_UL960_QL...


Too late: tried this at work. It did a marvelous job sharing satellite internet with the whole barracks.


Sounds like things worked out, even if all you shared was your porn.


> but unlike the extension cord case, nothing would catch literal fire

I might be taking this too literally (I fell down the Königsberg bridge rabbit hole recently), but with only one input on nearly all extension cords, I don't see how you'd get into a position of "fire" rather than "nothing happening".

Even if you did have two ways of putting power into your extension lead, it wouldn't necessarily lead to fire; household "ring" circuits (still very common in UK houses) are by definition a loop.


STP is basically magic - and being able to reliably optimize it to reduce reconvergence time makes you a magician.


I feel like STP is basically magic when you first learn it in a course (usually sponsored by Cisco), but the illusion is broken once you learn computer science principles like graph data structures and how to build an acyclic graph.

STP handles this by sending little messages, with every node that receives the message appending itself after whoever sent it. Then a node just checks whether it sees itself in the message. If I DO see myself in the chain, I know I need to not send information to the previous node that sent me the message.

The message just needs to be something that'll never succeed but flow through the network. If I have three nodes labeled "A", "B", and "C" then I could send a message intended for "D" (which does not exist in my network). A has a path to B and A has a path to C. B has a path to C and vice versa. Each node can talk to another.

A sends to B. B checks if it's in the chain and, if not, appends itself: A -> B. B knows it received the message from A, so it will ignore that connection and send the message to all of its other neighbors. C receives the message from B. C checks if it's in the chain and, if not, appends itself: A -> B -> C. C got the message from B, so it ignores that path and sends the message to all of its other neighbors (in this case, A).

A receives the message and checks if it's in the chain (it is). A knows it received the message from C. A now knows that a cycle has been detected, so A disables its connection to C.

Congrats! You've designed a tree! It's a graph that contains no cycles. All we had to do was send a message and check if we see ourselves in the chain. If we do, we disable our path to the last node that sent us a message.

Of course STP gets more complicated from here as we factor in path costs as weights to measure the decision to axe a connection. Maybe A -> B is a 1G connection and A -> C is a 10G connection, in this case A may disable the path to B.

I do agree that reducing reconvergence time is magic. I don't understand that one.
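The chain-probe idea sketched above can be simulated in a few lines of Python. To be clear, this is the simplified scheme the comment describes, not real STP (which elects a root bridge and exchanges BPDUs with path costs); the A-B-C triangle and the sequential-delivery assumption are invented for illustration.

```python
from collections import deque

def prune_cycles(graph, start):
    """Flood a chain-probe from `start`; when a node sees itself in
    the chain, it cuts the link the probe arrived on."""
    disabled = set()

    def crossed_cut_link(chain):
        # Drop probes that traveled over a link cut in the meantime.
        return any(frozenset(hop) in disabled
                   for hop in zip(chain, chain[1:]))

    queue = deque([(start, (start,))])      # (current node, chain so far)
    while queue:
        node, chain = queue.popleft()
        if crossed_cut_link(chain):
            continue
        came_from = chain[-2] if len(chain) > 1 else None
        for nbr in sorted(graph[node]):
            link = frozenset((node, nbr))
            if nbr == came_from or link in disabled:
                continue                    # never echo back to the sender
            if nbr in chain:
                disabled.add(link)          # loop detected: cut this link
            else:
                queue.append((nbr, chain + (nbr,)))
    return disabled

# The A-B-C triangle from the walkthrough: exactly one link must go.
graph = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(prune_cycles(graph, "A"))             # one cut link: A-C
```

Run on the triangle, it cuts exactly one link, leaving A-B and B-C as the tree.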


This is all very well, but DEC's greatest contribution to my own networking happiness was perhaps the quad-port Tulip/21143 cards that were my go-to for building reliable FreeBSD-based white-box routers back in the day.


I always wondered why it took so long to go from 10 Mbit/s to 100 Mbit/s. For sure, in that time they invented things like Ethernet switching. But still, it seems like that standard took forever to move on. And then once it did, it went from 100 Mbit/s to 1000 Mbit/s pretty damn fast.


100BASE-T marked the transition from Cat 3 to Cat 5, and a lot of sites were actually still making use of coax. That was necessary but was an impediment because it meant a high cost of adoption. This coincided with the fact that, at the time, it was largely unnecessary for the broader market of users: these were the days of IDE/ATA-1 disks, acoustic modems and 300 DPI black-and-white printers. You watched video with a VCR.

I recall using a business desktop back in the day with a common IDE disk, and also attached to a NetWare share. The NetWare share was much faster than the local drive although the network was only 10BASE-T. Around that time meetings began: how much was it going to cost to replace all the cable (half of which was still coax) and buy 100BASE-T switches? (Answer: a couple hundred dollars per drop.) That process took two years and another six months to do the work: a quarter of a decade.

1000BASE-T was specified for Cat 5: the floorboards didn't have to be torn up to replace all the cable everyone had just paid for. The cost of PHY/SERDES components fell greatly during that time and made 1000BASE-T hardware affordable. Moving from 100BASE-T to 1000BASE-T was, therefore, cheap and easy and took correspondingly less time.


10 Mbps was fine for, frankly, just about anything under 100 MB. And since 100 Mbps hubs were rare beasts, most being switches, they stayed a lot more expensive for a long time. Hell, my first home network was 10 Mbps on thinnet because it was so much cheaper than a 10 Mbps hub, and that was ca. 1997. I'd had 100 Mbps available in the dorms the year before, but you were looking at $250 for 100 vs $40 or less for 10 - which, in college student dollars, is a lot.

When we had hard drives that barely topped 1 GB, which could be copied in under 17 minutes even on 10 Mbps, there just wasn't a huge incentive to upgrade speed. But once 100 made sense for homes - supporting 802.11g - the price of switches fell rapidly because it was no longer just businesses buying them. And so 1 Gbps fell more quickly, because the background hardware was getting cheaper.

My 1998 computer with 100 Mbps and a 6.4 GB drive got used a few times as the piracy data exfiltration machine because it could pull down the 5 GB of stuff we'd accumulated so much faster, then plug it up to the three-apartment network we had set up and let everyone pull it to their own machines on the 10 Mbps we had there. Our outgoing connection was a Linux box on a 56k connection shared among six of us; it's not like 10 Mbps slowed any of that down.
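The transfer times quoted in this thread are easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch (decimal gigabytes assumed; the `efficiency` parameter is a made-up allowance for protocol overhead, not a measured figure):

```python
def transfer_minutes(size_gb, link_mbps, efficiency=1.0):
    """Minutes to move `size_gb` (decimal) gigabytes over a `link_mbps` link.
    `efficiency` is an assumed throughput factor for protocol overhead."""
    bits = size_gb * 8e9                        # 1 GB = 8e9 bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 60

print(round(transfer_minutes(1, 10), 1))        # 13.3 min: the "under 17 minutes" 1 GB copy
print(round(transfer_minutes(6.4, 100), 1))     # 8.5 min: the 6.4 GB drive on 100 Mbps
```

At full line rate, 1 GB over 10 Mbps is about 13 minutes; allowing for overhead, "under 17 minutes" checks out.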


The really crazy thing is that the consumer space is still stuck on 1 Gbps nearly twenty years after 10GBASE-T.


Is it really that crazy? 1 gig is overkill for the vast majority of people as it is.


You (well, perhaps not specifically you, depending on your address) can easily get a consumer internet package that'd fully saturate 1gbps. You can buy consumer wireless access points that can serve over 1gbps. You can buy consumer network drives which are physically capable of saturating 10gbps. Modern USB already has transfer rates higher than 10gbps. It's far less outlandishly fast than 1gbps was when that was rolling out.


Yes, the things it's connected to can saturate a 1 Gbps link. But few people feel any pain points that would lead them to want to upgrade. I've only ever saturated mine with speed test sites.


I have gigabit internet and the only time it even remotely comes close to saturation is when Steam kicks off a large update. And then it's done, in minutes at most.

Unless you are copying large media files around all day, gigabit is complete overkill for the vast majority of people.


One side effect of the network switch was that you needed to buy switches. With the old way everyone could be on one wire. With the new way you needed one port per host.

Switches also allowed for centralized management.

Note that cable works on what is essentially token ring, at least conceptually; channel 0 (I believe; it's been a while) is the heartbeat.


Switches didn't require one port per host. They required one port per layer-2 network segment. The original switches I deployed had a handful of AUI ports to which we attached 10BASE5 transceivers. Hosts still connected via 10BASE2 or 10BASE5 to the "single wire" of their layer-2 network segment.

Over the years the cost of switches lowered to the point where it became cost effective to directly connect hosts and reap the benefits of reducing layer-2 segments to a single host. In between there was also a time where most hosts connected to hubs which then connected to switches.


If you were ever responsible for keeping a thicknet or thinnet Ethernet network running, you would gladly accept that tradeoff. If you've experienced a floor of a building or a whole department go offline because someone removed a terminator, or (this is real) decided to stick the thinnet onto their radio antenna to see if it improved reception, you'd value the increase in reliability a lot more than the extra cost.

There were actually these connectors you could get from AMP that would hide away the thinnet connection behind a wall plate, with proprietary connectors to the machine, just to keep people's hands away from the shared network segment.


When switches first arrived, most of them were unmanaged, or only had basic SNMP counters, but they solved the collision problem.

I remember my first Internet company, where, after getting an ARN for free, I coordinated with UUNET over IRC to get them to pull routes from me, turning up our second T1.

The second traffic started to flow the collision light on the 24 port 3com fast Ethernet hub went solid.

I thought I had routed the Internet over my link... but no, two T1s through an AltaVista firewall running on a DECstation was enough to do it with a basic three-tier web app while backups were running.

I had lots of experience with 10BASE2/5 with more traffic, but the stochastic nature of web traffic was problematic.

If you lived through the growth of the Cisco chassis switches, the ASIC improvements that allowed for management were quite obvious.

I remember fighting with Packet Engines engineers, advocating for jumbo frames as the default. They wanted to support gig-E hubs, which would never have been useful due to collisions.

History proved me right there: nothing simpler than a store-and-forward bridge made it to mass market for gig-E.


This was such an interesting time to start working in IT - I worked for one of the BUNCH at the trailing end of the 1980s, and we sold other people's networking hardware as the company had none of its own.

The rapid change from coax Ethernet to twisted pair was really something to witness. The original SynOptics LattisNet and StarLAN products sparked a rapid standardisation effort (incompatible with either). We mostly (tried) to sell 3Com switches and routers, as most of our customer base at the time was heavily invested in XNS-based LAN Manager networks. Within 18 months everything had changed: Cisco suddenly became the hottest networking vendor as all those early networking protocols (XNS, IPX, NetBEUI, DECnet etc.) disappeared from local LANs and entire companies got obliterated seemingly overnight.

I mean, who even remembers Ungermann-Bass? And DEC as referred to in the article started out a major player in the networking space and became an also-ran in no time. All the while the world had gone crazy ripping out coax and madly installing twisted pair.

There was a period there where my job involved juggling network drivers in DOS, so that you'd have j-u-s-t enough memory to start Windows. Any customer that needed two network protocols (not uncommon) tore their hair out with memory extenders and carefully crafted config.sys and autoexec.bat files and hoped the BIOS didn't get too radically changed when the next batch of PCs showed up. Horrible, funny in hindsight though.


[flagged]


I think the great architectural challenge would be - how does one add that byte to the IP header in a non-breaking way?


There's a surprising number of people who think you could magically expand the IPv4 address space in a backwards compatible manner.


> There's a surprising number of people who think you could magically expand the IPv4 address space in a backwards compatible manner.

There is a (somewhat) backward-compatible way to do this at least for TCP and UDP:

Instead of assigning an IPv4 network prefix (or a single IP address as a special case), assign a tuple (IPv4 network prefix, port prefix); i.e., via firewall rules, not every source port can be used from every IP address for TCP and UDP connections.

Is this a better solution than IPv6? I clearly don't think so. But it is an ugly, hacky solution that is theoretically possible if one really insists that one wants to keep using IPv4 (only), and needs to magically expand the address space.
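For illustration, the (address, port-prefix) tuple idea can be sketched in a few lines; it is similar in spirit to the A+P approach of RFC 6346. The address, prefix width, and subscriber IDs below are made up:

```python
def port_range(shared_ip, prefix_bits, subscriber_id):
    """Ports available to one subscriber when the 16-bit source-port
    space is split by a `prefix_bits`-bit prefix (hypothetical scheme)."""
    assert 0 <= subscriber_id < 2 ** prefix_bits
    span = 2 ** (16 - prefix_bits)          # ports per subscriber
    lo = subscriber_id * span
    return shared_ip, lo, lo + span - 1

# Split one public address among 16 subscribers (4 prefix bits):
print(port_range("203.0.113.7", 4, 0))      # ('203.0.113.7', 0, 4095)
print(port_range("203.0.113.7", 4, 3))      # ('203.0.113.7', 12288, 16383)
```

Each extra prefix bit doubles the effective address space at the cost of halving every subscriber's usable ports.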


But this is what literally happened. All routers today support NAT and most of them actively use it. Isn't that a magical form of extended IPv4 address space? It could have been done in a less band-aid fashion instead of chasing IPv6.


It is impossible to address hosts behind NAT. Only the public IPv4 address is visible.

It might have been possible to extend IPv4 by having each NAT hop add the internal IPv4 address as an option header. Then it would be possible to refer to the inside host directly with a list of addresses.

That isn't worth doing now because it would require rewriting everything to deal with the new protocol. For one thing, lots of NAT boxes remove all the option headers. The new protocol wouldn't be reliable.


Not a counterexample exactly, but your remark reminded me of this eldritch horror: https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-...

TLDR: Cloudflare is using five bits from the port number as a subnetting & routing scheme, with optional content policy semantics, for hosts behind anycast addressing and inside their network boundary.

Filed in my bookmarks under "abominations".
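The general trick of carving routing bits out of a 16-bit port number can be sketched as follows. The five-bit count matches the blog post, but the exact bit positions are an illustrative assumption, not Cloudflare's actual layout:

```python
def split_port(port, tag_bits=5):
    """Carve the top `tag_bits` of a 16-bit port into a routing tag.
    Illustrative only; the real scheme's bit layout is internal."""
    tag = port >> (16 - tag_bits)                  # top 5 bits as a tag
    remainder = port & ((1 << (16 - tag_bits)) - 1)  # low 11 bits left over
    return tag, remainder

print(split_port(0xF800))   # (31, 0): all five tag bits set
print(split_port(443))      # (0, 443): low ports leave the tag zero
```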


From the link: "A lot of Cloudflare's technology is well documented. For example, how we handle traffic between the eyeballs (clients) and our servers has been discussed"

One sentence in and we've already got Eldritch horrors, with customers described as just the watery orbs in their skulls.


NAT doesn't change IP Header size, just the contents - while staying the same size. The header size is a requirement for compatibility.


No, it's not a magical form of extended address space. It's a magical (clever) way of extending the networking without extending the address space.


How?


There are semi-compatible ways. My favourite proposal (if we couldn't have IPv6) was recursive IP: you transmit a packet to 12.34.56.78 and inside that packet is a packet addressed to 192.168.1.5. If you have more layers of NAT, inside that can be a packet addressed to 10.1.2.3.

If both endpoints do this you can directly establish connections. If only one does it, fall back to NAT like before. Core internet routers don't have to be updated. Addresses at endpoints are effectively variable-length.
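For illustration, the recursive-addressing proposal can be modeled as plain nesting: the effective address is a list of IPv4 addresses, and each NAT layer peels one off. This is entirely hypothetical; no such protocol exists.

```python
def encapsulate(address_chain, payload):
    """Wrap `payload` once per address, outermost address first."""
    packet = payload
    for addr in reversed(address_chain):
        packet = {"dst": addr, "inner": packet}
    return packet

pkt = encapsulate(["12.34.56.78", "192.168.1.5", "10.1.2.3"], b"hello")
print(pkt["dst"])                     # outer hop: 12.34.56.78
print(pkt["inner"]["dst"])            # next NAT layer: 192.168.1.5
print(pkt["inner"]["inner"]["dst"])   # innermost host: 10.1.2.3
```

Endpoints that understand the nesting get direct reachability; legacy hosts only ever see the outer address, which is the fallback-to-NAT property described above.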



It would be great if it were true! Perhaps it's really there and we're all just blind to it? Unlikely, but I'm willing to be surprised.


Well, I don't see how you could do it without upgrading equipment and software to support it, and then starting to use the new IP scheme.

Which starts to sound an awful lot like an IPv6 migration.


Unfortunately, none of those people have explained how it could be done in enough detail that I could try it. Most walk away when pressed, but a few press on, telling me it is easy so I should shut up and do it.


Similar to how it has been done with phone numbers. I saw this done in Brazil, for example. You add a digit to the front and put all existing addresses on 0.*. Short-number dials are assumed to be 0.*. Update OS and hardware. Then you allocate across the new digit much later as time goes on.

The thing with phone infrastructure, though, is that it is centralized, so it can happen in a reasonably coordinated rollout. The global internet is a lot more distributed, so it would take a very long time.

The open question is, would it take longer than IPv6 has? Maybe not. Part of the reason I didn't care to use it early was the long addresses. If we could get a five-byte address written in hex it would be somewhat user friendly.


The IPv4 address field is fixed size. You can't simply add a digit and deal with it at the telephony company premises like you can with a phone number. You have to rev every piece of equipment and software that can ever touch a packet. At that point, why are you not also fixing other architectural flaws and ensuring that the address space is large enough to accommodate any future needs?
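The fixed-size point is concrete in code: in an IPv4 header the source and destination addresses sit at byte offsets 12 and 16, each exactly four bytes, and every parser on the path hard-codes that layout, so there is nowhere to "add a digit". A minimal sketch using made-up documentation-range addresses:

```python
import socket
import struct

def parse_ipv4_header(hdr):
    """Pull the version and the two fixed 32-bit address fields."""
    version = hdr[0] >> 4                 # top nibble of the first byte
    src = socket.inet_ntoa(hdr[12:16])    # always bytes 12..15
    dst = socket.inet_ntoa(hdr[16:20])    # always bytes 16..19
    return version, src, dst

# A minimal 20-byte header: version/IHL, TOS, total length, ID,
# flags/frag, TTL, protocol (6 = TCP), checksum, src, dst.
hdr = struct.pack("!BBHHHBBH4s4s",
                  (4 << 4) | 5, 0, 20, 0, 0, 64, 6, 0,
                  socket.inet_aton("192.0.2.1"),
                  socket.inet_aton("198.51.100.9"))
print(parse_ipv4_header(hdr))   # (4, '192.0.2.1', '198.51.100.9')
```

Any scheme that widens those fields produces packets this parser (and every router ASIC built like it) will misread, which is why "just add a byte" means revving everything anyway.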


One interesting thing about this idea is that, from a higher level, that's exactly how IPv6 works; the "leading zero" is the IP protocol version field. If that leading zero, er IPv4, is there, the fields of the IP packet are interpreted using IPv4 semantics. If the version field is IPv6, then the IP packet is interpreted using IPv6 semantics.

This is how you configure telephone dial plans. Earlier dialed numbers influence the interpretation of later numbers. You dial a 1, an area code is expected next. You dial 9, you get an outside line. Dial plans are a pretty decent setup and allow scoped dialing, but are limited in their extensibility (you can't have a local number start with 1). In IP, the IP protocol version field influences the interpretation of later fields, logically similar to dial plans.


> The IPv4 address field is fixed size.

So?

Make it bigger, call it IPv7, then enjoy.

IPv6 has 128 bits for addresses, why can't IPv7 have 128 bits as well, but still use more familiar patterns/techniques borrowed from IPv4?

IPv6 threw just about everything out the window... for what reason? Two decades of confusion and resistance...

I think at this point in time, people are afraid to say "ya, we overthought the hell out of IPv6".


Nothing in the world can work with IPv7. That means we need to update all the software and replace all the hardware. This takes years of effort, and years of time.

What more familiar techniques? How are they going to be worth the millions of man-hours to implement? How are they going to be so much better that people will abandon IPv6 and switch from IPv4? Could this be accomplished by changing part of IPv6?

It is quite possible that you are using IPv6 to access Hacker News without knowing it.


> Nothing in the world can work with IPv7. That means need to update all the software and replace all the hardware. This takes years of effort, and years of time.

Uh, you mean like IPv6?

IPv6 once was new, and it was radical at the time (still mostly is). IPv4 should have just been extended to a 128 bit address space, breaking changes implemented around that, and then everything else would have been easier to adopt. No relearning everything - just rationalizing about larger address space.

> It is quite possible that you are using IPv6 to access Hacker News without knowing it.

No, I am not, because our IT dept. disables IPv6 on all workstations and doesn't support it at our gateways. IPv6 was the culprit in a lot of networking issues that just magically "go away" when disabled... so they disable it.

The decades and decades of knowledge built around IPv4 is immense. IPv6 asked everyone to forget almost all of it and start over. It's really not surprising IPv6 is still not well adopted...


>IPv6 was the culprit in a lot of networking issues that just magically "go away" when disabled... so, they disable

Why do you think your IPv7 would magically just work? There will be problems with it, and IT departments will still disable it.


> You add a digit to the front and put all existing address on 0.*

How do you do that? I have read the IPv4 protocol spec; there isn't any space to put that byte and still be IPv4. That is what I mean when I say nobody has proposed anything I can implement: I have read the spec and it doesn't allow room for what you want to do. Sure, conceptually you can describe any number of ideas, but they are not IPv4 and no existing computer or router will work with them. So we may as well go with IPv6, which many smart people who understand the real problems of the internet spent a lot of effort creating, to solve existing problems to the best of their ability.

Brazil didn't just add that leading 0. They planned this well in advance and forced everyone who connects to the phone system to update their systems to support it: you had to apply a software update or buy new hardware, otherwise your phones stopped working. Of course, most of the software and hardware was controlled by the Brazilian phone company (or companies), so they could ensure this was all done.

I first found out about Brazil doing this from your comment - yet I can say the above with all confidence because those are things that have to happen behind the scenes to make it work. (I have no doubt people who actually know something about what Brazil did can tell you things I didn't think about)


Well, you asked for a design and I gave one. Any change is ultimately going to require a software rollout. (I suppose you could squeeze a byte into an underused header in the short term.)

The argument upthread was that v6 was too big a change, resulting in a slower-than-anticipated rollout, and that perhaps merely shoehorning another byte or two into v4, call it v4.1, would have been easier and more quickly accepted by the world.

Possibly—I'm not strongly arguing for either, besides the fact that v6 didn't solve any problems I was having besides the world running out of addresses. It's a lot harder to grok at a glance though.

Also, a smaller change wouldn't break existing networks, just like the existence of v6 didn't break v4.

(This is basically the Python 2 to 3 transition argument in global form.)


So we did what you proposed: IPv6. And IPv6 is simpler than IPv4. You can handwave the details differently, but they all suffer from the same rollout problem.


You have a unique definition of simple.


I have actually seen the protocol specs for both IPv4 and IPv6. IPv6 is simpler than IPv4, but you probably never looked into source routing, which you are required to support when you implement IPv4 even though nobody uses it (if you try, I'm sure the large backbone providers will block it).


> Similar to how it has been done with phone numbers.

Phone number lengths are defined by ITU E.164:

* https://en.wikipedia.org/wiki/E.164

They can be up to 15 digits long, so as long as a format change going from length x to length x+1 doesn't break that limit, no changes to code or equipment need to be made. The routing just needs to be tweaked so that when a number is read, the signal is sent to the correct destination.

This is different from IPv4, where the address length itself needed to be changed. It would be as if telephones went from 15 digits to 20+ digits: all the telephone gear would have to be changed to deal with the longer numbers.


If you had to deal with a hundred million different kinds of dialing software, that method wouldn't work out so well.

> If we could get a five byte address written in hex it would be somewhat user friendly.

For local addresses, you can use fec0::zzzz.

Do you need to memorize or hand-type global addresses very often?


> that method wouldn't work out so well

How well did IPv6 turn out by that criteria?

> memorize or hand-type

We read things many more times than we type them. Yes, gibberish at 4x the length is substantially harder to deal with. I do write addresses occasionally, and with IPv4 it is at least possible, if not desirable.


> How well did IPv6 turn out by that criteria?

About the same. So it's good we made it big for the future.

> Yes, gibberish at 4x the length is substantially harder to deal with.

I find copying and pasting to be pretty easy, and if you're looking at your own machines you can organize them all with the last 4 characters.

I don't have to compare random internet servers very often.


I believe it's more along the lines of the concepts in IPv4 are easier to grasp than IPv6, starting with the actual addresses themselves.

IPv6 breaks all backwards compatibility. So, it's not unreasonable for people to ask why we can't just break it in a more familiar way?

Extending IPv4 into say, IPv7 and using familiar addressing schemes, well understood routing/NAT/DHCP techniques, etc, while providing the same usable address range as IPv6 is possible.


IPv6 breaks less backwards compatibility than many people think. Many get caught up in all of the other changes you can (and often do) make because they improve life, and conflate those with IPv6 not letting them do things the same way. About the only generally true "it breaks backwards compatibility" is the address format being longer to hold the longer address. Netmasks, gateways, DHCP, static assignment, and neighbor discovery are all about as 1:1 as one could ask if that's all you care about - it's just a dumb way to do things if you're upgrading everything anyway, which is why you also hear about SLAAC, a more prominent link-local, DHCP-PD, and so on.

E.g. you can still NAT the massive IPv6 private address space with static and/or DHCP assignments without having to change your understanding (addresses would even be darn short too!) but... it's just silly to do.


What exactly are you asking for? Sure, it's not recommended, but you can have NAT66 and DHCPv6. You can choose to configure your own IPv6 network in a way that's familiar from IPv4. Not exactly best practice, but doable.


At this rate, why not just use UTF-8 and combine IP and DNS into one.


There are a bit more than a hundred unused protocol numbers, and the options field has more than enough room to stuff some more address space into it. You'd need enough buy-in to get the protocol number accepted as valid, and could introduce better routing gradually: as long as the OG destination address knows how to send the packet on to the extended-address computer, it'll get where it's going.

The problems, and they are real, show-stopping problems, are more social than technical.
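A toy sketch in Python of what stuffing extra address bytes into the options field could look like - the option type 0x9E here is purely hypothetical, and IANA would have to allocate a real number:

```python
import struct

# Hypothetical IPv4 option carrying 8 extra destination-address bytes.
# 0x9E (copy flag set, option number 30) is an assumption for illustration,
# not an allocated option type.
OPT_EXT_DST = 0x9E

def ext_dst_option(extra: bytes) -> bytes:
    # IPv4 options are TLV: type, length (counting the 2-byte TLV header),
    # then data. The header length must stay a multiple of 4, so pad
    # with NOP options (0x01).
    tlv = struct.pack("!BB", OPT_EXT_DST, 2 + len(extra)) + extra
    while len(tlv) % 4:
        tlv += b"\x01"
    return tlv

opt = ext_dst_option(bytes(8))
print(len(opt))  # 12: 2-byte TLV header + 8 data bytes + 2 bytes padding
```

Legacy routers would forward the packet on the OG 4-byte destination and ignore (or, as the sibling comment worries, drop) the unknown option - which is exactly where the social problems start.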


I know nothing of networking - could you explain why you can't use 4 bytes of option to extend the address space by 8 bits without invalidating existing addresses or requiring routers to understand the new option? Do routers not reject unknown options?


Why didn’t Douglas Comer, Bob Metcalfe, Vint Cerf and John Postel (in no particular order) think of that?


Has IPv6 been a failure?

All I know is that IPv4 is still around.

And without knowing the IP addresses of devices on my LAN, my home network would be harder to manage!


Hacker News added IPv6 support recently. It is quite possible people are currently using it to access the site.

Google measures IPv6 adoption at around 50%. Most of the major sites I use have IPv6: Google, Facebook, Amazon, YouTube.


You can know the IP addresses on your LAN. If you'd like you can assign multiple addresses per interface, so that you have global and local addresses. e.g. local addresses fc00::1 for your router, fc00::2 for your desktop, fc00::3 for your phone, etc. If you want, you can use your global prefix with ::1 ::2 and ::3 for your global addresses too. You don't have to use privacy extensions if you don't want to.

Things like peer to peer calling can actually work though without NAT.


Addresses starting with fc are reserved.

I would suggest using either fec0 or fd00. Which is still not best practices, but it's a lot closer.

(fec0 is deprecated, and fd is supposed to be followed by a randomly chosen 40-bit site ID, but you can lie and say your random number was 0.)
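For what it's worth, generating a proper RFC 4193 prefix is tiny - a sketch, with the function name being mine:

```python
import secrets

# RFC 4193 ULA layout: fd (8 bits) + random 40-bit Global ID +
# 16-bit subnet ID = a 64-bit prefix, leaving a /64 for hosts.
def random_ula_prefix(subnet: int = 0) -> str:
    global_id = secrets.randbits(40)
    prefix = (0xFD << 56) | (global_id << 16) | (subnet & 0xFFFF)
    # Format the four hextets of the /64 prefix.
    hextets = [(prefix >> shift) & 0xFFFF for shift in (48, 32, 16, 0)]
    return ":".join(f"{h:x}" for h in hextets) + "::/64"

print(random_ula_prefix())  # e.g. fd3c:9a2f:b841:0::/64 (random each run)
```

The random Global ID is what makes two ULA networks unlikely to collide if they ever get merged or VPN'd together - the thing setting it to 0 throws away.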


fc00::7 is the ULA range. It is not reserved. It is the right range to use for an internal network. There is no reason to prefer fc00 over fd00.

fec0 is the site-local range. Nobody should be assigning addresses in that range since they are made automatically. Using site-local addresses makes sense for a single-subnet network like most home networks.


> fc00::7 is the ULA range. It is not reserved. It is the right range to use for an internal network. There is no reason to prefer fc00 over fd00.

fc00::/7 is the ULA range.

It is made out of two halves. fc00::/8 and fd00::/8

fc00::/8 is reserved. People are only allowed to use fd00::/8
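Easy to sanity-check with Python's ipaddress module:

```python
import ipaddress

ula = ipaddress.ip_network("fc00::/7")
fc8 = ipaddress.ip_network("fc00::/8")  # reserved half, not for use today
fd8 = ipaddress.ip_network("fd00::/8")  # locally assigned half, usable

# Both /8 halves sit inside the /7 ULA block.
print(fc8.subnet_of(ula), fd8.subnet_of(ula))  # True True

# A typical hand-picked internal address lands in the fd half.
print(ipaddress.ip_address("fd12:3456::1") in fd8)  # True
```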

> fec0 is the site-local addresses. Nobody should be assigning addresses in that range since they are made automatically. Using site-local addresses makes sense for single subnet network like most home networks.

None of my adapters have automatic fec0 addresses, they only have global and fe80. So can you elaborate on that?


I didn't know that about the ULA range.

I got fe80 and fec0 confused. It looks like the latter is the deprecated site-local range. It is unlikely that will be reassigned, but lots of people said the same with IPv4 and got burned. It is safer to use ULA.


Ah, yeah, fe80 is automatic but it's link-local so you can't do very much with it.


I use mDNS for hosts on home network. They have IPv6 addresses, multiple ones, but I don't care what they are because I have never used them.


No, IPv6 has not been a failure, just a newer version of IP. IPv5 was skipped, and IPv7 will be too. Maybe IPv8 will unify IPv4 & IPv6.


> All I know is that IPv4 is still around.

So what? IPv6 coexists with IPv4 just fine. As has been made abundantly clear over the years, there's no need to have a flag day for a big cutover in order to adopt IPv6.


Why the downvotes!

It was an honest question!


IPv6 == the new ISDN: I Still Don't Need it... although I will say ISDN was a godsend in the years before my community was finally wired for cable.



