> Why isn’t IPv6 more popular? ... and an arguably overly complex design.
I really don't know where this ridiculous claim comes from. Yes, IPv6 addresses look more complicated but various other things about the protocol are drastically simplified — no more on-path fragmentation, simpler header formats and fewer required header fields, correctly implemented link-local scopes, previously separate ICMP+ARP+IGMP protocols consolidated into ICMPv6 (which handles neighbour discovery, router advertisements, path MTU discovery and multicast group membership amongst others), no more broadcast, and in many cases clients will quite happily get along without DHCP. If anything, it is considerably less complex.
In my experience, IPv6 is often more complex. The main exception is that by and large IPv6 doesn't have NAT, so that saves a few headaches in that area.
No more on-path fragmentation is not a benefit. IPv6 and large DNS replies are an endless source of problems.
Moving fragmentation to an extension header similarly creates problems. Dealing with extension headers is just more code complexity.
Link local does not work (reliably) in browsers: https://[fe80::1]/ doesn't work on most platforms.
ICMP, ARP, and IGMP perform completely separate functions. Putting them all in ICMPv6 doesn't help. In contrast, having ND in ICMPv6 leads to code complexity. In IPv4, ICMP logically uses IP to send packets, which uses ARP. In IPv6, ICMPv6 logically uses IPv6 to send packets, which uses ICMPv6 for neighbour discovery.
IPv6 created a lot of flexibility by having multiple addresses per interface created automatically from router advertisements, and by allowing multiple routers on a subnet that can each advertise different prefixes (poor man's multihoming). The net result, certainly with devices that frequently connect to different networks (such as phones and laptops), is way too complex.
That said, the only way forward is IPv6. Putting everything behind multiple layers of NAT is ultimately going to fail.
> No more on-path fragmentation is not a benefit. IPv6 and large DNS replies are an endless source of problems.
I thought this was the other way around: IPv4 only guarantees reassembly up to 576 bytes, so DNS avoided issues with split UDP datagrams by limiting the payload to 512. EDNS stuff got added on once the de facto internet MTU became 1500 and there was more room. Things like 4G have a 1482 MTU though, so it may seem fragmentation helps, but in reality most IPv4 routers don't fragment and reassemble anymore, they just drop. In practice with DNS this has meant either keeping the packet size closer to 1k or using TCP, which negotiates MSS and handles correcting/merging lost split payloads.
If anything IPv6 has made the situation cleaner with a minimum supported MTU of 1280 vs IPv4's 68, guaranteeing the 1k-ish UDP DNS payloads can make it through without relying on PMTUD.
That's two separate issues. The default (maximum) IPv4 reassembly buffer is 576. This issue is solved in DNS with the EDNS udp buffer size option.
For IPv4, you can just send a 1500 octet DNS reply and it will be fragmented as needed. For IPv6, you have to fragment at 1280 or do path MTU discovery (which doesn't work very well, certainly not for DNS over UDP). You can always fragment at 1280, but many firewalls will drop fragmented packets, also because IPv6 extension header parsing is complicated.
> For IPv4, you can just send a 1500 octet DNS reply and it will be fragmented as needed
As mentioned, in theory yes; in practice most hardware-based IPv4 routers don't actually implement fragmentation anymore.
> You can always fragment at 1280, but many firewalls will drop fragmented packets, also because IPv6 extension header parsing is complicated.
Many of the same firewalls drop fragmented DNS packets as well because of cache poisoning attacks and other issues.
All that isn't to say people haven't tried/used fragmentation for UDP DNS packets, but rather that it's historically never worked reliably or securely anyway, which is why all of the current BCP RFCs say to avoid it at all costs.
All of that is why EDNS0 specified the minimum maximum to be 1220 bytes and DNS flag day last year focused on 1232 payload bytes instead of 1500 (minus change).
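For anyone curious what that buffer-size signalling actually looks like on the wire: it's a single EDNS0 OPT pseudo-record in the additional section, whose CLASS field carries the advertised size. A rough stdlib-only sketch (the resolver address and query name are just examples; 1232 = 1280 minus the 40-byte IPv6 header and 8-byte UDP header):

```
import socket, struct

def build_query(name, qtype=16, edns_payload=1232):
    # Header: ID, RD flag set, 1 question, 0 answers, 0 authority, 1 additional (the OPT record)
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)        # QTYPE=16 (TXT), QCLASS=IN
    # OPT pseudo-record (RFC 6891): root name, TYPE=41, CLASS=advertised UDP payload size, TTL=0, RDLEN=0
    opt = b"\x00" + struct.pack("!HHIH", 41, edns_payload, 0, 0)
    return header + question + opt

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(3)
s.sendto(build_query("example.com"), ("8.8.8.8", 53))      # any resolver will do
print(len(s.recv(1232)), "bytes received")                  # server won't exceed what we advertised
```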
> Link local does not work (reliably) in browsers: https://[fe80::1]/ doesn't work on most platforms.
1. You have to specify an interface, since fe80::1 may be in use on more than one link (so that becomes https://[fe80::1%en0]/ for instance), 2. that IP address may not be assigned to any devices on the link-local network.
That breaks significant assumptions of the WWW. Specifically, it means that devices have different addresses when accessed by different hosts, which breaks all hyperlinks the Server may send back, unless the User Agent also sends the scope ID to the Server. However, the scope ID is meant to be meaningful only in the context of the host that originated it, so RFC6874, which introduced this concept officially in URLs, prohibits sending it.
Overall, this means that, in practice, WWW on IPv6 does not support link-local addresses. This is especially true given that none of the major browsers support them.
On POSIX systems (including MacOS), just 'fe80::1' doesn't work. You need something like fe80::1%eth0. The 'eth0' is in general unknown, because it is the name of the outgoing interface, which varies from OS to OS and even between Linux distributions.
Then in URLs you have the question whether it is 'http://[fe80::1%eth0]' or 'http://[fe80::1%25eth0]' (escaping the '%'). And by and large browsers have decided that the whole '%eth0' thing is complex from a security point of view, so they don't support it.
In some cases Windows does allow just a 'fe80::1'. But I don't know under what circumstances.
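At the socket level you can see the scope requirement directly; a small sketch (the address and interface name are made up, and the connect will only succeed if something on that link actually has the address):

```
import socket

# Link-local addresses are ambiguous without a zone/scope ID, so the
# interface has to be named explicitly; 'eth0' here is just an example.
infos = socket.getaddrinfo("fe80::1%eth0", 80, socket.AF_INET6, socket.SOCK_STREAM)
family, socktype, proto, _, sockaddr = infos[0]
print(sockaddr)            # 4-tuple; the last element is the numeric scope ID
with socket.socket(family, socktype, proto) as s:
    s.connect(sockaddr)    # fails unless a host with fe80::1 exists on that link
```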
It's not supported in any browser, so good luck using it on the web. Even more, the semantics for such a URL as defined by RFC6874 mean that common HTTP features like redirects can't work: the client navigates to http://[fe80::6%7], the User Agent sends a request to the server with "Host: http://[fe80::6]", and receives a 302 response with "Location: https://[fe80::6]". What can the client do next?
I'm curious what you see as the reason that putting everything behind multiple layers of nat can't work? It seems to me like it has worked pretty well so far, and we're nowhere close to running out of (ip, port, ip, port) tuples.
1 layer of NAT on each side hasn't been bad; the 2 layers of NAT on each side that carriers have been moving to has been a godawful mess of complexity for any conversation where at least one side isn't a public IP (e.g. p2p chat/calls).
While routing at global scale is much easier, running v6 in a local network has more moving parts than v4 had:
- broadcasts for address discovery have been replaced by multicast which is much harder for switches to handle correctly
- address discovery is now mostly handled via SLAAC, which is different from how it worked via DHCP and also doesn't universally allow setting name servers, which then still requires DHCP to actually get a working network (if you run v6 only), so now you have two daemons running where in v4 you only needed one.
- hosts are multi-homed by default and rely heavily on multi-homedness which might invalidate some assumptions you had when configuring hosts.
- for a network to be meaningfully usable, you need working name resolution, because while you can remember v4 addresses and v4 address assignments, this is impossible for v6 addresses (yes, you can of course manually assign addresses in your prefix and just make them low numbers and hide everything else behind a ::, but you still have to remember your prefix, which is still impossibly hard, and there's no cheating there even if you know somebody at your ISP, because it's not entirely under their control either)
- and in a similar vein: Subnetting is harder because the addresses are much less memorable. If you want to subnet a 10.- v4 network, in many cases, you can do this in very memorable full-byte chunks.
- also subnetting: due to many ISPs still doing /64 allocations to their customers, and due to the way SLAAC works, you often have to decide between subnetting or SLAAC (which still is the default in many OSes). Worse, some ISPs only do a /128 assignment (one address), so now you're back in NAT territory, only that's really, really murky waters because next to nobody is doing this ATM. If your ISP only gives you a single v6 address, you are practically screwed when it comes to running v6 internally. If you're given a single v4 address (which is common practice), you can do NAT/RFC1918 addressing and you're fine.
- v6 relies on ICMP much more heavily but this fact has not propagated to default firewall settings, so in many default "let me turn on the firewall" configs, your v6 network will break in mysterious ways.
- in home networks where you want devices to be reachable directly (for P2P uses like video calls or gaming), there's no widely supported equivalent to UPNP or NAT-PMP yet to punch holes into your firewall to make clients reachable. Yes, you don't have to do NAT any more, so clients are potentially reachable, but you really don't want that, so your firewall is still blocking all incoming connections, and now there's no way for an application to punch temporary holes through it, which is a solved problem in v4 (where a hole is punched and a temporary port mapping is created)
There are more issues as your network grows bigger, but this is what I had to deal with in my small networks (<50 hosts) where I can say with certainty that v4 was much more straightforward to get up and running than v6 (though I was much older when I was learning v6 than when I was learning v4, so I might also just be getting old and slow)
Yes. These are all solvable issues, but they are huge ergonomic downsides that are now pushed on local network admins to the point that for them it's still much easier to just disable ipv6 rather than learning about all these small issues and working around them.
So while v6 is much easier to handle on a global scale, it's at the same time much harder to handle at your local site. But the internet is as much about the global scale as it is about the local site, and when the new thing is much harder to use than the old thing, inertia is even bigger than in the normal "everything is mostly the same" case (where inertia already feels like an insurmountable problem).
> These are all solvable issues, but they are huge ergonomic downsides that are now pushed on local network admins to the point that for them it's still much easier to just disable IPv6 rather than learning about all these small issues and working around them.
So that's me in a nutshell. I've read various things on IPv6 over the years, and I think I might even have the O'Reilly book laying around somewhere. I understand some of the basics, but I don't really "get" IPv6. I'm still at a bit of a loss on how my local network should be configured, and what services are needed for what.
Though I'm really at a loss as far as network security and firewalls go. I've been setting up firewalls with NAT for 20 years, but I'm still not sure how it's all going to work with IPv6. In the meantime, I just disable all IPv6 stuff on the firewall machines and try not to worry about it.
In the age of constant probes, a simple mistake can compromise our entire network, which sounds... unpleasant.
I suppose I'll just keep putting off learning about IPv6 until we get to the point where I can't order a cloud instance from our provider that comes with an IPv4 address.
> I've been setting up firewalls with NAT for 20 years, but I'm still not sure how it's all going to work with IPv6.
The same way: tracking of state.
An IP connection is started from the 'inside' to the 'outside', and the source-destination tuple is recorded. When an outside packet arrives the firewall checks its parameters to see if it corresponds with an existing connection, and if it does it passes it through. If the parameters do not correspond with anything in its table it assumes that someone is trying to create a new connection, which is generally not allowed by default, and therefore drops it.
The main difference is that with IPv4 and NAT the original (RFC 1918?) source address and port are changed to something corresponding to the 'outside' interface of the firewall.
With IPv6 address/port rewriting is not done. Only state tables are altered and checked.
New connections are not allowed past the firewall towards the inside with either protocol, and only replies to connections opened from the inside are passed through.
There's no magical security behind NAT: tuples and packet flags read, looked up in a state table, allowed or not depending on either firewall rule or state presence.
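A toy sketch of the state-table idea described above, just to show there is nothing address-rewriting-specific about it (addresses use the 2001:db8::/32 documentation prefix, and a real conntrack obviously also tracks TCP flags, timeouts, etc.):

```
# Outbound packets create an entry; inbound packets are only accepted if
# they mirror an existing entry. Works identically for IPv4 and IPv6.
state = set()

def outbound(proto, src, sport, dst, dport):
    """Record a connection initiated from the inside."""
    state.add((proto, src, sport, dst, dport))

def inbound_allowed(proto, src, sport, dst, dport):
    """An inbound packet is only accepted if it mirrors a recorded outbound tuple."""
    return (proto, dst, dport, src, sport) in state

outbound("tcp", "2001:db8::10", 50000, "2001:db8:1::1", 443)
print(inbound_allowed("tcp", "2001:db8:1::1", 443, "2001:db8::10", 50000))  # True: a reply
print(inbound_allowed("tcp", "2001:db8:2::9", 443, "2001:db8::10", 50000))  # False: dropped
```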
But as soon as you don't try to apply IPv4 conventions to IPv6, it really clicks:
- RA packets don't carry a default gateway address - the default gateway is always a link-local fe80:: address, namely the actual host that sent the RA
- You can configure hosts via RA not to send packets directly to other hosts with the same prefix (instead sending them through the gateway) by disabling the On-Link flag
- You can use RA and DHCPv6 over any link, not just Ethernet
In theory, we can reserve `169.254.1.1` as default gateway (and default DNS server) for IPv4, to get rid of DHCP protocol. I'm doing so in my embedded project for network connection via USB, because it makes network configuration static.
Just a heads up for anyone confused about 169.254.1.1: 169.254.0.0/16 is the link-local address block for IPv4, but link-local addressing is rarely used in IPv4. OTOH, the fe80::/10 address block on IPv6 is widely known since link-local addresses are mandated by the standard.
If you already know v4 then there's not much to learn. There are only really three differences:
a) You use a /64 from the subnet your upstream assigns you, instead of a /24 from RFC1918.
b) You don't use NAT.
c) You run an RA daemon on the router instead of a DHCP server.
Firewalling is exactly the same as in v4 -- you block inbound connections and permit outbound connections by default. A firewall without NAT is no different to a firewall with NAT (since NAT only helps with address space exhaustion and contributes nothing to securing the network).
One advantage of v6 is that you don't receive constant probes. Any v4 address will see a steady stream of them, but that's not true on v6. (v6 is so big that randomly scanning addresses in the hopes that they're assigned to something that will respond is unviable.)
You'll get v6 just fine if you spend some time using it on a real network.
> Though I'm really at a loss as far as network security and firewalls go. I've been setting up firewalls with NAT for 20 years, but I'm still not sure how it's all going to work with IPv6. In the meantime, I just disable all IPv6 stuff on the firewall machines and try not to worry about it.
Hopefully you've heard this before, and I'm sorry if I'm beating a dead horse, but NAT is not a firewall. It does render hosts behind the NAT not connectable from the Internet by default, but that's because they're unroutable, not because of a security feature.
I.e. there was a bug a while ago that let people send UPnP requests over WAN to your router, which makes your hosts suddenly routable. NAT won't stop that from happening and your hosts are basically internet-accessible. A firewall configured to only allow outbound connections would have stopped that.
So if you consider NAT a routing feature, it works the same it always did. You configure the firewall to only allow outbound connections, unless you have a specific reason to allow inbound connections. I don't actually know if it's less secure. NAT required kind-of targeted attacks to exploit, but the IP space for v6 is large enough I would expect a dramatic drop in probe traffic. There are 3.4 * 10^38 addresses. It's just too large of a space to casually scan.
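Back-of-envelope numbers for why random scanning stops being viable (the probe rate is a made-up round number, only the orders of magnitude matter):

```
# Scanning even one /64 is hopeless, let alone the whole v6 space.
hosts_in_a_64 = 2 ** 64
probes_per_second = 1_000_000          # a generously fast scanner
seconds_per_year = 60 * 60 * 24 * 365
years = hosts_in_a_64 / probes_per_second / seconds_per_year
print(f"{years:,.0f} years to sweep a single /64")   # roughly 585,000 years
```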
- A reminder that Ethernet doesn't do broadcast, only multicast. (And L2 switching is broken by design anyway, but that's a story of hysterical raisins for another day)
- network names have been steadily getting better with mDNS and related tech
- SSDP (the tech underlying UPNP) already covers IPv6, there's no need to add new one (your bigger possible issue is incomplete implementation on v6 side on CPEs)
- home router/CPE vendors are converging on "standard v6 default firewall" ruleset (it's actually something I encountered in random bought AP/router combos from random electronics store, not something techie-oriented). It establishes basic filtering that resembles what people think they get from NAT, and couples well with UPNP's support for IPv6. This also includes proper handling of ICMPv6
- subnetting is a problem, yes. Especially due to SLAAC vs DHCPv6 issues in some OSes.
> And L2 switching is broken by design anyway, but that's a story of hysterical raisins for another day
Well I'm curious. I can't think of any significant way it is "broken by design", so either this is hyperbole or I'm so used to the brokenness I'm not even thinking about it.
The story as I learnt it goes around this way - hopefully on this forum someone with first-hand knowledge could chime in:
1. Ethernet happens, is designed around bus topology with shared medium and everyone talks by filtering out messages for themselves (with half of the addresses for multicast)
2. Digital works on moving ethernet from bus to star topology, design explicitly disallows connecting stars to each other without L3 router
3. Unfortunately, a non-trivial product range ends up based on LAT - essentially serial port over ethernet - and supposedly because of miscommunication LAT is very... raw-ethernet solution. No way to route it sensibly.
4. Suddenly, there's a need for larger L2 segments, except Ethernet has no way to support them (it finally gained one starting around ~2005, by throwing out everything you know about L2 switching)
5. It's too late to add features to ethernet that would make it work in a larger span than a single star, and the possibility of loops bringing the network down exists, as do multicast storms (those weren't fixed).
6. The budget doesn't allow putting in a lot of computing power, so a Z80 gets thrown in. Spanning Tree Protocol gets created in the vague hope of mitigating the curse of large L2 ethernet zones. We get stuck with primitive MAC learning.
7. The genie is out of the bottle, and since you can crap out a too-large ethernet network much cheaper than you can do a proper routed one, the curse continues. Since cheap is king, you often do not even get STP. Large-scale networks fail when interns misconnect cables, multigigabit backbones end up doing 10mbit because STP made an ancient switch in the cleaning closet the root of the tree. Cats and dogs living together, etc.
8. From around ~2005, proposals to fix it properly show up. The solution? Put routing into ethernet, using IS-IS for routing. On the other side, increasingly crazy centralized "decentralized" SDNs also try to set up L2 forwarding to deal with applications that can't deal with real IP subnetting. Somehow passing ethernet over XMPP over TLS (with BGP involved somewhere) is still better than ethernet's MAC learning.
Ethernet switching isn't broken. STP works fine at reasonable scale (as long as you leave it on) and 1980s history isn't relevant now. Obviously routing is better than switching and we can now do routing to the host with affordable "L3 switches", but switching is still usable.
I knew several colleges that had the entire campus on a flat /16 network. Dozens of buildings, 1000's of computers. It worked fine. Well, except for the "no firewall" part (this was mid 90's.)
That was fine as long as the network was thinnet or thicknet, as most universities probably were, because a well-planned network would start at the hub, extend out, and terminate. When networks became more based on 10baseT and you could add devices by just plugging a very cheap hub into a wall socket, and then plugging another hub into that, you could get loops more easily and degraded broadcast quality, and that kills the entire network.
Yes, it was indeed! But the PITA-ness and need for termination meant that once it was planned and implemented, it was rarely monkeyed with for a while.
MIT originally was a single /8 network (a Class A from before CIDR), however they had it subdivided with routers pretty soon, AFAIK.
Cisco had a lot of early customers among universities because a dedicated box ran better than a random unix workstation pulled from other duties (or even sharing them) running RIP and the like.
The very reason we're talking about IPv4 vs IPv6 is because of that 1980s history: people got convinced that IPv4, the supposedly temporary solution, wouldn't need more than an obviously short 32 bits and would be replaced with something better before wide adoption.
Avoid spanning tree like the plague in large datacenter networks as well; at that scale it becomes an impossible black box.
There is a reason EVPN exists, and it is to solve this exact issue by making gateways handle all the logic normally stretched across to the other side of an L2VPN.
Exactly - your L2 Ethernet shouldn't go beyond the immediate connection between the end system and the first L3 router; in DC conditions that should be the ToR... or an on-hypervisor router.
Larger L2 spans should be done only when required, and preferably with things like TRILL/SPB.
> (And L2 switching is broken by design anyway, but that's a story of hysterical raisins for another day)
Absolutely, but in IPv4 the breakage has the effect of some niche applications (like TV streaming to client machines) breaking, whereas in IPv6 it has the effect of the whole network breaking.
The applications broken in v4 are so niche that most people won't notice.
>SSDP (the tech underlying UPNP) already covers IPv6, there's no need to add new one
Yes, but it's still very badly supported. I have not seen this work in any home network yet, be it because of broken OSes, broken applications or broken router software.
>This also includes proper handling of ICMPv6
You're making me hopeful. Back when I was setting things up in 2014, the situation was a minefield of brokenness, sometimes even with UI showing huge warnings about my explicit allow-ICMP-rule I had to add after the default was to block all ICMP.
How switches "handle" multicast is not a new problem — many will treat it as broadcast traffic and flood it across the network segment, leaving clients to work out if they are interested or not. More intelligent switches might perform IGMP snooping to avoid flooding and this will be no more complex with IPv6 than it is with IPv4 today.
Multihoming also isn't new and isn't really IPv6-specific. It might be more likely that you'll have multiple IPv6 prefixes, but the majority of source address selection rules that you are used to in IPv4 will still apply, and you might have already run into these problems in the IPv4 world if you have multiple network interfaces anyway.
Subnetting is probably not easier or harder. The address length doesn't change how subnetting or how routing tables work and I am not really convinced that an IPv6 addressing plan should really be any worse or better than an IPv4 addressing plan. The minimum prefix size of /64 for a network segment is about the only extra consideration there, but if anything, it should be simpler than having to manage globally routable address space and private address space separately given that you can now manage address space as a true single hierarchy.
You're right that SLAAC vs DHCP can add a bit of mental overhead, but for most configurations, configuring DHCP and letting RAs be sent automatically in IPv6 is not much different to configuring a default gateway in DHCP on an IPv4 network.
Finally, as for ICMPv6, it has always been bad behaviour to just outright filter ICMP without consideration for what it will break. The stakes are indeed higher than in IPv4, but it seems worth it if we can eliminate two entirely separate protocols in the process and firewall vendors and admins are just going to have to learn that.
I get there are a lot of cognitive factors involved in why people resist IPv6 but it really isn't as alien as most people think and most of the concerns are easily answered.
> many will treat it as broadcast traffic and flood it across the network segment
and some will silently swallow them until you update their firmware. Broadcasts on the other hand are so common (and required for ipv4 to work) that they are rarely broken.
If multicast is treated as broadcast, stuff works "fine", but because of IGMP snooping and additional "intelligence" the switches often employ, unfortunately, the failure modes I have seen tend to be packet loss rather than overeager packet forwarding.
And with multicast packets dropped in a v4 network, some niche applications will stop working, but with multicasts packets dropped in a v6 network, your network will stop working. Period.
> and you might have already run into these problems in the IPv4 world if you have multiple network interfaces anyway
Of course you have. My point isn't that v6 multi-homing is any different from v4 multi-homing. My point is that multi-homing is rare with v4 but very common and required for v6. So what's often not an issue on anybody's radar in a v4 network is something everybody has to deal with in a v6 network.
> most of the problems can be trivially solved.
absolutely. But they don't have to be solved by staying with v4 and thus inertia is even harder to overcome than it normally is. That was my point.
> Of course you have. My point isn't that v6 multi-homing is any different from v4 multi-homing. My point is that multi-homing is rare with v4 but very common and required for v6. So what's often not an issue on anybody's radar in a v4 network is something everybody has to deal with in a v6 network.
Multi-homing (connecting to multiple different networks) isn't required in IPv6 at all. Having multiple different addresses on the same subnet isn't multi-homing, and the link local address of fe80:: isn't a different network. Operating systems won't even use the link local address to establish a connection unless specifically forced to.
> because while you can remember v4 addresses and v4 address assignments, this is impossible for v6 addresses
A thousand times this (and your other points, great reply thank you) - the ergonomics of using IPv6 at a local scale are atrocious for mere mortals. And you didn't touch on "should I use Stable Privacy or EUI64 for my laptop IP?" and other small cuts and bruises which technologists think everyone should "just know".
All the "but ipv6 is better because... xyz" just don't ring true to me, but I'm not a full time admin.
I still see "Quit remembering addresses - we have DNS!". My consumer equipment all have "192.168.0.1" and "192.168.2.1" type addresses. Relying on my browsers to be able to discover 'cable_modem.dyn' on a local network doesn't work - instructions will just say "go to 192.168.0.1" and put in a password. Good luck trying to get people to go to "[ff00]:0:0" or... whatever the heck you'd have to put in. Having foreign CSRs trying to explain what a square bracket is to people at home trying to set up a new cable modem... way too much headache.
And... there are millions of people that have to do this. There's perhaps tens of thousands of high-level network admins working to route everything through major global networks, but there's hundreds of millions of people that have to deal with and use all the stuff at the end points, and millions of us who serve as defacto "tech support" people for families/friends/neighbors.
People did that because they had no choice - here people are just not opting in to v6 because they can still use v4, which is easier. Very different situation.
You shouldn't remember numeric addresses anyway, and we had reasonable ways to deal with that for decades now. It's really just human unwillingness (ok, and maybe a bit of BSD Sockets shitshow, but as much as I hate them for keeping networking broken it's not their fault this time).
>And you didn't touch on "should I use Stable Privacy or EUI64 for my laptop IP?"
yes. because this has by now been solved by using a non-outdated OS. The defaults have become good enough for this not to be an issue any more, at least in my experience.
I'm literally on Arch using NetworkManager; when creating a new connection it defaults to Stable Privacy in the dropdown, but EUI64 is listed first in the dropdown itself. So, since you didn't actually state which one to use, now what? Point being: don't be condescending claiming "outdated OS", IPv6 is a minefield of footguns and there are many of them just like this choice.
The sane default is Stable Privacy. It's a good thing that NetworkManager agrees if it has offered that to you as the default. Ultimately though any confusion that arises from how that option is presented to the user is really a bug in NetworkManager and not in IPv6. The footgun here is that NetworkManager allows you to change it so easily without offering any explanation as to what changing it will do.
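For anyone wondering what the choice actually changes: EUI-64 derives the interface identifier straight from the MAC address, so the hardware address ends up visible (and trackable) in every global address, which is why stable-privacy is the saner default. A quick sketch with a documentation prefix and a made-up MAC:

```
import ipaddress

def eui64_address(prefix, mac):
    """Derive the interface identifier the EUI-64/SLAAC way:
    split the MAC, insert ff:fe in the middle, flip the universal/local bit."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    iid = bytes([b[0] ^ 0x02, b[1], b[2], 0xFF, 0xFE, b[3], b[4], b[5]])
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, "big"))

# documentation prefix and a made-up MAC, purely illustrative
print(eui64_address("2001:db8:0:1::/64", "52:54:00:12:34:56"))
# -> 2001:db8:0:1:5054:ff:fe12:3456  (the MAC is visible in the address,
#    which is exactly the leak stable-privacy addresses avoid)
```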
Do ISPs really give out /128s? That's, erm, monstrous! Mine gives a /60 but their router doesn't have any way to use it, which is a bit shit. Still, 10 gigs symmetric...
Rogers in Canada gives out a /64 by default, and a /56 if you send a hint.
Bell, on the other hand, gives a big fat /nothing and doesn't support IPv6. I don't understand how they can roll out 1.5Gbit FTTH but refuse to support IPv6. Their mobile network uses it, of course, so it's truly perplexing.
A /128 is a single address, and given the state of v6 NAT that means it can't be shared with other machines in your network, which means only your router will be able to access the v6 internet unless the router acts as a proxy and you use it.
No it means my router is not routing IPv6 traffic. It doesn't need to though. My router and all of my computers each have /128 addresses. No issues. 19/20 on ipv6-test.com.
I got a /48, and I think it will take me a while to put all those addresses in use. I'm using 9-10 now, so while I've certainly started down the path, the end is not in sight just yet.
You need a /48 (or /56) if you want to do your own subnetting and keep using SLAAC (which is the default way for assigning v6 addresses and detecting address conflicts).
A /64 is not enough for that. You can still create your own subnets, but you will be on your own with address assignment
BellAliant in eastern Canada, the only provider of residential layer 1 fiber in my area, still isn't even assigning a /128 or /64, let alone proper delegation.
Have you heard of unique local addresses? Most of the LAN "problems" you describe are solved by using ULAs. Yes, even name resolution - the hosts file becomes useful again. PCP (the 2013 replacement for NAT-PMP) supports IPv6 port opening; UPnP has supported it since IGDv2 (2015). Any ISP that does not do IPv6 prefix delegation ("your ISP only gives you a single v6 address"), might as well stop claiming IPv6 support.
I am not sure why you think multihoming is a bad thing. That is one of the major things that in my experience makes IPv6 LAN configuration a lot more useful and robust than IPv4 with private addressing. It sounds like you misunderstood some basic IPv6 assumptions - configuring an IPv6 LAN is not that much more difficult than an IPv4 one. I would never go back to IPv4 for my LAN.
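To make the ULA point concrete, generating a prefix is trivial; a sketch of the RFC 4193 idea (the RFC's algorithm mixes a timestamp and a MAC into the 40-bit global ID, plain random bytes are close enough for illustration):

```
import ipaddress, os

# ULA prefix: fd00::/8 plus 40 random-ish bits gives a /48 you can subnet
# locally without asking anyone.
global_id = os.urandom(5)
prefix = ipaddress.IPv6Network((0xFD << 120 | int.from_bytes(global_id, "big") << 80, 48))
print(prefix)                                    # e.g. fd3c:a1b2:7e00::/48
print(list(prefix.subnets(new_prefix=64))[1])    # one of 65,536 /64 subnets inside it
```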
Thank you. This summarizes my headaches I've had while trying to implement a dual stack in my home network.
My current curiosity is why my DDNS service only allows IPv6 or IPv4 records for a single domain. Why can't I have a dynamic IPv4 record for the one IPv4 address I have and then make many dynamic IPv6 records as subdomains?
> - broadcasts for address discovery have been replaced by multicast which is much harder for switches to handle correctly
Multicast frames get flooded to all ports, just like broadcast. If the switch doesn't do any IGMP/MLD snooping, multicast and broadcast are the exact same thing.
> - address discovery is now mostly handled via SLAAC which is different from how it worked via DHCP and also doesn't universally allow setting name servers which then will still require DHCP to actually get a working network (if you run v6 only), so now you have two daemons running when in v4 you only needed one.
SLAAC now has support for sending DNS servers as information in the router advertisement (and all major operating systems support it). I do not run a DHCPv6 server on my local network and all my systems get my local DNS information without issues.
> - hosts are multi-homed by default and rely heavily on multi-homedness which might invalidate some assumptions you had when configuring hosts.
This was also the case in IPv4, nothing new here.
> - for a network to be meaningfully useable, you need working name resolution because while you can remember v4 addresses and v4 address assignments, this is impossible for v6 addresses
Even in IPv4 no one tends to remember IPs; we have solutions for that, like systems automatically announcing themselves on the local network using mDNS.
> - and in a similar vein: Subnetting is harder because the addresses are much less memorable. If you want to subnet a 10.- v4 network, in many cases, you can do this in very memorable full-byte chunks.
There is no subnetting. Just give the local network a /64.
> - also subnetting: due to many ISPs still doing /64 allocations to their customers
If you are a home user with a single flat network, that is all you need. If you are a power user and need multiple networks, your ISP probably has a way to do a prefix delegation request that is larger.
> Worse, some ISPs only do a /128 assignment (one address)
Name and shame them... the /128 should only be for the external customer gateway, and is not strictly necessary. Most ISPs allow you to ask for an IA_NA for a single address, and an IA_PD for a prefix delegation.
> - v6 relies on ICMP much more heavily but this fact has not propagated to default firewall settings, so in many default "let me turn on the firewall" configs, your v6 network will break in mysterious ways.
v4 also breaks in mysterious ways when you just blindly firewall ICMPv4. It's the reason we have so many dumb work-arounds for MTU issues because "ahhhhh, firewall all the things"
> there's no widely-supported equivalent to UPNP or NAT-PMP
> So while v6 is much easier to handle on a global scale, it's at the same time much harder to handle at your local site
I completely disagree. IPv6 is as simple to deploy as IPv4, and in fact because everything now has a globally unique IP address it makes routing so much simpler.
Because it wasn't designed as a drop-in replacement, using IPv6 necessarily means using IPv4+IPv6 for a time. That time is now twenty years and counting.
There should be lots of implementations of both protocols by now - could we compare them for length and complexity to determine some sort of objective measure of the difference?
I tried to set up IPv6 a while ago, and it looked simple at first. After configuring the router - just flipping "enable IPv6" in the GUI - my machine got 10 IPv6 addresses (why this many? I don't know). Cool.
I then set up the firewall to expose one of these addresses and I could ssh to my machine from the outside world. A win!
Unfortunately the win was short lived. Eventually I lost the ability to ssh in. It turned out that the 10 IPv6 addresses were replaced by a different bunch of 10 addresses. So I would have to reconfigure the firewall again. I decided it was too much work for me and disabled IPv6. Maybe some other time.
At least in local network environments, using mdns (avahi) to discover other hosts is preferable and possibly more user friendly for less knowledgeable users.
The real problem is standing up DNS on local or enclave networks with zero effort. Right now setting up DNS is a pain, configuring DNS is a pain, and mDNS doesn't scale and is slow.
My issue with IPv6 is that its designers assume that everyone with an IPv6 network will get static IPv6 addresses.
However, it didn’t turn out that way in the real world. Every time my router resets, all of the IPv6 addresses in my home network change. So, I don’t use IPv6 to connect among computers in my home network; since I also get one IPv4 address from my ISP, I simply use IPv4 NAT so that the addresses in my home network are easily remembered and do not change.
The reason I don’t use IPv6 and 6:6 NAT is because the IPv6 designers feel this makes networking too complicated, never mind that NAT is a solved problem, so 6:6 NAT support just really isn’t there.
Another annoyance I have with IPv6 is that it needs to have more than one localhost IP address, considering that IPv4 has a 24-bit space for localhost. A large number of localhost addresses is useful for network regression tests (e.g. if we have one authoritative DNS server on 127.10.0.1 and one which isn’t responding on 127.10.0.2, does our recursive DNS server on 127.12.0.1 correctly handle an upstream DNS server being down? Nice to be able to run the test using only localhost IPs; also nice to be able to change the IPs each test so we don’t have to wait for the kernel to release TCP sockets for a given IP + port).
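(On Linux at least, the whole 127.0.0.0/8 is usable out of the box, which is what makes this pattern so convenient; a tiny illustration, with arbitrary high ports since real port 53 needs privileges:)

```
import socket

# Two sockets on two different loopback addresses, no interface configuration
# needed on Linux; the addresses and ports are arbitrary examples.
auth = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
auth.bind(("127.10.0.1", 5300))     # stand-in for the authoritative server
dead = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dead.bind(("127.10.0.2", 5300))     # stand-in for the server that never answers
print(auth.getsockname(), dead.getsockname())
```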
For the record, I have gone to a lot of effort to give my open source networking software IPv6 support.
> The reason I don’t use IPv6 and 6:6 NAT is because the IPv6 designers feel this makes networking too complicated, never mind that NAT is a solved problem
The problems with NAT continue to grow. A whole swath of IPv4 addresses (100.64.0.0/10) was reserved to allow telcos to do CG-NAT. Because folks often used the usual RFC 1918 private ranges at home, ISPs couldn't necessarily assign those addresses to client equipment because there was the potential for the same range (e.g., 10/8) to be on the "inside" of the user's router/CPE as on the "outside".
I'm in a similar space. I can't see why I would ever need to understand IPv6. There are all sorts of theoretical benefits, but those will never be available to me as a residential user. For example, my ISP gives me a single IPv6 address. There is no possible reason for me to bother using it, as there will be no advantage to me.
The way I look at it, IPv6 does the following for me
1. Doubles the number of firewall rules
2. Doubles the attack surface
3. Doubles the header size in each packet; with no change in MTU this means less space for data.
4. Doubles the number of routes I need to worry about
They should be giving you at least a /64 if not a /60. That way, you can have multiple subnets that are publicly addressable. The ISP should be informed they're out of best practice/RFC.
And while I get your "double" theme, most of them are non-points. Header size? Use adblock. Routes? Oh no, a residential router will now have four instead of two!
Yes, it's another addressing scheme and I agree, the benefits for many people's usage are low. But it isn't so bad.
I don't even want them talking out to the internet by default, which is why I have a separate subnet with a different set of firewall rules that only allows whitelisted outbound connections.
> So, I don’t use IPv6 to connect among computers in my home network; since I also get one IPv4 address from my ISP, I simply use IPv4 NAT so that the addresses in my home network are easily remembered and do not change.
Why do you need NAT at all? You can just use IPv4 to communicate among hosts in the network, and use IPv6 for them to communicate with the world. Nothing about IPv6 forbids the existence of IPv4.
> Nothing about IPv6 forbids the existence of IPv4.
Which is now the reality, but at the time IPv6 was created IPv4 was planned to be killed permanently.
Which, in hindsight, turned out to be impossible; and the way IPv6 was designed (without even a semblance of a "private" network, even just two IPv6 addresses*) actually raises the question of whether IPv6 was really that well-designed.
* Okay, link-local addresses do exist, but they're not amenable or even map to how IPv4-style private networks work.
Not link local addresses - there’s a whole space for ‘Unique Local Addresses’ [1]. It’s basically analogous to the private IPv4 space (apart from the fact that you need a separate globally routable address to access the internet from, but that’s not hard).
The general plan was, and is still, to stop using v4 once it stops being useful, in much the same way that people stopped using IPX when it stopped being useful. (By which I mean: people still use IPX today, but in general you don't need to think about it.)
You can do private networks on v6; there's a massive range allocated for them (fc00::/7).
In general, v6 is designed just fine. Most of the complaints you see are from people that either don't know what v6 can do ("why didn't they just implement <thing that v6 already does>?") or don't realize that what they want is impossible ("why not just ignore the pigeonhole principle?").
In the real world IPv6 for home networks is often so frustrating that you need to go back to IPv4. My ISP forces you through a CGNAT for IPv4, but only when you have IPv6. On v4 only you get your own IP and that's it. On IPv6 the CGNAT is also overloaded and unstable, the network gives you a new prefix once every few days, and you get worse routes. Additionally the consumer-level hardware is a lot buggier on v6. It will probably change but right now it's painful.
That's why I said in the real world and not in theory. There are only a handful of ISPs I can pick from and they all have very broken IPv6 support due to bad CGNATs. Yes in theory it should work, but in practice on consumer grade internet you're better off using IPv4 only here.
If the CGNAT is bad, that's a problem with the CGNAT. If your ISP won't turn off CGNAT without turning off v6 at the same time, that's your ISP's fault.
v6 works completely fine in both cases. Your problems aren't with v6.
They're not claiming IPv6 is bad. They're claiming that they can get good service based on IPv4, or bad service based on IPv6. Of course it's the ISP's fault that the IPv6 service is bad, but, since ISPs usually hold local monopolies, overall it means that they are forced to use the IPv4 network - unless they're willing to move to a different area where there are ISPs offering good IPv6 service as well.
I think the reason this is happening is that the IPv6 infrastructure on most ISPs here was built for mobile phones. The few customers who are asking for IPv6 are just added to what was built for the phones which has completely different goals and requirements. Very few people run servers on their phones, do P2P connections or similar.
They said that v6 for home networks was frustrating, but it's not. The frustration is coming from the terrible v4 CGNAT, and that has nothing to do with v6 and everything to do with v4 being insufficient.
It's amazing how hard people will misattribute blame to v6 for the very problems it fixes.
I have an IPv6/v4 dual-stack home network; it 'just works', the amount of configuration I had to do is roughly zero, and I can reach IPv6 hosts along with v4.
You'll get a few IPv6 addresses, two of which may be local (link-local and routable unique local). The global one will be randomized too within your block, for privacy.
Now with that in mind, the implementations do all kinds of funny things that don’t seem to meet spec when it comes to router advertisement (the dhcp replacement) and routing. Use the wrong kind of address for the gateway and nothing works for instance.
Use link local IPv6 addresses internally (or unique local if you need to). That's what they are for. You can also make them very short, like fe80::1, fe80::2, etc. Your router won't forward anything in fe80::/10 to the Internet (or any other network).
That could work really great for connecting among hosts in my network (but IPv4 and an appropriate 172.x.x.x subnet works just fine, with the bonus that one IP has Internet access while remaining unchanged), and is something I may try if I ever get back to re-configuring my home network (so take an upvote from me), but it still doesn't solve the pesky "one IP for localhost" issue.
Sure, I could give localhost a lot more addresses in IPv6 with the appropriate `ipconfig` or `ip` command, but that doesn’t work with the testing Docker container whose Dockerfile I share with my users (since their Docker container will have only one IPv6 address; also, you can’t run `ipconfig`/`ip` type stuff in a Docker container).
I'm confused by this too. I use the default mDNS/DNS-SD and access my hosts with the .local TLD. It's not as robust as real DNS (looking at you Android) but works fine on the Windows and Linux hosts.
How did you make it work with Windows hosts? Apple Bonjour? Windows out of the box works fine with LLMNR, but mDNS/DNS-SD is a problem and is available only for "modern" apps, it is not integrated with system resolver.
My use case is accessing Samba shares on a Linux server from a Windows desktop, and it works with Windows Explorer and even older apps like the original Windows Media Player with no configuration or installation of any additional software.
SMB has its own discovery protocol, that powered the "Network Neighbourhood" since Windows 95 - and it is part of the deprecated SMB1, so modern Samba has a special mode, where the discovery part is enabled, but the rest is disabled. Additionally, Windows can discover SMB shares via WS-Discovery. Samba itself does not support WSD, but there are third-party utilities like wsdd, that will do it instead. Some linux-based NAS-es, like those from Synology, also ship with WSD support enabled out of the box.
My experience with Windows 10 and mDNS/DNS-SD mirrors that from the linked article. As a result, I have now a real DNS domain, with devices with their own A records :/
It's not that. I confirmed with Wireshark mDNS queries are being sent from the Windows side and answered from Linux using the Avahi service. Furthermore, web hosting I have on that Linux server also works in Firefox just by visiting the server's host name.
DNS issues have, more often than not, caused networking slow downs for me. Running a recursive DNS server on a home network is quite a bit slower than using a public DNS server on a high speed network; the slowdown with a local cache is less, but still there. Just directly using 8.8.8.8/8.8.4.4 or 9.9.9.9 or 1.1.1.1 or 4.2.2.1 is best (faster, more reliable) in my experience: Fewer moving parts. There are significant privacy and security issues with using DDNS addresses which can be resolved by public DNS servers.
For the record, I have written a DNS server from scratch. Three times, actually (try 1, which is still the authoritative DNS server I use for my domains, try 2 which is a tiny caching DNS server, and try 3 -- which, yes, reuses code from try 2 -- is a very flexible DNS server which uses Lua for configuration).
Your external DNS server is quicker than a local cache? My local cache adds less than 1 millisecond latency to an uncached lookup, and answers queries for all LAN computers in less than 1 millisecond as well.
My DNS server is pretty fast under ideal circumstances (under 0.07ms per reply using 2000 era hardware as per https://maradns.samiam.org/speed.comparison.html ). I’m sure you’re not getting 1ms in less-than-ideal circumstances (router overloaded and dropping packets, which sometimes happens on my home network), where that extra DNS server starts to really slow things down.
Ya my network never drops packets, at least for congestion reasons. Seems like congestion will affect external servers at least as much as internal ones, though.
(Access to my DNS server is not routed on my LAN, it's a flat network.)
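If anyone wants to sanity-check their own setup, here's a crude measurement against whatever resolver the OS is configured to use (getaddrinfo adds its own overhead and numbers vary wildly, so treat it as a ballpark only):

```
import socket, time

for attempt in ("cold-ish", "warm"):
    t0 = time.monotonic()
    socket.getaddrinfo("example.com", 443)     # any name you don't query often
    print(f"{attempt}: {(time.monotonic() - t0) * 1000:.1f} ms")
```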
You aren't really supposed to be using globally routeable addresses like that.
There are reserved prefixes specifically for local networks that you can allocate statically or through DHCPv6 for exactly your use case.
To me the most important thing about IPv6 is how it enables reliable peer to peer networks. IPv4 is the bane of p2p networks' existence, since you have to do so much extra shit to make it work (public address discovery, NAT hole punching, relaying connections through some machine with an open port).
With IPv6 you can just stick your address on a DHT and peers can connect to you 100% of the time, no matter what.
The only thing that sucks is that you can't count on an internet-connected device having IPv6 yet.
Peers connect to you by opening a connection to the advertised IP(s) and port. Bots connect to you by opening a connection to the advertised IP(s) and port. How do you tell which is which?
With hole punching, at least you have some amount of mutual recognition by using the same external server, and you get some amount of DoS protection from the server itself (though of course the server will likely support many more connections than your local system).
So in the end, aren't you more secure using a hole punch method for direct connections over the internet for P2P communication, even on IPv6?
> So in the end, aren't you more secure using a hole punch method for direct connections over the internet for P2P communication, even on IPv6?
No?
It sounds like you're reinventing authentication, badly. If you want to control which clients are permitted to access a service, we have well-established ways of doing that. Dynamically messing around with the network and "hole-punching" is not one of them (unless you broaden that to mean VPNs, but if you want a VPN, use a VPN!). If you don't want anyone on the internet to be able to SYN/ACK to a TCP service you put on the internet, don't put it on the internet.
Also, insert standard soapbox speech here about how the contextless phrase "more secure" is meaningless. More secure against what? What's the threat or risk you're trying to control?
First of all, this wasn't about a service on the internet, but a P2P network. I want to download and upload data over BitTorrent, or to have conversations over TeamSpeak, but that doesn't mean I want to manage my PC like a public server.
Having a public server on the path, which is what hole-punching does, helps with this, especially in the area of DDoS, since attackers first have to fool the hole-punch server before attacking any specific peer directly.
If one peer is only allowed to talk to another peer via a centralised "hole-punching" server, it isn't p2p.
There's nothing wrong with that topology, but the very original point was about how sometimes you want p2p and IPv6 helps enormously with this. If you think p2p topologies in general are "insecure" because the peers need to be directly reachable on the internet, then that's a different argument.
If the vast majority of traffic flows directly between peers, with only an initial handshake requiring an external server, the system is somewhere between P2P and Client/Server. Depending on your goals this may be perfectly ok (e.g. if you want P2P connectivity for routing efficiency and throughput) or completely defeat the purpose (e.g. if you want P2P connectivity for censorship resistance).
>Empirically, some ISPs route IPv4 more efficiently and some route IPv6 more efficiently.
It is so refreshing to see this written down somewhere. All of the academic papers I've seen comparing the empirical routing performance between IPv4 and IPv6 show a negligible performance difference. However, in the two states I've lived in where I have looked at IPv6 vs IPv4 performance, I see consistently higher pings with IPv6. Traceroute reveals that each individual hop adds about the same amount of latency, but there are ~15% more hops for IPv6.
If I play a competitive game then I don't want to be adding 10 ms of latency unnecessarily. So I just disable IPv6 on my gaming rig? C'mon. We can do better.
You can thank large backbone providers for this. IPv6 works just fine over the same L2 link as IPv4, so there would never need to be any more hops than IPv4 (sometimes they do need to upgrade equipment to support IPv6 so they may be DIFFERENT hops over DIFFERENT L2 links, but they could also move their IPv4 traffic to that new equipment).
What happens is that when the large backbone providers have disputes, they de-peer with each other on IPv6, which causes visible rerouting. They can't "punish" the other with IPv4 de-peering, since that would make their own customers angry.
I firmly believe IPv6 adoption is hindered by one little reason more than any other: I can't easily remember an IPv6 address. It's too big, it's in hex, and that adds friction to everything. With v4 I can just look at an IP, remember it for a second and type it into whatever config/program/etc I need. I can tell a colleague across the room "that IP is: ...". I know there's the condensed address format, but that just makes communicating it harder. Typo `::` as `:`? Now you have a problem that's not easy to see.
It's a small friction, but it adds up quickly and makes working with IPv6 feel sluggish and painful.
(A common response when I say this is "isn't that what DNS is for?" and yes, it is. That's great once you have DNS and reverse DNS working, but "it's always DNS" is a meme for a reason).
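To make the condensed-format complaint concrete, the same address has several valid spellings, and a colon dropped from `::` is at best a parse error; a quick stdlib illustration (addresses use the documentation prefix):

```
import ipaddress

a = ipaddress.ip_address("2001:db8::1")
print(a.exploded)                                   # 2001:0db8:0000:0000:0000:0000:0000:0001
print(ipaddress.ip_address("2001:0db8::0001") == a) # True: several spellings, one address
try:
    ipaddress.ip_address("2001:db8:1")              # the ':' instead of '::' typo
except ValueError as err:
    print("typo caught:", err)
```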
ISPs and networks love clients being behind NAT, so they can't directly host to the outside world and can't rely on a static address. IPv4 is a scarce resource, therefore it's valuable. Various people don't want that to go away. The only way we're going to get IPv6 everywhere is when the feds start requiring it.
The internet being a world resource should be of significant concern to every country. The limited number of IPv4 resources is a weakness of the U.S. and other western countries. As is the lack of "cyber security".
Note that IPv6 doesn't necessarily have static addresses either. Also note that IPv6 support on the client side is gradually increasing. With current trends we'll reach the 100% in the next decade. At a certain point, some services will go IPv6 only.
Yes, but it makes static addressing far more affordable, and justifiable. Especially in city board meetings where an issue like that is raised.
IPv6 is effectively supported everywhere but the ISP. Again, it's intentional; the only reason they'll change is federal requirements. It's the only reason the ISP here does 25/4: because of broadband requirements. They didn't change anything to do it either, they just flipped the switch. Cost them maybe a couple thousand in labor.
> Ipv4 is a scarce resource, therefore it's valuable.
I don't know how valid this argument is in this context. Most ISP clients nowadays are connected 24/7, so they are using an IPv4 anyway. They might as well keep the same IP over a larger period of time.
Vodafone Cable and Telekom VDSL, both in Germany, only change your IP if they have to. You'll usually have the same one for many months.
Also, the NAT you're referring to is the one which runs on the customer's hardware, and usually the customer has the option to set up port forwarding.
In many countries, it’s common to have hundreds, even thousands of customers behind a single carrier-level CGNAT. This obviously prevents a lot of functionality from working.
In Germany, we’ve got enough IPv4s that every customer can have their own one, while e.g. in Asia CGNAT and IPv6 are long common.
There are sadly a ton of ISPs that do CGNAT on a v4-only service. Smaller or newer ISPs like WISPs often do it, or ISPs for apartment blocks or student accommodation, but it's hardly limited to those.
Germany in particular has a couple of large ISPs that give you a choice of either their new platform (DS-lite = v6 + CGNATed v4) or their old platform (v4-only). That's a choice made by those ISPs... and unfortunately it's one that causes a lot of people on those ISPs to end up blaming v6 for problems caused by CGNAT.
I have had a cable modem and now fiber for 20 years. My IP address only ever changed after a power outage that lasted multiple hours. I have probably had about 10 IP address over those 20 years.
My sister and parents are similar. None of our IP addresses have changed in the last 2 years. I know because all of our houses are connected for remote backups and file sharing and I limit access by IP address. I have not had to update my firewall rules in 2 years.
I'm curious when we'll declare some sort of "idea bankruptcy" on IPv6, develop a new version (IPv7?) that has a "ease of migration from IPv4" as a stated goal, and deploy/implement that.
Knowing the historical transition issues collected over the past 20 years, we could, as an industry and society, design a next generation and provide a reasonable rollout target of, say, 2030, and move towards that.
Since 1998/99, there's been an explosion of networking, and large cultural shifts (billions of mobile devices, IoT, etc) which were not around when all this was specced out. No technology adopted IPv6 as a default during that time, and I dare say most things (services, devices, etc) aren't even tested against IPv6.
After 20+ years of this, I see IPv6 as a failure, even if there is 30-50% adoption (or perhaps because of those figures).
> I'm curious when we'll declare some sort of "idea bankruptcy" on IPv6, develop a new version (IPv7?) that has "ease of migration from IPv4" as a stated goal, and deploy/implement that.
We need more addresses. That's the primary problem of IPv4 right now.
So if all the IPv4 code is written to handle 32-bit addresses, how do you create an addressing system that carries more than 32 bits of data but fits within a 32-bit data structure?
AFAICT, code updates will need to occur on every device that needs to talk to the new address scheme. So what's the difference between updating every device to handle IPv6 versus updating every device to handle this hypothetical IPv7?
> So if all the IPv4 code is written to handle 32-bit addresses, how do you create an addressing system that carries more than 32 bits of data but fits within a 32-bit data structure?
The IETF also faced that question for AS numbers (which were originally only 16 bits, about 65,000 values!). Sure enough, there was a reserved value in the spec, which they used to signal an extended format. Nowadays, an old-style BGP speaker sees a single placeholder number, AS23456, while the actual 32-bit AS number travels in a newer attribute.
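For the curious, that mechanism is defined in RFC 6793: speakers negotiate a 4-octet-AS capability, and when a new speaker talks to a legacy 2-byte-only peer it substitutes the reserved placeholder AS23456 (AS_TRANS) in the classic field, while the real number rides in the AS4_PATH attribute. A minimal sketch of just the substitution step (the function name is mine, purely for illustration):

    AS_TRANS = 23456  # reserved 16-bit placeholder ASN (RFC 6793)

    def asn_for_legacy_peer(asn: int) -> int:
        # Value to put in the classic 16-bit AS field when the peer has not
        # negotiated the 4-octet-AS capability; the real 32-bit ASN is carried
        # separately in the AS4_PATH attribute.
        if asn <= 0xFFFF:
            return asn       # fits in the old field, send unchanged
        return AS_TRANS      # substitute the placeholder

    print(asn_for_legacy_peer(64512))   # 64512
    print(asn_for_legacy_peer(396982))  # 23456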
> The long term goal of the TUBA proposal involves transition to a worldwide Internet which operates much as the current Internet, but with CLNP replacing IP and with NSAP addresses replacing IP addresses.
[…]
In §3 Migration:
> Updated Internet hosts talk to old Internet hosts using the current Internet suite unchanged. Updated Internet hosts talk to other updated Internet hosts using (TCP or UDP over) CLNP. This implies that updated Internet hosts must be able to send either old-style packets (using IP), or new style packet (using CLNP). Which to send is determined via the normal name-to-address lookup.
So you're replacing IPv4 with something that is not-IPv4 on every router and every host. During the transition period everyone will have IPv4 and not-IPv4 addresses.
How is not-IPv4 being CLNP/NSAP any different that not-IPv4 being IPv6? What am I missing?
In §6 on DNS:
> TUBA requires that a new DNS resource record entry type ("long-address") be defined, to store longer Internet (i.e., NSAP) addresses.
The IPv6 format is just too damn confusing and too much mental overhead.
Prefixing all existing IPv4 addresses with 0.0 opens up another 64k copies of the 4-billion address space of a.b.c.d. This would have been far easier for everyone (not just a few admins, but everyone that ever has to deal with an IP address) to understand and deal with. It still could be, with the aforementioned yet-not-developed-nor-proposed IPv7.
Sorry, that is a misinformed view. Your proposal has exactly the same problem IPv6 has - all network hardware knows the format of an IPv4 header, and you can’t fit larger addresses in it without changing all network hardware and software. So large swathes of the internet would be inaccessible until all those routers, firewalls, middle boxes, cell towers, etc. are replaced, which would take another 20 years! Whereas it’s mostly been done for IPv6.
This gets repeated over and over, but the issue is that it's an impossible task.
An IPv4-only host cannot ever directly communicate with an IPv6/IPv7/... host.
The issue is simply that you cannot fit more than 32 bits of information into 32 bits.
There are plenty of migration technologies that exist to integrate IPv4 and IPv6 (NAT64, tunneling IPv6 over IPv4, etc.).
And anything like IPv7/IPv6-light will end up with the same issues and solutions you are facing today with IPv6.
First, we have already extended the IPv4 address space by using +PORT in many cases (i.e., running through CGNAT). So if I want to talk to another IPv4 host, we focus on IPv4+PORT, and the CGNATs translate that to private IPv4 addresses on their local networks.
Plenty of good migration options exist(ed) that allow each host to be single-stack and still talk to the others while upgrading at different rates. The basic approach was a simpler IPv6 that didn't shuffle all the concepts around, with an IPv4 range embedded in it.
Routing then just proceeds as normal, but if an IPv6 host is trying to talk to an IPv4 host and the next hop in the route is IPv4, the router drops the extra IPv6 bits (that would likely have been the hop before the server).
Conversely, when an IPv4 host is talking to an IPv6-only host, the traffic routes back, and when the next link in the route is IPv6-only you rehydrate the IPv4 address with the IPv6 "IPv4 prefix". The IPv4 network is talking to you using IPv4+PORT; the port is basically how we ALREADY get extra address space with IPv4 and CGNAT, but to allow the transition you now map that inbound IPv4+PORT to an IPv6 address (being smart, you could even make that mapping predictable).
Other little tweaks make this better, but the key is that I, as an end user, can go IPv6-only on my network (ideally in a more IPv4-like style, without the complexity).
Anyway, some networks are already converging / heading this way with things like 464XLAT or whatever, allowing devices to basically see an IPv6-only world. But you are still dragging along all this insane IPv6 complexity.
You're essentially describing NAT64, a thing which exists in v6 already. I use it on my desktop, which has no v4 address. You're arguing for something we already have.
v6 isn't really very complex. In fact it's generally simpler than v4, especially when you run out of v4 address space and NAT gets involved.
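For reference, the address side of NAT64 is simple: RFC 6052 defines the well-known prefix 64:ff9b::/96, and the 32-bit IPv4 address just occupies the low-order bits. A quick sketch using Python's ipaddress module:

    import ipaddress

    WKP = ipaddress.IPv6Address("64:ff9b::")  # well-known NAT64 prefix, /96 (RFC 6052)

    def to_nat64(v4: str) -> ipaddress.IPv6Address:
        # embed the 32-bit IPv4 address in the low 32 bits of the /96 prefix
        return ipaddress.IPv6Address(int(WKP) | int(ipaddress.IPv4Address(v4)))

    def from_nat64(v6: str) -> ipaddress.IPv4Address:
        # recover the embedded IPv4 address from the low 32 bits
        return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

    print(to_nat64("192.0.2.1"))            # 64:ff9b::c000:201
    print(from_nat64("64:ff9b::c000:201"))  # 192.0.2.1

The stateful translator then keeps a per-flow table mapping its own IPv4+port to the client's IPv6 address, which is essentially the IPv4+PORT bookkeeping described above.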
> First, we have already extended the IPv4 address space by using +PORT in many cases (i.e., running through CGNAT). So if I want to talk to another IPv4 host, we focus on IPv4+PORT, and the CGNATs translate that to private IPv4 addresses on their local networks.
So how does a VPS / cloud provider use the +PORT to give themselves and their customers more publicly-available addresses to assign to virtual machines?
> Plenty of good migration options exist(ed) that allow each host to be single-stack and still talk to the others while upgrading at different rates.
This was brought up in another part of the thread. I'm copy-pasting my reply:
---
From the RFC (emphasis added):
> The long term goal of the TUBA proposal involves transition to a worldwide Internet which operates much as the current Internet, but with CLNP replacing IP and with NSAP addresses replacing IP addresses.
[…]
In §3 Migration:
> Updated Internet hosts talk to old Internet hosts using the current Internet suite unchanged. Updated Internet hosts talk to other updated Internet hosts using (TCP or UDP over) CLNP. This implies that updated Internet hosts must be able to send either old-style packets (using IP), or new style packet (using CLNP). Which to send is determined via the normal name-to-address lookup.
So you're replacing IPv4 with something that is not-IPv4 on every router and every host. During the transition period everyone will have IPv4 and not-IPv4 addresses.
How is not-IPv4 being CLNP/NSAP any different that not-IPv4 being IPv6? What am I missing?
In §6 on DNS:
> TUBA requires that a new DNS resource record entry type ("long-address") be defined, to store longer Internet (i.e., NSAP) addresses.
In the case of BGP it was possible since it's not an end-to-end communication protocol.
How exactly does TUBA solve the problem where an IPv4-only host wants to communicate with a non-IPv4 host?
This proposal solves none of the issues we are facing with IPv6.
The RFC suggests some of the same transitional technologies that have been at issue with IPv6, past and present: tunneling IPv6 over IPv4 (6in4, Teredo, etc.), dual stack, NAT64 and other mappings, AAAA-style records in DNS.
But it doesn't get around the fact that you cannot fit more than 32 bits of information into a 32-bit address.
Also, in the case of BGP it was possible because it is a protocol with a strict process around its use in the default-free zone, run by a community of peers who all benefit from being interconnected and able to exchange routes.
It is far easier to convince a couple of thousand organizations to modify their highly specialized infrastructure than to convince end users and manufacturers who just want to use or create widgets that do what is required.
RFC1347 (TUBA) is architecturally not very different from a Dual Stack IPv6 setup with 6rd and IPv4 (CG)NAT, except that you're stuck with the fragmented IPv4 routing table forever since the extended (CLNP) packets are routed over the original IPv4 network. And of course it's missing various other improvements made in IPv6 such as the higher baseline MTU of 1280 bytes (vs. the IPv4 baseline MTU of 576 bytes minus the CLNP headers), the removal of on-path IP fragmentation, the streamlined IP header, etc. Implementing TUBA would still require changes to all existing network applications and many protocols to work with extended addresses, so you might as well just add IPv6 support instead.
This exists for IPv6 and would need to exist for any alternative proposal.
And there are certain deployments of this in action, where hosts only have IPv6 and they only receive IPv4 connectivity via NAT mapping at the edge.
One issue with it, however, is that it doesn't address the exhaustion problem, since you would still need one IPv4 address for each IPv6 address you want to map to.
I think folks are trying to break apple's stranglehold / monopoly so that other app stores can be offered on iOS. This should help address these types of issues so more devs can get onto iOS more easily without having to meet Apple's level of requirements.
> This should help address these types of issues so more devs can get onto iOS more easily without having to meet Apple's level of requirements.
Well, if an app developer does not ensure that their code works on a device that only has an IPv6 address, they will be in for a big surprise: lots of mobile telcos only give out IPv6 addresses.
Connecting to the IPv4 world is done via CG-NAT with the first few hops being IPv6-only.
Realistically, such a new protocol will only make it worse. No doubt that new protocol will lack some features that IPv6 has and that are in active use. So some group will refuse to move to the new protocol.
Some group will not move at all. IPv4 is fine for them.
And the rest of the world will get an endless amount of translation between IPv4 and the new protocol that is a nightmare to debug.
There is also the big question whether there actually exists a better transition path. I have not seen any ideas that are likely to be accepted on a large scale by the networking community.
Nope, you don't need to, because of the simple fact that there is only a minuscule number of IPv6-reliant systems in the wild. Most of them also operate over IPv4 (because IPv4 is more prevalent).
Not really: since IPv6 adoption is low, a service that doesn't also exist on IPv4 practically does not exist, so if you have a good migration path from v4 (unlike v6), you're taking approximately all services with you.
Comcast's entire core network is pure IPv6. Every cable box, cable modem, everything is connected and managed using IPv6 addressing.
None of it is IPv4.
T-Mobile's core network is entirely IPv6, IPv4 lives on the edge only.
Facebook's entire internal network is IPv6, they have IPv4 edges that translate to IPv6 internally so that it is routed as if it were IPv6 and all services see IPv6 only.
Sorry, but your "adoption is low" is very VERY wrong, and IPv6 has already solved a lot of problems, for example "we are out of RFC 1918 space, and adding more NAT is not the solution".
That last part is literally why Comcast moved to an IPv6 core.
Mobile networks, for example, have huge deployments of IPv6. Whether or not they are used to access IPv6-only services doesn’t change the fact that IPv6 is an integral part of their network design and therefore an alternative to IPv6 would need a migration pathway from it.
This is what Python 3 did. Python 3.0 removed a lot of Python 2 features which were then progressively re-added in 3.1, 3.2, and 3.3. No matter how good your design, you need people using something else to actually want to make the move.
> You can if you want, but despite what some people will claim, it probably won’t make much difference.
This apathy is exactly why adoption is slow. For all its faults, Google needs to be commended for their commitment to IPv6. They are the only large email provider I see sending and receiving email over IPv6, even though "it probably won’t make much difference".
It's okay imo. IPv4 on the internet is getting very expensive very fast. The market will do its thing and the solution is already at hand: IPv6. The addresses are virtually free. Sure, it's not happening with all the fanfare and it doesn't meet all of its promises, but it will happen, and fast.
Close to 5€ per month for my virtual server's IPv4 address. Almost half the total cost. I'm still paying it because I'm not sure people could still reach my web server (and, back when I ran it, my mail server) if it were IPv6 only.
Mr. cube00, for a static website it's a toss-up (you should activate it anyway if you're able to!).
The problem is that, for forums and the like, 40-bit addresses (a best approximation, considering that only a slice of the space has been allocated and a /64 is treated as a single network connection) add a whole lot of problems when it comes to combating spam. 8 bits sounds like nothing, but you've just multiplied their problem 256 times. In short, it's not always economical to turn the proverbial switch on. Google can rely on their AIs, but small forums? For them it's just (unfortunately) an additional attack surface on something they want gone.
How do you make sure that the single IPv4 address you are blocking is not a CGNAT address shared by many users? If you don't care about collateral damage you could just as well block IPv6 /40s or /48s. This is maybe not a big problem yet because most people aren't behind CGNAT, but it will get bigger.
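FWIW, the usual answer is to stop keying abuse counters on individual addresses and key them on the assignment-sized prefix instead, typically a /64, escalating to a /48 for persistent offenders. A rough sketch of that normalisation (the prefix lengths here are policy choices of mine, not a standard):

    import ipaddress

    def abuse_key(addr: str, v6_prefix: int = 64) -> str:
        # Collapse an address to the unit you actually block or rate-limit on:
        # the full address for IPv4, the covering prefix (default /64) for IPv6.
        ip = ipaddress.ip_address(addr)
        if ip.version == 4:
            return str(ip)
        return str(ipaddress.ip_network(f"{ip}/{v6_prefix}", strict=False))

    print(abuse_key("203.0.113.7"))             # 203.0.113.7
    print(abuse_key("2001:db8:1:2::abcd"))      # 2001:db8:1:2::/64
    print(abuse_key("2001:db8:1:2::abcd", 48))  # 2001:db8:1::/48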
It also helps when you connect from mobile networks, as due to various things (including, afaik, licensing shenanigans) there's a huge push for v6 in mobile networking. Even your v4 traffic is probably going over v6 using 4-to-6-to-4 (464XLAT-style) translation.
Yes, Spectrum should fix their issue, and if more of the Internet were IPv6 user-facing they'd have to do so more urgently, but what will happen instead is that companies will just turn off IPv6 for maximum compatibility.
Right, but given how widely deployed IPv4 is, the bugs are generally shallower. E.g., if that firewall bug existed with IPv4, I bet it would have been fixed by now.
> IPv6 has several advantages, including a much larger address space. IPv4 had only 2^32 addresses, less than one per person on earth. IPv6 has 2^128 addresses, an immensely larger number which is not expected ever to be exhausted. Estimates are that this is enough to assign 100 IPv6 addresses to every atom on earth.
Yeah, that's overestimating the number of IPv6 addresses by quite a few orders of magnitude. This website estimates the number of atoms in the Earth at 10^49 to 10^50, whereas 2^128 is on the order of 3 * 10^38: https://www.fnal.gov/pub/science/inquiring/questions/atoms.h... Perhaps the writer was thinking of grains of sand instead of atoms? I'm not sure how much sand we have, but it's probably more in the 2^128 ballpark.
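A quick back-of-the-envelope check, taking the ~10^50 atoms estimate above at face value:

    addresses = 2 ** 128       # number of IPv6 addresses
    atoms = 10 ** 50           # rough upper estimate of atoms in the Earth
    print(f"{addresses:.2e}")  # 3.40e+38
    print(addresses / atoms)   # ~3.4e-12, a tiny fraction of an address per atom

So the claim is off by roughly twelve orders of magnitude; "100 addresses per atom" would need something like 173 bits of address space.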
> The primary purpose of IPv6 was to expand the address space of IPv4.
I wish this were all it did. People can make arguments about IPv6 being "simpler," but it's often not simpler in practice:
- You still need to run dual-stack
- You need to re-learn a lot of your networking fundamentals
- Despite IPv6 address space being effectively infinite, most ISPs will not give you a static IP
- Most hardware is built for IPv4 and NOT IPv6
- Local subnetting is far more complicated, not just due to the length of the address, but because you often need to work out SLAAC and/or have a local DNS service to handle the address changes.
IPv4-compatible IPv6 addresses are deprecated because people could use IPv6 to circumvent IPv4 NAT firewalls, so OS vendors asked for this behavior to be banned. It's also confusing to people when they can reach both IPv4 and IPv6 at the same time:
$ ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms
# OK
$ ping6 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.063 ms
# OK
$ ping ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.023 ms
# OK
$ ping6 ::ffff:127.0.0.1
PING ::ffff:127.0.0.1(::ffff:127.0.0.1) 56 data bytes
ping6: sendmsg: Invalid argument
# CONFUSING & DANGEROUS
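For what it's worth, the ::ffff:0:0/96 mapped form used in that last ping (as opposed to the deprecated ::a.b.c.d compatible form) does have one legitimate, widely used role: on a dual-stack listening socket with IPV6_V6ONLY switched off, IPv4 clients show up to the application as mapped addresses, so one AF_INET6 socket serves both families. A minimal sketch, assuming the OS permits dual-stack sockets:

    import socket

    # One AF_INET6 socket that also accepts IPv4 clients (where the OS allows it)
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    srv.bind(("::", 8080))
    srv.listen(1)

    conn, peer = srv.accept()
    # An IPv4 client (say 203.0.113.7) appears here as ('::ffff:203.0.113.7', port, 0, 0);
    # a native IPv6 client appears with its ordinary address.
    print(peer)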
Just think how much simpler it would have been to keep all the IPv4 tooling with just an ifdef to make the addresses wider, and a few version changes for IP and ARP.
Part of what makes V6 adoption lag is that it took a while to be supported across the board by router vendors, OS vendors, etc. And a lot of the reason for that is all the gratuitous changes from IPv4.
Around 1990-1991, the TUBA proposal was already ready to go with two implementations (one on a hardware router), bringing addresses to IIRC either 160 or 144 bits (I don't recall exactly, it's been a long time). It might have been better if they had gone for a 144-bit host address and embedded the port number in the last 2 bytes, but the point was to run TCP and UDP close to unchanged.
Then IPng got started, and for most of the 1990s the IETF played with sweeping changes while the "temporary solution" that was IPv4 entrenched itself in worse and worse ways.
I was an undergrad then, and wasn't really aware of this sort of thing. I just went and read the TUBA RFC (RFC 1347), and it looks like CLNP was no walk in the park either. From the RFC:
CLNP contains a number of optional and/or variable length fields. For example, CLNP allows addresses to be any integral number of bytes up to 20 bytes in length
Did anybody advocate for the simple approach of just expanding in_addr from 32 to 64 bits, calling it IPv5 and being done with it? That's what I think would have been the right thing to do.
It would still be incompatible with IPv4. It couldn't have avoided the complexity of a prolonged dual stack situation. So it would have had at least comparable deployment complexity.
And when you're incompatible anyway, why wouldn't you at least simplify headers and global routing? BGP has always been on the verge of imploding.
A backwards-compatible IPng would have been great, but no credible design was ever put forward.
No credible design is possible, because when people ask for a backwards-compatible IPng what they're really asking for is a forwards-compatible IPv4, and they can't have it because v4 isn't forwards compatible.
IPv6 is backwards compatible with v4 in a great many different ways. You've got dual stack, Teredo, 6to4, 6rd, 6over4, ISATAP, 6in4/4in6, NAT64/DNS64, 464xlat, DS-lite, MAP-T/E, 4rd, LW4over6... you could make a reasonable argument that it has too many methods of backwards compatibility, even. But obviously v4 isn't forwards compatible with it, because v4 isn't forwards compatible with any longer address length, and the time to fix that was in the 70s.
That's not something that can be changed without replacing v4. In fact, if it could be, we wouldn't need a new protocol in the first place.
A big chunk of the problem was the BSD sockets API, which unlike TLI/XTI leaks implementation details like crazy, meaning every BSD sockets application effectively hardcoded IPv4 behaviours.
Gratuitous changes to "fix" things are exactly why adoption took forever. Look at nonsense like IPv6 extension headers. The number of extension headers is UNBOUNDED. This is HORRIBLE to deal with in hardware, and not pleasant in software.
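To make the pain concrete, here is roughly what a forwarding or filtering device has to do just to find the transport header: walk an unbounded Next Header chain, where most extension headers self-describe their length but a couple (Fragment, AH) use different rules. A simplified sketch that ignores those special cases and many real-world details:

    # Upper-layer protocol numbers we stop at; anything else with the generic
    # TLV layout (Hop-by-Hop, Routing, Destination Options, ...) gets skipped.
    UPPER_LAYER = {6: "TCP", 17: "UDP", 58: "ICMPv6", 59: "No Next Header"}

    def find_transport(next_header: int, payload: bytes, max_headers: int = 8):
        # Walk the extension-header chain that follows the fixed 40-byte IPv6 header.
        # Generic extension headers are: next-header (1 byte), length (1 byte, in
        # 8-octet units not counting the first 8 octets), then the options themselves.
        offset = 0
        for _ in range(max_headers):  # we bound the walk ourselves; the protocol doesn't
            if next_header in UPPER_LAYER:
                return UPPER_LAYER[next_header], offset
            if offset + 2 > len(payload):
                raise ValueError("truncated extension header")
            next_header = payload[offset]
            offset += (payload[offset + 1] + 1) * 8  # NOTE: Fragment and AH differ here
        raise ValueError("too many extension headers")

A hardware parser has to do the equivalent within a fixed gate budget, which is part of why many middleboxes simply drop packets carrying extension headers they don't recognise.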
As someone who has experienced many problems with multicast on IPv4/IPv6 hybrid networks, how different are the implementations of broadcast and multicast in each protocol?
Mostly I became aware that in IPv4 the router tracks local multicast group membership, and with IGMP snooping you can solve some of the problems of getting your traffic through multiple devices, but in IPv6 this is kind of confusing to me... does anyone have information on this?
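The short version: IPv6 has no broadcast at all, and group membership is signalled with MLD, which is carried inside ICMPv6, instead of IGMP, so switches do MLD snooping rather than IGMP snooping. From an application's point of view the two look almost identical; a minimal sketch of joining a group on each (the group addresses are just examples; interface index 0 lets the kernel pick, or pass a real ifindex via socket.if_nametoindex() to be explicit):

    import socket
    import struct

    # IPv4: joining a group triggers an IGMP membership report
    s4 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mreq4 = socket.inet_aton("239.1.1.1") + socket.inet_aton("0.0.0.0")
    s4.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq4)

    # IPv6: joining triggers an MLD report (carried in ICMPv6); the scope
    # (link-local ff02::, site-local ff05::, ...) is part of the group address
    s6 = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    ifindex = 0  # or socket.if_nametoindex("eth0")
    mreq6 = socket.inet_pton(socket.AF_INET6, "ff02::1:3") + struct.pack("@I", ifindex)
    s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq6)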
> At Tailscale we believe the main reason for the slow IPv6 rollout is that it simply has not been able to provide enough direct value, when deployed as a hybrid in parallel with IPv4. The intention was to deploy IPv6, then retire IPv4 completely, in which case IPv6 would have made the Internet overall simpler and cheaper to manage, which is a big benefit. Unfortunately, this value doesn’t materialize until the very end, after IPv6 has been fully deployed to billions of devices. This means companies usually will not recoup the costs of IPv6 deployment on a predictable timeline, which makes investment hard.
Anyone else get a strong climate change parallel vibe from this section? Hopefully the stronger (dis)incentives of a slower rollout of carbon reduction efforts will be able to overcome some of these same obstacles.
I'm on Linux... Is there some document on how I can tell if I'm getting IPv6, a description of what I'm seeing or what to expect, and what I can do with it that is cool?
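Not aware of a single canonical document, but a quick self-check is to look for a global address with `ip -6 addr` and see whether the machine will pick a global IPv6 source address towards a public v6 destination. A small sketch (the target below is just a well-known public resolver; a UDP connect() sends no packets, it only runs route and source-address selection):

    import socket

    def have_ipv6() -> bool:
        # True if the host can select a global IPv6 source address
        try:
            s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
            s.connect(("2001:4860:4860::8888", 53))
            src = s.getsockname()[0]
            s.close()
            return not src.startswith("fe80")  # a link-local source means no usable v6
        except OSError:
            return False

    print(have_ipv6())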
For anyone wondering why they can't access their IPv6-addressed Pi box from public WiFi networks: those public networks still use IPv4 and assign you only a link-local IPv6 address.
To go from 4 to 6, the public WiFi box would need to use a 4-to-6 "broker", and there are surprisingly few of those around; they're not usually free.
So basically, your home network and Pi are future-ready, but public infrastructure might need a moment...
Keep in mind this allocation strategy only affects 1/8th of all IPv6 space, so if we picked the wrong one we have the other 7/8ths to use in a (hopefully) better strategy.
This was brought up in another part of the thread. I'm copy-pasting my reply:
---
From the RFC (emphasis added):
> The long term goal of the TUBA proposal involves transition to a worldwide Internet which operates much as the current Internet, but with CLNP replacing IP and with NSAP addresses replacing IP addresses.
[…]
In §3 Migration:
> Updated Internet hosts talk to old Internet hosts using the current Internet suite unchanged. Updated Internet hosts talk to other updated Internet hosts using (TCP or UDP over) CLNP. This implies that updated Internet hosts must be able to send either old-style packets (using IP), or new style packet (using CLNP). Which to send is determined via the normal name-to-address lookup.
So you're replacing IPv4 with something that is not-IPv4 on every router and every host. During the transition period everyone will have IPv4 and not-IPv4 addresses.
How is not-IPv4 being CLNP/NSAP any different that not-IPv4 being IPv6? What am I missing?
In §6 on DNS:
> TUBA requires that a new DNS resource record entry type ("long-address") be defined, to store longer Internet (i.e., NSAP) addresses.
Can you explain how it was compatible? That RFC says "Updated Internet hosts talk to old Internet hosts using the current Internet suite unchanged." which sounds exactly the same as the way v6 normally does it.
As far as I could tell from the RFC, if TUBA counts as v4-compatible then so does v6.