
In my experience, IPv6 is often more complex. The main exception is that by and large IPv6 doesn't have NAT, so that saves a few headaches in that area.

Losing on-path fragmentation is not a benefit. IPv6 combined with large DNS replies is an endless source of problems.

Moving fragmentation to an extension header similarly creates problems. Dealing with extension headers is just more code complexity.
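
To illustrate the extra parsing: a receiver can't just look at fixed offsets the way it can with the IPv4 header; it has to walk the extension-header chain just to find the Fragment header. A rough sketch of what that looks like (my own toy code, field layout per RFC 8200, ignoring AH/ESP for brevity):

    import struct

    FRAGMENT_HEADER = 44  # protocol number of the IPv6 Fragment extension header

    def find_fragment_header(packet: bytes):
        # Fixed IPv6 header is 40 bytes; byte 6 is the Next Header field.
        next_header = packet[6]
        offset = 40
        # Walk Hop-by-Hop (0), Routing (43) and Destination Options (60) headers;
        # their length field counts 8-octet units, not including the first 8 octets.
        while next_header in (0, 43, 60):
            next_header = packet[offset]
            offset += (packet[offset + 1] + 1) * 8
        if next_header != FRAGMENT_HEADER:
            return None
        nh, _reserved, off_flags, ident = struct.unpack_from("!BBHI", packet, offset)
        return {
            "payload_proto": nh,
            "offset_octets": (off_flags >> 3) * 8,  # fragment offset is the top 13 bits
            "more_fragments": bool(off_flags & 1),  # M flag is the lowest bit
            "id": ident,
        }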

Link local does not work (reliably) in browsers: https://[fe80::1]/ doesn't work on most platforms.

ICMP, ARP, and IGMP perform completely separate functions. Putting them all in ICMPv6 doesn't help. In contrast, having ND in ICMPv6 leads to code complexity. In IPv4, ICMP logically sits on top of IP, which uses ARP. In IPv6, ICMPv6 logically sits on top of IPv6, which in turn uses ICMPv6 for neighbour discovery.

IPv6 created a lot of flexibility by having multiple addresses per interface, created automatically from router advertisements, and by allowing multiple routers on a subnet that can each advertise different prefixes (poor man's multihoming). The net result, certainly for devices that frequently connect to different networks (such as phones and laptops), is way too much complexity.

That said, the only way forward is IPv6. Putting everything behind multiple layers of NAT is ultimately going to fail.



> Losing on-path fragmentation is not a benefit. IPv6 combined with large DNS replies is an endless source of problems.

I thought this was the other way around: IPv4 only guarantees reassembly up to 576 bytes, so DNS avoided issues with split UDP datagrams by limiting the payload to 512. EDNS stuff got added on once the de facto internet MTU became 1500 and there was more room. Things like 4G have a 1482 MTU though, so it may seem fragmentation helps, but in reality most IPv4 routers don't fragment and reassemble anymore; they just drop. In practice with DNS this has meant either keeping the packet size closer to 1k, or using TCP, which negotiates the MSS and handles retransmitting and reassembling lost segments itself.

If anything IPv6 has made the situation cleaner, with a minimum supported MTU of 1280 vs IPv4's 68, guaranteeing that the ~1k UDP DNS payloads can make it through without relying on PMTUD.
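
Rough arithmetic behind those numbers, as I understand it (just header math on the guaranteed minimums, nothing official):

    # Back-of-the-envelope figures behind the classic DNS size limits.
    IPV4_MIN_REASSEMBLY = 576   # every IPv4 host must reassemble at least this much
    IPV6_MIN_MTU        = 1280  # every IPv6 link must carry at least this much

    IPV4_HEADER = 20            # without options
    IPV6_HEADER = 40
    UDP_HEADER  = 8

    # Classic pre-EDNS limit: 576 - 20 - 8 = 548, which DNS rounded down to a safe 512.
    print(IPV4_MIN_REASSEMBLY - IPV4_HEADER - UDP_HEADER)   # 548
    # IPv6: 1280 - 40 - 8 = 1232 octets of DNS payload always fit unfragmented.
    print(IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER)          # 1232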


Those are two separate issues. The default (and guaranteed minimum) IPv4 reassembly buffer is 576 octets; that limit is the issue the EDNS UDP buffer size option solves in DNS.

For IPv4, you can just send a 1500 octet DNS reply and it will be fragmented as needed. For IPv6, you have to fragment at 1280 or do path MTU discovery (which doesn't work very well, certainly not for DNS over UDP). You can always fragment at 1280, but many firewalls will drop fragmented packets, partly because IPv6 extension header parsing is complicated.
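
For concreteness, the EDNS buffer size option mentioned above is just a field the client advertises in the OPT record; a quick sketch using the third-party dnspython package (192.0.2.53 is a placeholder resolver address, not a real one):

    # Sketch: advertise a larger EDNS0 UDP buffer so the server can send big replies.
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "DNSKEY",
                                   use_edns=0,     # add an OPT record (EDNS version 0)
                                   payload=4096)   # advertise a 4096-octet UDP buffer
    response = dns.query.udp(query, "192.0.2.53", timeout=2)
    print(len(response.to_wire()), "bytes in reply")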


> For IPv4, you can just send a 1500 octet DNS reply and it will be fragmented as needed

As mentioned: in theory yes, but in practice most hardware-based IPv4 routers don't actually implement fragmentation anymore.

> You can always fragment at 1280, but many firewalls will drop fragmented packets, partly because IPv6 extension header parsing is complicated.

Many of the same firewalls drop fragmented DNS packets as well because of cache poisoning attacks and other issues.

All that isn't to say people haven't tried/used fragmentation for UDP DNS packets; rather, it has historically never worked reliably or securely anyway, which is why the current BCP RFCs all say to avoid it at all costs.

All of that is why the EDNS0 specs put the minimum supported maximum at 1220 bytes, and why DNS flag day last year settled on 1232 bytes of payload rather than something just under 1500.
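
In other words, current practice boils down to something like this (again a dnspython sketch with a placeholder resolver address, not any particular resolver's actual code):

    # Sketch of the post-flag-day behaviour: advertise 1232, retry over TCP if truncated.
    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "DNSKEY", use_edns=0, payload=1232)
    response = dns.query.udp(query, "192.0.2.53", timeout=2)
    if response.flags & dns.flags.TC:      # reply didn't fit in 1232 octets
        response = dns.query.tcp(query, "192.0.2.53", timeout=2)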


> Link local does not work (reliably) in browsers: https://[fe80::1]/ doesn't work on most platforms.

1. You have to specify an interface, since fe80::1 may be in use on more than one link (so that becomes https://[fe80::1%en0]/ for instance), and 2. that IP address may not be assigned to any device on the link-local network.

What platforms does it not work on?


That breaks significant assumptions of the WWW. Specifically, it means that devices have different addresses when accessed by different hosts, which breaks all hyperlinks the Server may send back, unless the User Agent also sends the scope ID to the Server. However, the scope ID is meant to be meaningful only in the context of the host that originated it, so RFC6874, which introduced this concept officially in URLs, prohibits sending it.

Overall, this means that, in practice, WWW on IPv6 does not support link-local addresses. This is especially true given that none of the major browsers support them.


On POSIX systems (including macOS), just 'fe80::1' doesn't work. You need something like fe80::1%eth0. The 'eth0' is in general unknown, because it is the name of the outgoing interface, which varies from OS to OS and even between Linux distributions.
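
At the socket level the scope has to be supplied explicitly, e.g. (Python sketch; 'eth0' and fe80::1 are just placeholders for whatever happens to be on your link):

    # Sketch: connecting to a link-local IPv6 address needs a scope (interface) id.
    import socket

    scope = socket.if_nametoindex("eth0")      # interface name -> numeric scope id
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # An IPv6 sockaddr is a 4-tuple: (address, port, flowinfo, scope_id)
    sock.connect(("fe80::1", 80, 0, scope))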

Then in URLs you have the question of whether it is 'http://[fe80::1%eth0]' or 'http://[fe80::1%25eth0]' (with the '%' percent-escaped). And by and large browsers have decided that the whole '%eth0' business is too complex from a security point of view, so they don't support it.

In some cases Windows does allow just a 'fe80::1'. But I don't know under what circumstances.


The URL you have in parentheses with a zone name is not supported by Chrome:

https://bugs.chromium.org/p/chromium/issues/detail?id=70762

In general, zone support is spotty. Many networking libraries do not handle it at all.


Sounds like Chrome is broken.


It's not supported in any browser, so good luck using it on the web. Worse, the semantics of such a URL as defined by RFC6874 mean that common HTTP features like redirects can't work: the client asks for http://[fe80::6%7], the User Agent sends the request to the server with "Host: [fe80::6]" (it isn't allowed to send the scope ID), and receives a 302 response with "Location: https://[fe80::6]". What can the client do next?


I'm curious what you see as the reason that putting everything behind multiple layers of NAT can't work? It seems to me like it has worked pretty well so far, and we're nowhere close to running out of (IP, port, IP, port) tuples.


One layer of NAT on each side hasn't been bad, but the two layers of NAT on each side that carriers have been moving to has been a godawful mess of complexity for any conversation where at least one side isn't a public IP (e.g. p2p chat/calls).


> The main exception is that by and large IPv6 doesn't have NAT, so that saves a few headaches in that area.

I wish. IPv6 under CGNAT is a PAIN.


You could drop everything before "CGNAT".



