Does ipv6 result in higher latencies? I could see larger addresses increasing latency, but then again I could also see a more efficient protocol resulting in lower net latency. I should probably read a book describing the differences.
Real-world latency is dominated by configuration and hardware differences that dwarf any theoretical protocol overhead: your ISP may have different routes for IPv6 that can be either better or worse, the traffic may be handled by different routers with different performance, IPv4 traffic may be going through CGNAT, PPPoE concentrators, etc.
Google's IPv6 stats also measure latency compared to IPv4 and in most countries IPv6 has lower latency (e.g. in the US on average you get 10ms lower latency with IPv6). When this chart was new it was mostly the other way around, with early IPv6 implementations being poor https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
Real world performance differences between v4 and v6 are more likely to be influenced by different routing and network manipulation for v4 vs v6 than the larger address size.
If your v4 goes through NAT and v6 doesn't, that's a big thing.
If you have different peering and transit providers in v4 and v6, that's a big thing.
If overhead from address sizes was really a big deal, we'd see work to push larger MTUs and working MTU discovery, but that kind of stalled a while ago. 1500 works for a lot of people, and many major sites drop effective MTU by 20 or so and that makes more things work, and then it gets swept under the rug. (OTOH, I think Android may have finally gotten MTU probing enabled after many years of shipping it disabled; Apple has had very effective probing, at least on iOS for a long time)
The address space has very little effect in the real world. Other latencies far outweigh anything measurable by having slightly more bits in the address.
Eg: Sometimes IPv6 is faster due to routing differences.
That makes sense. I'm very curious about real-world studies. As a gamer, I'm especially interested in the effect ipv6 has on UDP for real time gaming applications. That's an area where even 5ms can have an enormous effect on the experience.
That's a very interesting case, as UDP is very reliant on MTU. If the IPv6 headers take out more space from the ethernet frame, that leaves less space for the UDP payload. Which means that a UDP payload which was at the limit for IPv4 on the typical MTU needs to be fragmented into two IPv6 packets, which will likely increase latency quite significantly.
However, this will depend on each specific game, if they are using all the available space or not. If they're sending 200 byte datagrams, they shouldn't see any difference.
On the flipside, IPv6 has a larger minimum MTU than IPv4, so it could happen that your maximum UDP payload actually goes up when switching to IPv6. So, if the game previously had to send 5 packets to do an update, it might be able to send only 3 when it can rely on IPv6, so maybe latency actually significantly improves.
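To make that fragmentation arithmetic concrete, here's a rough sketch in Python. The MTU figures are the standard ones (1500 ethernet, 1280 IPv6 minimum, 576 IPv4 minimum reassembly size from RFC 791); the 2500-byte game update is a made-up example:

```python
import math

ETH_MTU = 1500        # typical ethernet MTU
IPV6_MIN_MTU = 1280   # minimum MTU every IPv6 link must support
IPV4_MIN_SIZE = 576   # classic conservative IPv4 datagram size (RFC 791 reassembly minimum)
UDP_HDR = 8

def max_udp_payload(mtu, ip_hdr):
    """Largest UDP payload that fits in one packet at this MTU."""
    return mtu - ip_hdr - UDP_HDR

# On a 1500-byte link, IPv6's bigger header costs 20 bytes of payload:
v4_room = max_udp_payload(ETH_MTU, 20)       # 1472
v6_room = max_udp_payload(ETH_MTU, 40)       # 1452

# But a game that conservatively sizes packets to the protocol minimums
# can pack far more per datagram when it can rely on IPv6's higher floor:
v4_safe = max_udp_payload(IPV4_MIN_SIZE, 20)  # 548
v6_safe = max_udp_payload(IPV6_MIN_MTU, 40)   # 1232

update = 2500  # hypothetical game-state update, in bytes
print(math.ceil(update / v4_safe))  # 5 packets with conservative IPv4 sizing
print(math.ceil(update / v6_safe))  # 3 packets relying on IPv6's minimum MTU
```

So both directions of the argument are real: 20 bytes less payload per full-size packet, but a much higher guaranteed floor.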
If you try "ping" and "ping6" towards a multi-protocol host, you see both send 64 bytes each, so while v6 source and destination addresses take up lots of extra space, the v6 IP packets have less of the "this part could be useful for tcp" which means icmp pings can be of the same size, even though the two addresses eat up lots more bytes.
Not sure if the same goes for game UDP packets, but the optional header stuff in v6 IP packets means more of it goes to the useful parts of the payload and less to "the sum of all protocol bits and flags that is not used by all traffic".
This is straight up wrong. An IPv4 ICMP echo request over ethernet uses a minimum of 42 bytes, the same request with IPv6 uses 62. The ethernet frame is 14 bytes and the ICMP echo is 8 bytes for both packets, the difference is that the IPv4 header uses 20 bytes where IPv6 uses 40.
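For anyone who wants to check the arithmetic:

```python
ETH_HDR = 14    # ethernet header (no VLAN tag, FCS not counted)
ICMP_ECHO = 8   # ICMP / ICMPv6 echo header, with no payload

ipv4_frame = ETH_HDR + 20 + ICMP_ECHO   # 20-byte IPv4 header -> 42 bytes
ipv6_frame = ETH_HDR + 40 + ICMP_ECHO   # 40-byte IPv6 header -> 62 bytes

print(ipv4_frame, ipv6_frame, ipv6_frame - ipv4_frame)  # 42 62 20
```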
Anecdotally, my ping to HN is consistently 166ms with either protocol. I doubt an extra 20 bytes is going to make any meaningful difference to latency, but I'll leave that for the game devs to find out.
There is nothing in the IPv4 header that is only useful for TCP, especially not in the parts removed from IPv6. Overall the IPv6 header gets rid of the 4 bytes of fields used for fragmentation and the 2 bytes used for the checksum. Fragmentation was never used for TCP in any sane implementation (TCP segments data at its own layer instead), and the checksum was fully redundant for TCP, which has its own. For UDP, the checksum used to be optional, but with IPv6 it is required. So, for an optimized implementation, the checksum removal isn't even a win for UDP: it has just moved from the IP layer to the UDP layer.
So, the two addresses add 24 bytes to the header, and the other field changes remove a net 4 bytes, taking the fixed header from 20 to 40 bytes.
Now again, there are many differences between IPv4 and v6 that are much more relevant to latency than this extra header overhead. But it is a real overhead; there is no extra room for payload. Your observation with ping is just wrong: ping's "64 bytes" is the default ICMP message size (56 data bytes plus the 8-byte ICMP header), reported identically for both protocols, and it doesn't include the IP header at all.
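A quick sketch of the on-wire sizes behind that identical "64 bytes", assuming the common ping default of 56 data bytes and a plain ethernet frame:

```python
PING_DATA = 56  # default ping payload on Linux and macOS
ICMP_HDR = 8

reported = PING_DATA + ICMP_HDR   # 64 -- the number ping prints, both protocols

# What actually goes on the wire differs by the 20-byte header delta:
on_wire_v4 = 14 + 20 + reported   # 98-byte ethernet frame
on_wire_v6 = 14 + 40 + reported   # 118-byte ethernet frame
print(reported, on_wire_v4, on_wire_v6)  # 64 98 118
```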
This is an area you want to measure carefully because some of the older reports about IPv6 being slower were artifacts of old hardware limitations or under-optimized software which are no longer relevant.
It shouldn't. There is no checksum in the IPv6 header, so that's one thing routers don't need to recalculate at each hop, even if it can be done in hardware. And more hierarchical address allocation should mean smaller routing tables to look things up in, so lookups can be faster.
Possibly due to routing differences on the path to your service, but not due to the protocol itself. Definitely not due to the address size. Beyond your local equipment, that switching normally happens in hardware.
20 bytes / 1 Gbps = 160 ns per hop. That's 0.016 ms additional latency over 100 hops.
On the internet, most links are faster than 1 Gbps and most paths are shorter than 100 hops, so that's a conservative estimate.
If you're sending lots of 10-byte payloads, then IPv6 requires (40+10)/(20+10) ≈ 167% as much network capacity, but are you really filling up an expensive link with VoIP traffic?
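Spelling that arithmetic out (the 1 Gbps links and 100 hops are deliberately conservative assumptions):

```python
extra_bytes = 20     # 40-byte IPv6 header minus 20-byte IPv4 header
link_bps = 1e9       # assume every hop is a 1 Gbps store-and-forward link
hops = 100

per_hop_ns = extra_bytes * 8 / link_bps * 1e9   # 160 ns to serialize 20 extra bytes
total_ms = per_hop_ns * hops / 1e6              # ~0.016 ms over the whole path

# Worst-case relative overhead for tiny datagrams:
payload = 10
ratio = (40 + payload) / (20 + payload)         # ~1.67x the bytes on the wire
```

So even stacking pessimistic assumptions, the per-packet serialization cost is three orders of magnitude below anything a gamer could feel.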
In theory, any impact from longer addresses would be outweighed by the benefit of smaller, less fragmented routing tables (and in turn that should be outweighed by avoiding NAT, and that should be outweighed by avoiding CGNAT). (Plus with most systems being natively 64-bit these days, that impact should be 0 - the routable part of an IPv6 address is 64 bits, and comparing a 64-bit value is no harder than comparing a 32-bit value).
In practice IPv6 is newer, which has good and bad sides; IPv6 routing paths are more likely to be using newer (and therefore faster) equipment, but there's also a bigger risk of someone making a mistake that messes up your routing/latency, particularly if your ISP hasn't been doing IPv6 for very long.
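To illustrate the 64-bit comparison point, a small sketch with Python's stdlib ipaddress module (the address is just the documentation prefix):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8:1234:5678::42")

# The routable part is the top 64 bits -- one machine word on a 64-bit CPU,
# so a router comparing it does no more work than for a 32-bit IPv4 address:
prefix = int(addr) >> 64
print(hex(prefix))  # 0x20010db812345678
```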