
The address space has very little effect in the real world. Other latencies far outweigh anything measurable by having slightly more bits in the address.

E.g., sometimes IPv6 is faster due to routing differences.



That makes sense. I'm very curious about real-world studies. As a gamer, I'm especially interested in the effect IPv6 has on UDP for real-time gaming applications. That's an area where even 5ms can have an enormous effect on the experience.


That's a very interesting case, as UDP is very sensitive to MTU. If the IPv6 header takes up more space in the Ethernet frame, that leaves less room for the UDP payload. A UDP payload that was right at the IPv4 limit for the typical MTU then has to be fragmented into two IPv6 packets, which will likely increase latency quite significantly.

However, this depends on the specific game and whether it uses all the available space. A game sending 200-byte datagrams shouldn't see any difference.

On the flip side, IPv6 has a larger minimum MTU than IPv4, so the maximum UDP payload you can rely on may actually go up when switching to IPv6. If the game previously had to send 5 packets per update, it might be able to send only 3 once it can rely on IPv6, so latency might actually improve significantly.
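For concreteness, the arithmetic behind both scenarios as a small Python sketch. The 1500-byte MTU and the 2700-byte update size are illustrative assumptions; the 576- and 1280-byte minimums are the ones every IPv4 and IPv6 host, respectively, must handle per the specs:

```python
import math

# Per-packet UDP payload at a typical 1500-byte Ethernet MTU
# (link-layer framing is not counted against the MTU).
MTU = 1500
ipv4_payload = MTU - 20 - 8   # 20-byte IPv4 header + 8-byte UDP header -> 1472
ipv6_payload = MTU - 40 - 8   # 40-byte IPv6 header + 8-byte UDP header -> 1452

# A datagram sized right at the IPv4 limit needs two IPv6 packets:
packets_v6 = math.ceil(ipv4_payload / ipv6_payload)   # -> 2

# Minimum sizes every host must handle: 576 bytes (IPv4), 1280 bytes (IPv6).
min_v4 = 576 - 20 - 8    # 548 bytes of guaranteed UDP payload
min_v6 = 1280 - 40 - 8   # 1232 bytes

update = 2700            # hypothetical game-state update size
print(math.ceil(update / min_v4), "packets on IPv4")   # 5
print(math.ceil(update / min_v6), "packets on IPv6")   # 3
```

So fragmentation cuts both ways: a payload tuned to the IPv4 limit gets split under IPv6, but a game designed around the guaranteed minimums gets far more room per packet.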


If you try "ping" and "ping6" towards a dual-stack host, you see both send 64 bytes each. So while the v6 source and destination addresses take up lots of extra space, the v6 IP header has less of the "this part could be useful for TCP" material, which means ICMP pings can be the same size even though the two addresses eat up many more bytes.

Not sure if the same goes for game UDP packets, but since the extra header material is optional in v6, more of each packet goes to the useful parts of the payload and less to "the sum of all protocol bits and flags that not all traffic uses".


This is straight up wrong. An IPv4 ICMP echo request over Ethernet uses a minimum of 42 bytes; the same request with IPv6 uses 62. The Ethernet frame header is 14 bytes and the ICMP echo is 8 bytes in both packets; the difference is that the IPv4 header uses 20 bytes where IPv6 uses 40.
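The byte counts above can be checked with trivial arithmetic (a minimal sketch; the sizes are the fixed header lengths from the respective specs, with an empty ICMP payload):

```python
ETH_HDR = 14    # Ethernet II header
ICMP_ECHO = 8   # ICMP echo header with an empty payload

ipv4_min = ETH_HDR + 20 + ICMP_ECHO   # 20-byte IPv4 header -> 42 bytes
ipv6_min = ETH_HDR + 40 + ICMP_ECHO   # 40-byte IPv6 header -> 62 bytes
print(ipv4_min, ipv6_min, ipv6_min - ipv4_min)   # 42 62 20
```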

Anecdotally, my ping to HN is consistently 166ms with either protocol. I doubt an extra 20 bytes is going to make any meaningful difference to latency, but I'll leave that for the game devs to find out.


There is nothing in the IPv4 header that is only useful for TCP, especially not in the parts removed from IPv6. Overall, the IPv6 header gets rid of the 4 bytes of fields used for fragmentation and the 2 bytes used for the checksum. Fragmentation was never used for TCP in any sane implementation (as TCP can do segmentation at the TCP layer), and the checksum was fully redundant for TCP. For UDP, the checksum used to be optional, but is now required. So, for an optimized implementation, the checksum removal isn't even a win for UDP, as it has just moved from the IP layer to the UDP layer.

So, we have added 24 bytes to the header because of the address difference, and removed 4 bytes from other places.

Now again, there are many differences between IPv4 and v6 that are much more relevant to latency than this extra header overhead. But it is a real overhead; there is no extra scope for payload. Your observation with ping is just wrong (most likely because ping reports the ICMP payload size, which defaults to 64 bytes for both protocols, rather than the on-wire packet size).
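The arithmetic, as a sketch: common ping implementations default to 56 data bytes and print data plus the 8-byte ICMP header, not the on-wire size (the on-wire figures below assume plain Ethernet with no VLAN tags or extension headers):

```python
data = 56                # ping's default payload size
reported = data + 8      # what ping prints: "64 bytes" for v4 and v6 alike

wire_v4 = 14 + 20 + reported   # Ethernet + IPv4 header -> 98 bytes on the wire
wire_v6 = 14 + 40 + reported   # Ethernet + IPv6 header -> 118 bytes
print(reported, wire_v4, wire_v6)   # 64 98 118
```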


If you look at Google's IPv6 statistics, the latency seems to be lower with IPv6 in almost all countries: https://www.google.com/intl/de/ipv6/statistics.html#tab=per-...



