
Three broad points.

First, the supply side: if you read this article through, you saw the demand curve inflections at the classful-classless change and at the NAT change. Downthread someone already mentioned carrier-NAT, which is one more potential inflection. But there are others; the biggest could be a liquid market for routable blocks. There are large companies pointlessly squatting on huge allocations; some of them assign routable IP addresses for every desktop in their network, despite (sanely) firewalling them off entirely from the Internet. A market for IP addresses would make that waste expensive and return potentially large numbers of addresses back to general use.

Second, the demand side: It will no doubt enrage HN nerds to hear this, but most Internet users do not need a first-class address. In fact, it's possible that most Internet users are poorly-served by first-class addresses! They never take advantage of them, but holding them exposes them to attacks. Because mass-market application developers in 201x have to assume large portions of their users don't have direct Internet connectivity, technology seems to be trending away from things that require it, and towards tunneling, HTTP-driven solutions, and third-party rendezvous.

Finally: Who says IPv6 needs to be the future? Bernstein's summary[1] of the problems with transitioning is getting old, but the points in it seem valid. If we're going to forklift a whole new collection of libraries and programs onto everyone's computer, why don't we just deprecate the whole IP layer and build something better? I don't think I see the reason why IP can't just be to 2020 what Ethernet was to 1990: a common connectivity layer we use to build better, more interesting network layers on top of.

The core functionality of the IP protocol has served beautifully over the last 20 years, but the frills and features have not. IP multicast is a failure. IPSEC is a failure. QOS is still a tool limited to network engineers. We barely have anycast.

These are all features that would be valuable if they worked, but they don't work, because the end-to-end argument militates against them --- their service models evolve too fast for the infrastructure to keep up, and they're pulled in different directions by different users anyway.

We can get new features and unbounded direct connectivity with overlay networks. We have only the most basic level of experience with overlays --- BitTorrent, Skype --- but the experience we've had seems to indicate that if you have a problem users care about, overlays tend to solve it nicely. We should start generalizing our experience with successful P2P systems like Skype, pull in some of the academic work (like the MIT PDOS RON project), and come up with a general-purpose connectivity layer that treats IPv4 the way IPv4 treats Ethernet.

Special bonus to that strategy: Verizon and AT&T don't really get a say in how those overlays work, and nobody needs to wait for a standards committee to agree on whether things are going to be big endian or use ASN.1 or XML.

[1] http://cr.yp.to/djbdns/ipv6mess.html



Each of those three points has strong counter arguments.

> A market for IP addresses would make that waste expensive and return potentially large numbers of addresses back to general use.

The concept of legal ownership of IP addresses as property is explicitly denied by ARIN and RIPE[1], and for good reason. If they were property, addresses would be held and hoarded as investments (which you can already see in the subset that is owned this way). Allocations would also fragment, which leads to worse routing performance and higher costs. Given those two reasons, the first point fails: it's a really bad idea.

> it's possible that most Internet users are poorly-served by first-class addresses!

Internet users, or let's call them end users, want software that works, is effective, and is cheap. Software with those three attributes serves them. With NAT, however, all three attributes are directly harmed. Some software will never work behind NAT. Of the software that does work, some is much less effective, with worse latency and privacy. And NAT adds cost to software development in the form of complexity, which means the end cost to users increases. Thus, because of NAT, software is less useful and more costly, which in turn harms Internet users.

> Who says IPv6 needs to be the future?

IPv6 was the smallest possible change to fix the problem while still meeting performance requirements that overlay networks don't: 1) latency, 2) router capacity, 3) privacy/security. If a new protocol fulfilled those performance requirements, there would be a reason to discuss replacing IPv6 with something better, but until that time comes, IPv6 is the upgrade the Internet, end users, and suppliers need.

[1] https://www.arin.net/policy/nrpm.html


> IPv6 was the smallest possible change to fix the problem

Really? Going from a 32-bit addressing scheme to 33 bits would double the number of addresses, pushing the problem a long way down the track. Sticking to octets for simplicity, going to 40-bit addressing (five octets) would provide as many addresses as we need for the near future. But they went for 128-bit addressing with IPv6 - I fail to see how that was the smallest change they could have made.
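
For scale, some rough arithmetic on the address-space sizes being discussed (illustrative only):

    2^32  ~ 4.3 billion addresses (IPv4 today)
    2^33  ~ 8.6 billion
    2^40  ~ 1.1 trillion
    2^64  ~ 1.8 x 10^19
    2^128 ~ 3.4 x 10^38 (IPv6)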

Making it backwards compatible with v4 would have been an even smaller change and we'd probably be using it by now if they hadn't broken compatibility.

And how was picking an address size that didn't match up to the native integer size of any common CPU good for performance? Maybe they expected us all to be running 128 bit CPUs by now.


If you're going to break compatibility by changing the fundamental address size anyway, what does it matter if you future-proof it by a factor of 256 or something ginormous, so that we won't have to go down this road again ten years from now?


Because the difference between a 64 bit integer (which would also have been future-proofed for the current IP service model) and a 128 bit integer is not simply 8 more bytes, but also the fact that all modern non-MCU computers can treat a 64 bit integer as a scalar, but are effectively forced to handle a 128 bit integer as a string or a structure of some sort.

This can be the difference between a 1-line patch to a C program and a 30 line patch.
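
For what it's worth, here is a minimal C sketch of the difference being described; the type and function names are made up for illustration, but the shape matches what real code (e.g. struct in6_addr) ends up doing:

    #include <stdint.h>
    #include <string.h>

    /* A 64-bit address fits in a native scalar: compare, mask, and hash it
       with ordinary integer operations. */
    typedef uint64_t addr64;

    /* A 128-bit address has to be carried around as a structure, and every
       comparison becomes a memcmp or a multi-word operation. */
    typedef struct { uint8_t octets[16]; } addr128;

    int addr64_equal(addr64 a, addr64 b) {
        return a == b;
    }

    int addr128_equal(const addr128 *a, const addr128 *b) {
        return memcmp(a->octets, b->octets, sizeof a->octets) == 0;
    }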

Of course, the standards committees don't care about that cost (it is an externality to them), because "rough consensus and working code" stopped being the code of the IETF more than a decade ago.


AFAIK IPv6 originally became a RFC back in 1995.


One could always try recasting IPv6 as an IPv4 extension, as RFC 1726 hints some IP versions could be, and then use the original IPv4 part as backward-compatibility data pointing towards an IPv4-IPv6 gateway. On the upside, it could coexist with the current IPv6 work.

I do not know if such attempts have been made or considered, or whether it would be easier than the current approach.


Yeah, IPv6 is a great example of Second System Syndrome.


But, the only real change is the larger address space. And some rarely used features of IPv4 were removed. How is that 2nd system syndrome?


They increased the address space in the most disruptive possible way.

They could have defined a 64 bit address space and an escape hatch/upgrade path in the extraordinarily unlikely event that we ran out of addresses --- you could allocate a static IP address to every email sent in 2012, spam included, and still consume only 0.0003% of a 64 bit address space. You could address every page in Google's index in 0.00000005% of that address space.
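
As a rough sanity check on that figure, assume a deliberately generous 10^14 emails for all of 2012: 10^14 / 2^64 is about 10^14 / 1.8 x 10^19, roughly 5 x 10^-6, i.e. on the order of 0.0005% of the space - the same order of magnitude as the number above.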

In a 64 bit addressing scheme, IP addresses would remain scalar integers in the vast majority of programming environments used on the Internet (and where they aren't scalars, 128 bit addresses are even worse!). Instead, we have to forklift out not just the code that bakes in 32 bits as the width of an address, but also all the code built on the assumption that addresses are numbers you can compute with.


How would you define an "escape hatch" without replacing every device on the planet?

And IPv6 is really a 64-bit addressing scheme already: 64 bits for the network, and another 64 bits for the host within that network. The latter part can be ignored by routers outside of the target network.
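
For example, in the documentation address 2001:db8:1234:5678::1, the first four groups (2001:db8:1234:5678) are the 64-bit routing prefix and the rest is the 64-bit interface identifier; only the prefix matters to routers outside that network.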


From an IP-address spam-prevention perspective, 40-bit addressing would be optimal: it would give everyone enough IP addresses for comfortable use, but at the same time would make IP addresses cost more than zero (so it would be hard to simply use a new IP address for every spam request).

A 40-bit address is also still possible to memorize (unlike a 128-bit address).


Responses in reverse order.

First: IPv6 is not the smallest possible change. IPv6 is a very large change, involving an infrastructure upgrade and software upgrades across the Internet. Overlay networks necessitate neither. Routers on the Internet can at the beginning remain ignorant of new overlays. Endpoints add software as and when they decide to participate in a specific overlay. Meanwhile, the existing IPv4/HTTP service model, which works just fine under NAT, continues to operate. This is a more incremental approach than IPv6 and thus by definition a smaller set of changes.

Second: NAT demonstrably doesn't harm the interests of most Internet users, because a huge fraction of satisfied Internet users are already NAT'd. But if you believe Internet user interests are harmed by NAT, you also must believe they're harmed by IPv4, which has no functioning multicast or workable group messaging and has a security model designed in the '70s. At this point, arguing for a NAT-less Internet is de facto an argument for a massive software upgrade. If we're going to upgrade, let's upgrade to something better than IPv6.

Finally: we already have hoarding. It's just hoarding of fiat allocations.


Overlay networks do not fulfill the performance requirement, and your comment completely ignores that aspect. Overlay networks would be perfect if we could have them without the trade-offs. Who here would not use Tor if its latency were as good as the rest of the Internet? Who here would complain if Skype-like networks were decentralized? The problem with overlay networks is that you have to sacrifice either latency or decentralization to get them, and with IPv6 you don't - ergo, overlay networks cannot solve the problem, because they can't improve incrementally without taking on serious downsides along the way. Invent an overlay network that is decentralized and adds zero latency, and that network would easily beat IPv6.

As for NAT, you are making an argumentum ad populum to counter evidential claims. NAT adds complexity to the design of software that operates over the network; proof of this exists in RFCs and lengthy design-considerations documents. There is also evidence that such complexity adds to the cost of developing software, and equally that increased development costs mean increased prices. NAT traversal for many services also adds bandwidth costs and increases latency. I have a hard time understanding the argument that increased development costs, higher latency, and more bandwidth would be neutral for the user. The fact that a large number of users, mostly confined to a single ISP, are content with higher costs and higher latency doesn't seem to me to be a good argument in favor of NAT.


Vast amounts of content are delivered today on overlay networks (again: we call them CDNs). Overlay networks have enabled the current scale of content delivery on the Internet. Your performance concern --- about an overlay design you haven't even sketched --- is worse than handwaving: it can be falsified even without asking you to clarify.

I wish you'd stop trying to make me defend the NAT service model, because that argument is extremely boring. My point, which I think sees overwhelming evidence from just a cursory look at the modern Internet, is that most users are not harmed by NAT. Innovation continues despite its pervasiveness. We should use the time NAT has bought us to come up with something better than IPv6, which continues to bake critical policy decisions into $60,000-$200,000 Cisco router and switch chassis.


Legacy addresses, meaning those registered before ARIN's creation in 1997, are essentially property. Otherwise, they'd force you to pay for them.


No third party can call memory in some other third party's router's RIB their property. All it takes to overcome that hurdle is filtering.


I agree, but for "legitimate" use of address space ("resold" legacy or otherwise), why would this happen?


IPv6 support is pretty much standard in routers, applications, libraries, etc. these days, so we're not "going to forklift a whole new collection of libraries and programs onto everyone's computer" - all these libraries have been quietly ported over the last decade or so...

Really, CG-NAT is such an ugly hack that I think its only acceptable use is as a stopgap for somebody who already has IPv6 and still needs IPv4 connectivity to services that are behind the times.

Overlay networks are interesting, but that's no reason to not have the ability to have end-to-end IP routeability when pretty much the entire core of the Internet and many internet services support IPv6 already.


"IPv6 support is pretty much standard in routers, applications, libraries, etc. these days, so we're not "going to forklift a whole new collection of libraries and programs onto everyone's computer" - all these libraries have been quietly ported over the last decade or so..."

So what? DJB's linked criticism correctly predicted that would happen, and also correctly predicted that it would not cause any significant uptake of IPv6.

The issue is that 98% of the ISPs' customers will be happier with CG-NAT than with an IPv6 address, so the ISPs are going to spend money on the former and not the latter. This will be true as long as a majority of their customers connect to even one server without deployed IPv6.

The vast majority of people connected to the internet consider themselves a client, not a peer. CG-NAT is better for you than ipv6 if you are a client that wants to talk to even a single ipv4-only server.


So 98% of ISP users do not have an Xbox or PS3 or Wii or Skype?


CG-NAT breaks console multiplayer for most games (those without dedicated hosting). It will break VoIP systems such as Skype and SIP, and it will break in-game voice comms.


Guess what has no IPv6 support? Xbox, PS3, and Skype.

I will point out, though, that Dual Stack Lite may end up being cheaper for ISPs than NAT444 because CGNs are relatively expensive and native IPv6 traffic (including Google and Netflix) doesn't have to go through a CGN.


I actually have none of these; how does it dispute my argument? I bet all of these will work over CG-NAT.


> I actually have none of these; ... I bet all of these will work over CG-NAT.

Yes. Let's gamble the indefinite future health of the internet on what works for you, a single point of reference, right now, at the very beginning of the IPv4 shortage, without a single thought spared for use-cases not concerning you.

That sounds like a very good and not at all short-sighted strategy.


That's not what I intended; I don't know what the issues are with any of these w.r.t. CG-NAT, since I don't have or use any of them. It was a request for clarification (you know, the part of my comment you turned into an ellipsis).

I do know that every place I have been to the PS3/Xbox has been behind local NAT, so I would be surprised if CG-NAT broke these; my understanding also is that both of them have a central service for game-discovery which means there is no reason they couldn't implement NAT traversal there.

I also never said that CG-NAT wasn't more short-sighted than ipv6; rather that the ISPs have no motivation to deploy ipv6 and much motivation to deploy CG-NAT.


> most Internet users do not need a first-class address

They would if they could use them, that is, if developers didn't "have to assume large portions of their users don't have direct Internet connectivity". How about we solve that instead?

> We have only the most basic level of experience with overlays --- BitTorrent, Skype --- but the experience we've had seems to indicate that if you have a problem users care about, overlays tend to solve it nicely.

How well would BitTorrent and Skype work if everyone were behind carrier-grade NAT? How could supernodes function?

> I don't think I see the reason why IP can't just be to 2020 what Ethernet was to 1990: a common connectivity layer we use to build better, more interesting network layers on top of.

Because it imposes stupid restrictions on those connections that it's supposed to be serving. Like not being able to connect any two arbitrary endpoints.


Your second question is the only one I care about. The answer is, "just fine", especially so if not everyone is behind CG-NAT. Surely you're creative enough to devise ways for two parties on the Internet to rendezvous through a third party server. We already have overlay networks that are NAT-compatible; they're called CDNs.


Isn't the issue here that pretty much everyone would be behind Carrier Grade NATs? For every 1000 or so servers (or CGNs) an ISP adds to a network, 1000 home users must be NATted. I think you can rule out vhosts and the like for NAT traversal use, so that means you need to supply a VPS or dedicated server for this and pay extra for the bandwidth/IP address. Alternatively, you can use 6to4/Teredo/native IPv6 right now as your "overlay network" to address clients directly and avoid all that at the expense of using up CGN mappings. Bearing in mind that Windows 7 already has decent IPv6 support along with Teredo on by default anyway, why bother trying to work around the NAT?


No, everyone would not be behind carrier grade NATs. Presumably, in a dystopic future where CG-NAT became the new norm, the outcome for people like us is that we'd pay extra every month for our Internet service.

The rest of your comment presumes that the only connectivity on the Internet is via IP packets. But that's not true; it's an assumption based on historical patterns of access. Instead, assume the emergence of a routed message relay substrate built out of TCP connections (or even best effort SCTP or some other TCP-friendly datagram service). You'd "connect" to that next-generation Internet by making the same kind of connection your browser does, and having done so would be off to the races.
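
To make that concrete, here is a minimal sketch of the simplest possible third-party rendezvous over plain TCP. The port number is arbitrary and there is no error handling; this is not any existing protocol, just an illustration that two NAT'd peers can be stitched together by a relay they both dial out to:

    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);            /* arbitrary rendezvous port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 2);

        /* Both peers connect outbound, so neither NAT ever has to accept an
           inbound connection. */
        int a = accept(lfd, NULL, NULL);
        int b = accept(lfd, NULL, NULL);

        char buf[4096];
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(a, &fds);
            FD_SET(b, &fds);
            if (select((a > b ? a : b) + 1, &fds, NULL, NULL, NULL) < 0)
                break;
            if (FD_ISSET(a, &fds)) {            /* relay a -> b */
                ssize_t n = read(a, buf, sizeof buf);
                if (n <= 0) break;
                write(b, buf, n);
            }
            if (FD_ISSET(b, &fds)) {            /* relay b -> a */
                ssize_t n = read(b, buf, sizeof buf);
                if (n <= 0) break;
                write(a, buf, n);
            }
        }
        return 0;
    }

A real substrate would obviously need naming, routing across many relays, and congestion handling on top of this, but nothing below TCP has to change for it to exist.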

This stuff makes me want to blog again.


> There are large companies pointlessly squatting on huge allocations; some of them assign routable IP addresses for every desktop in their network, despite (sanely) firewalling them off entirely from the Internet.

Ptacek, I'm a bit surprised to hear this from you of all people. Globally-unique addresses are very useful for things other than direct end-to-end routability.

Financial institution question: "What was the last device to hit that mission-critical system?" Answer: "Well, the access logs show 10.100.1.4, but that's the Internal --> DMZ NAT address, so I'll need to check the firewall logs to see."

Without NAT in play, network configurations become much easier to conceptualize. ACLs become easier to deal with. Things become more clear.

It's a bit of a slur to call even the legacy class-A allocation squatting; being able to do sparse allocation is a godsend. I'm looking forward to IPv6 for this alone.

Finally, I cringe at anything that dictates or predicts what an end-user needs: I don't think it's unreasonable to expect a global communications network to provide globally unique addressing.


> I don't think it's unreasonable to expect a global communications network to provide globally unique addressing.

This a million times. People growing up these days, behind NATs, probably never experienced the internet like it was in the early days.

Back when anyone online could offer anything to anyone else without having to resort to "hosting providers" and confined to what they "supported". Because you didn't need anyone to "provide" you with "hosting", because, hey, you were already online!

That sort of openness allowed the internet as we now know it to flourish and develop. Who are we to deny the future the same possibilities?


You've totally missed my point. You read my comment as arguing that we shouldn't have globally unique addresses. My argument is that we shouldn't wait for IPv6 to provide unbounded unique addresses, and in multiple addressing domains and with different service models. There is no reason that we need to be held hostage to Cisco and the IETF.


"We can get new features and unbounded direct connectivity with overlay networks."

Unless routing is de facto illegal (in Canada, a prior bill would have made routing come with certain data-retention and interception obligations, with non-conformance being a crime punishable by $50K to $250K per day), or carries a prohibitive liability (e.g., Tor exits). Skype got a foothold in a different political environment (and its proxying isn't well publicized), and BitTorrent isn't a general proxy (and politicians would love to ban it, even though that's a technical non sequitur). If I could afford to run an IPv6 tunnel + open router, I would (I would also love to run an open WiFi router), but I also don't want my door kicked in at 2 AM, so it would be nice if my ISP helped too. These are pretty big obstacles to experimenting with large-scale network alternatives. (Not disagreeing, just making an observation.)


NB: Skype no longer does the Supernode thing since Microsoft took it over. The work of the supernodes is now done by Linux boxes in Microsoft datacentres http://arstechnica.com/business/2012/05/skype-replaces-p2p-s...


It's not necessarily a waste to assign globally unique addresses to internal networks that will never see the unfirewalled Internet, because the uniqueness means that when that company is merged with some other company and you want to route between their internal networks, you can do it without renumbering or NAT.



