
This is nice, but with increased speed, they should also require ISPs do not impose a data cap to be considered broadband.


I'm actually on the opposite side of this. I wish there were more metered options available for data, especially for wireless. I'd love to see an offering with a low monthly connection fee (e.g. $1-5), along with a sensible price per GB (e.g. double the cost of transit, so there are decent margins, but not something crazy like $10/GB).
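As a sketch, such a metered bill might look like this (the $3 base fee, $0.25/GB rate, and $70 flat-rate comparison are all hypothetical illustrative numbers, not real offerings):

```python
# Hypothetical metered plan: low connection fee plus a per-GB rate.
# All prices here are illustrative, not from any real ISP.
def metered_bill(gb_used, base_fee=3.00, per_gb=0.25):
    return base_fee + per_gb * gb_used

FLAT_RATE = 70.00  # a typical flat unlimited plan, for comparison

for gb in (50, 200, 500):
    print(f"{gb:4d} GB -> metered ${metered_bill(gb):6.2f} vs flat ${FLAT_RATE:.2f}")
```

Under these made-up numbers the two models break even around 270 GB/month; lighter users come out ahead on metering, heavy users on the flat rate.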

Assuming that for most consumers the end price averages out to be the same, the benefit of metering is that it puts an incentive on the data provider to make its speeds as high as possible. Higher speeds will result in more bandwidth usage (e.g. YouTube/Netflix will default to a higher data rate), which will result in more profit for the company. The desires of the customer and of the provider are aligned: faster and more stable internet leads to a better customer experience and more profit.

One of the issues I see with fixed-speed, unlimited-data plans is that providers are incentivized to oversell the fixed-speed bandwidth, leading to "up to 100 Mbps" connections where that's a theoretical max, not the average. Under this model the desires of the customer are misaligned with the desires of the provider: the provider wants the customer to use as little data as possible, because the less customers use, the more plans can be sold on the same infrastructure. This can be addressed with rules and regulations, but I prefer a situation where I know the customer and provider have aligned desires.


It makes no sense to price wireless data based on IP transit costs.

IP transit costs per GB are a fraction of a hundredth of a penny.

It barely makes any sense to bill consumers per GB on any terrestrial network.

In Northern Europe a typical 4G SIM with a monthly data quota is $9 per month. An unlimited 4G SIM with all you can eat data is $18. How many price points do you need in between?

Heck, international 4G/5G data roaming fees are capped at $2/GB and by 2027 the cap will be $1/GB.


Yes, metering is the rational thing to do.

But caps are different and don't work the same.


The metered pricing model doesn't align with the reality of data costs for ISPs, which are often negligible (thanks to technological advances and especially peering agreements). This makes the metered model more of a revenue strategy than a necessity. It also conflicts with basic tenets of net neutrality.


Well, twice the price of transit is ... NaN? Transit isn't generally charged per byte; it's typically billed by capacity (e.g. 95th-percentile Mbps).


At least for wired connections, data caps are utterly artificial [1].

> Network congestion arguments in this debate operate on an unsubstantiated and uncontextualized assumption of scarcity — there is only so much bandwidth, and a few people are going to use it all. Yes, some network congestion arguments have prevailed with wireless broadband because of the atmospherical and technical limitations of the medium. However, congestion is not as consequential for fixed broadband. Broadband providers reason that instead of limiting these internet “super-users” (which would be discriminatory and litigated accordingly), they need to implement data caps for all consumers, which they claim is fair and unbiased. However, scarcity is not a reality for broadband providers, even with super-users. Internet functions by “statistical multiplexing” meaning that bandwidth is dynamically allocated and reused without a limit “[u]nlike other utilities such as water, electricity, gas or oil.”

> This means that no super-user is consuming bandwidth at the expense of other users; you’re not going to receive less internet because your neighbor runs a Twitch channel, for example. Rather, super-users are simply using more internet more often, and the “super-user discount” (the fact that broadband providers aren’t able to charge them, specifically, for this “above normal” usage) angers providers more than anything else. This brings us to the real reason broadband providers apply data caps to consumers: money. Applying data caps to all of us, therefore, enables broadband providers to pat themselves on the back for devising clever “overage” fees.

As time goes on the same will become true for the majority of wireless usage, if it isn't true already.

Sending twice the data at a time doesn't cost the ISP twice the money. But large ISPs with regional monopolies continue overcharging you [2] while committing subsidy fraud [3] and subsidy obstruction [4] at every opportunity (sometimes with help from corrupt legislators [5]).

[1] https://publicknowledge.org/no-cap-the-truth-about-data-caps...

[2] https://news.ycombinator.com/item?id=38105873

[3] Pick any one of these links. https://www.techdirt.com/2020/10/06/mississippi-says-att-too... https://www.techdirt.com/2023/02/15/report-shows-comcast-con... https://www.techdirt.com/2023/02/16/verizon-t-mobile-oversta...

[4] Pick any one of these links. https://communitynets.org/content/monopoly-providers-mire-nt... https://www.techdirt.com/2023/02/15/report-shows-comcast-con... https://www.techdirt.com/2015/04/16/alec-threatens-to-sue-cr...

[5] Ditto. https://www.techdirt.com/2021/02/19/new-bill-tries-to-ban-co... https://www.techdirt.com/2022/04/11/illinois-missouri-and-ne...


And that speed should be symmetrical. We keep pushing the boundaries of content going out.


A huge portion of people don't need a massive upstream. This sounds more expensive for basically no benefit (for most).


A lot of people need enough for at least a steady Zoom/FaceTime call, and that's only going to increase over time.


potentially multiple simultaneously if mom&dad have separate jobs with kids attending classes.

asymmetrical is as antiquated as dial-up. there's no reason for it in 2023.


You don't need more than 10 Mbps for that though. How often are people uploading 1+ GB? For me at least the answer is never.


Not many are using gigabit upstream, but Spectrum’s 300 megabit down plans in my area come with like 20 megabit up. I’d get noticeable stutter in calls if I browsed the web during one.

Thankfully symmetrical gigabit fiber came this year.


If I take a video on my phone and sync it to the cloud that's easily more than that. And if that takes up the whole 10 Mbit, video calls get unusable.


The majority of people is not the qualifier for this argument. It's about what is actually a good definition for the term "broadband" in this day and age.


A good definition shouldn't apply to the majority of people?


> A good definition shouldn't apply to the majority of people?

If "good definition of minimum broadband speed", then "applies to the majority of people". The converse is not true. It is not the case that if "applies to the majority of people", then "good definition of minimum broadband speed."

1. Whether a speed applies to a majority of people is necessary but not sufficient.

2. You are unknowingly accepting being ripped off. It's not reasonable for big ISPs like Comcast to offer me 300 Mbps download / 15 Mbps upload for $70 a month (might've been $90, but assume $70) while EPB of Chattanooga [3] offers 1 gigabit symmetrical for $67.99 a month. What speed any individual actually needs doesn't have to come into the picture. In matters of consumer protection, the principle of the thing matters just as much as actual consumer needs.

Today's internet technology (particularly optical fiber [1], paired with hardware implementing DOCSIS 3.1 or 4 [2]) is fully capable of providing 1 gigabit symmetrical for "the majority of people", even in rural areas. Moreover, in the long term, transitioning to fiber would be less expensive for the big ISPs like Comcast [4], but Comcast keeps raising prices on broadband over decades-old copper wires and committing subsidy fraud [5]. Don't let big ISPs define "good enough" to be much lower than the technology and its price allow.

Going by the Chattanooga metric, 100 Mbps symmetrical should cost no more than $10, and anything lower is not a good definition of minimum broadband speed for the present (never mind the future).
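The Chattanooga metric works out as follows (prices and speeds are the ones quoted in this comment):

```python
# Price per 100 Mbps, using the figures quoted in this thread.
epb_price, epb_mbps = 67.99, 1000         # EPB Chattanooga, 1 Gbps symmetrical
comcast_price, comcast_mbps = 70.00, 300  # Comcast example, download only

epb_per_100 = epb_price / (epb_mbps / 100)
comcast_per_100 = comcast_price / (comcast_mbps / 100)

print(f"EPB:     ${epb_per_100:.2f} per 100 Mbps (symmetrical)")
print(f"Comcast: ${comcast_per_100:.2f} per 100 Mbps (download only)")
```

That's roughly $6.80 vs $23.33 per 100 Mbps, before even counting the asymmetric upload.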

[1] https://www.eff.org/deeplinks/2019/10/why-fiber-vastly-super...

[2] https://en.wikipedia.org/wiki/DOCSIS#Versions

[3] https://epb.com/fi-speed-internet/gig/

[4] https://www.eff.org/deeplinks/2020/06/why-slow-networks-real...

[5] Pick any one of these links. https://www.techdirt.com/2020/10/06/mississippi-says-att-too... https://www.techdirt.com/2023/02/15/report-shows-comcast-con... https://www.techdirt.com/2023/02/16/verizon-t-mobile-oversta...

--------Everything below this line is bonus.--------

The following excerpt is from an article about data caps [6] but applies equally well to the state of broadband pricing in general.

> Network congestion arguments in this debate operate on an unsubstantiated and uncontextualized assumption of scarcity — there is only so much bandwidth, and a few people are going to use it all. Yes, some network congestion arguments have prevailed with wireless broadband because of the atmospherical and technical limitations of the medium. However, congestion is not as consequential for fixed broadband. Broadband providers reason that instead of limiting these internet “super-users” (which would be discriminatory and litigated accordingly), they need to implement data caps for all consumers, which they claim is fair and unbiased. However, scarcity is not a reality for broadband providers, even with super-users. Internet functions by “statistical multiplexing” meaning that bandwidth is dynamically allocated and reused without a limit “[u]nlike other utilities such as water, electricity, gas or oil.”

[6] https://publicknowledge.org/no-cap-the-truth-about-data-caps...


I'm not talking about costs, data caps, or anything else. I'm asking if a specified, asymmetric speed is really insufficient for most people. We don't need to define something that covers 99.9% of the population, we need minimums that are roughly sufficient for any given person. If a cable can only move a certain amount of data at a time, why not engineer for the direction the vast majority of it goes already?

>It is not the case that if "applies to the majority of people", then "good definition of minimum broadband speed."

I don't see why this wouldn't be the case. This isn't a physics problem with a right/wrong answer; it's a balance of the wants/needs of corporations vs. individuals. You're only talking about half of the equation, so forming logical statements on it is incomplete to begin with.


Let's pitch symmetrical, let them counter with 20Mbit up per 100Mbit down, and call it a victory.


i think you're at about 1/5 of the minimum speeds we should be countering with: 100 Mbps up, 1000 Mbps down


I just noticed the document does prescribe 100 Mbps down, 20 Mbps up.

And yeah, I could go for a gigabit minimum.


BS. Not sure why you think that, but every system has cloud backup these days, folks regularly broadcast multiple video streams at once from home, and a plethora of other activities require upload.

Please stop spreading such misinformation. Generous upload costs the ISP essentially nothing; stop crippling it just because you don't understand its use.


And throw in an availability requirement so my gigabit connection doesn't drop to 10kbps every Friday night when everyone is using the internet.


Add in some moderate packet loss and you’ve got CenturyLink fiber.


I don't care if it's symmetrical. I care that I have enough upstream.

I'd obviously rather have a useful 500/200 than a 100/100 or 50/50, even though those are symmetrical. (I think my service now is around 400/20. I'm way happier with that than I would be with 50/50.)


My upload speed is a mere 1/35th of my download. Affordable symmetrical needs to be a thing, especially with a certain ISP monopoly.


That's what gets me. Is there a technical reason why symmetrical connections would be more expensive than asymm?


The technical reason is that by restricting upload speed they can offer a higher download speed, and that is what most users care about.


I find that reasoning suspicious. Ever notice how most of the muni and co-op internet providers offer symmetrical speeds while AT&T and Comcast don't? My bet is that there's no real technical limitation; it's simply price segmentation.


Fiber is symmetric, coax is not. This isn't some grand conspiracy, just how the underlying engineering works.


In the UK, Openreach FTTP is heavily asymmetric. Network operators aren't entirely at the whims of asymmetric standards: network operators want asymmetric services.


AT&T does offer symmetrical speeds. Most PON-based FTTH ISPs I've encountered do.

There are technical reasons for offering asymmetrical speeds on coax connections. There are only so many channels on the wire, a lot are still dedicated to television, and people mostly care about download, so ISPs prioritize it. PONs don't carry television and are fundamentally different in design, so it's easier for them to support symmetrical bandwidth.


There is a technical limitation on cable and DSL networks but not on other techs like fiber or wireless.


This is nonsense. Fiber cables are always capable of the same speed in both directions at once. The actual reason is so they can sell you an overpriced "business" plan if you need upload speeds from the current decade.


A very small part of the country is connected by fiber, and fiber connections are always symmetrical. The vast majority of consumers still get internet via coax cables, and bandwidth is very much an issue on those.


No, a very small part of the last mile is connected by fiber. Trunking is always fiber these days. As are local neighbourhood branch points, if your provider evolved from a telephone company. (If you live in an apartment building served by an ex-telephone ISP, you likely have fiber run all the way to the network closet of your building, with only the per-subscriber in-wall wiring switching over to copper.)

ISPs that evolved from cable companies might still be using cable hubs with a common collision domain, but only a relatively small number of subscribers will be riding the same copper; it's just cheaper these days to convert the signal to fiber as early in the signal path as possible. Glass fiber is cheaper than metal wire, and you need fewer strands (and so fewer switches).


Yes, *fiber* can. But the majority of Americans use cable/HFC (DOCSIS) technologies, which were designed as a one-to-many closed-loop TV broadcast over coax. Until the incoming DOCSIS 4.0 deployments, it has always been asymmetric as a limitation of the technology.


Fiber is not ubiquitous. Many of us are still on DOCSIS over copper, in which case the person you are replying to is exactly correct: upload speed can be reduced for additional download speed.


Though the fiber system as a whole may not be capable of symmetry. For example, GPON runs 2.5 Gbps down / 1.25 Gbps up, and 10G-PON 10 Gbps down / 2.5 Gbps up.


True point-to-point FTTH is naturally symmetric; PON is not, nor is cable.


No, I think you might be confused. We all know that the fiber for the download link is at least 10x wider than the fiber for the upload channel. There's just no way to squeeze the same number of bits through the smaller "tube" /s


Assuming we are talking about Cable internet: If they were starting from scratch all possibilities are equally expensive, but they have installed equipment in your neighborhood which assumes a specific channel allocation.

They have increased the channel allocation for upstream in recent years and DOCSIS 4.0 allows symmetric connections, but the demand is for downstream, and the fewer channels they allocate to upstream the bigger number they can advertise.


It depends on the layer 1 medium. In a shared RF environment (dialup, cable, wifi, p2p RF) you normally have a shared bandwidth space. This is typically "channelized", and you can pick how many channels you want for "up" and how many for "down". This isn't normally dynamic; it needs to be fixed by the standard, or at the very least by the head-end equipment. It's more expensive in the above RF spaces because cabling gets more expensive the more home runs you want to do.

With fiber it's a little easier: 1:32 PON splits still give a _lot_ of bandwidth for upstream, because it's easier to isolate the adjacent wavelengths (you can pack them in tighter) and the normal noise floor is lower. With 1:1 fiber DIA it's a total non-issue, and you can do whatever the noise of the fiber and the limits of the transceivers allow (typically optical packages in SFP/SFP+ packaging these days).
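The fixed channel split can be sketched like this (the channel count and per-channel rate are made-up round numbers for illustration, not any real DOCSIS profile):

```python
# Toy model of a channelized shared-RF plant: a fixed pool of channels is
# split between downstream and upstream, and the split can't change on the fly.
CHANNELS = 32          # hypothetical total channels on the medium
PER_CHANNEL_MBPS = 40  # hypothetical usable rate per channel

def split(upstream_channels):
    down = (CHANNELS - upstream_channels) * PER_CHANNEL_MBPS
    up = upstream_channels * PER_CHANNEL_MBPS
    return down, up

for up_ch in (2, 8, 16):
    down, up = split(up_ch)
    print(f"{up_ch:2d} up-channels -> {down:4d} down / {up:3d} up (Mbps)")
```

With the whole pool fixed, every channel moved to upstream comes straight out of the advertised downstream number, which is why head ends favor download-heavy splits.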


AFAIK, PON (Passive Optical Network) has inherent reasons to be asymmetric.


Not really. I'm sure asymmetric PON is cheaper but there are plenty of symmetric EPON modes.


My assumption is torrents are a threat to streaming services already and potentially a death blow if uplink speeds are too fast.


why would uplink speeds being fast contribute to torrents outside of the first few seeders?... you would just have more peers contributing smaller amounts


When I worked at a small ISP, this was indeed a threat. We offered symmetric speeds though and dealt with it via traffic shaping.

It really was just the odd teenager torrenting here and there.


Also new media (streamers and tubers) rely on uploads and are eating cable’s lunch. It’s just extortion through monopoly.


Asymmetric speeds let ISPs oversell their network (more).


It blows my mind that you guys in the US still have to deal with data caps ... It's been years in the UK since that's been a thing for broadband, and we're close to the point where mobile data will be the same


Western Europe is covered in little hamlets and the odd big city, but it is amazingly densely populated as a region.

So cell service, internet, trains, and cheap flights are all easier to provide there.

Whereas here in the much more sparsely populated North America, those things are all pricier, and it's easier to accommodate big trucks, big highways, giant Costcos, etc.


For residential internet most people have no data cap and those who do have a very large cap (over 1TB/month). Even I as a pretty active user only use like 500GB/month on my desktop according to Windows.


1TB/mo is not “very large” since you can get that kind of data allocation on a $2-4/mo server.


Residential internet connections aren't designed for the same kind of usage as some VPS.


Ok, but we're talking about a whole computer, with redundant power, connectivity, memory, and disk storage, compared to throughput on already-installed, non-redundant wires. One of those certainly costs way less than the other, but Comcast still charges multiple orders of magnitude more for it.


It's not a whole computer, it's a tiny fraction of one. And that computer sits in a giant datacenter, hundreds of feet from massive ISP interconnections and only a few hops of hardware away. Meanwhile the residential customer is probably many miles from the nearest interchange, sometimes hundreds or more.

Location, location, location.

I do agree the over-the-cap costs imposed by residential ISPs are many, many multiples of their true cost, but they're mostly there to discourage use, not to be a reasonable price. If you want business-level usage, hop onto their business offerings, where pricing models are designed for higher average load.


You only see data caps if you have one choice in broadband. Once there are two or more, data caps go away and prices are cut in half. If there are no competitors, the provider will milk the customers for cash.


I suppose at that point it's a geography issue which is less of a problem here due to the size of the country


It depends on your ISP. My phone has a data cap of 100 GB/month. Once I go over that I am throttled when the network is congested.

My home internet has no data caps.


I don't know about other ISPs, but Xfinity/Comcast residential internet has a data cap of 1TB/month, regardless of the bandwidth tier you pay for. Although you can pay a (substantial) fee to remove that data cap.


Removing the Xfinity data cap would increase my bill by 60%. It is outrageous.


Comcast Business has no data cap and so might be an option


I wish ISPs were required to advertise the number of seconds you're allowed to use their service at full speed. Ooh, a gigabit pipe with a terabyte cap? You're allowed to use it for a little over 2 hours per month before throttling or extra charges kick in.

I believe it's false advertising when ISPs claim their price is $X per month if it's only for roughly 1/300th of a month. I understand and support the idea of overprovisioning, but a 300x ratio is insanity.
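The arithmetic for a 1 TB cap on a gigabit line:

```python
# How long a capped gigabit line can actually run at full advertised speed.
CAP_BYTES = 1e12    # 1 TB cap
SPEED_BPS = 1e9     # 1 Gbps

seconds = CAP_BYTES * 8 / SPEED_BPS        # 8,000 seconds
hours = seconds / 3600
month_seconds = 30 * 24 * 3600

print(f"{hours:.1f} hours at full speed")                  # about 2.2 hours
print(f"1/{round(month_seconds / seconds)} of the month")  # roughly 1/324
```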


Or at least require that uncapped tiers be available.


They should also require the ISP to use a reasonable method to count the amount of data if there is a data cap. No matter how I count the data in and out of my gateway running OpenWRT, Comcast/Xfinity consistently counts and charges me for at least 20% more data with no discernible reason why.

I think somewhere in fine print they claim they get to round up to the nearest KB or MB every minute or some asinine made-up scale that only benefits them.


So that someone's grandma who hardly uses her connection can subsidize your extreme over-consumption of digital media?


Yes, for the same reason that roads used by daily drivers are subsidized by remote workers.


Bad analogy, because it actually costs more to maintain roads that are used more. ISPs don't have to pay extra if I consume more data packets than my neighbor. Overage charges are just a money grab on their part, nothing more.


Also a bad analogy because the roads are paid for with gas taxes, which WFH workers typically purchase less of


A minority of US road costs are paid for with gas taxes, and it depends tremendously on the location. https://taxfoundation.org/data/all/state/states-road-funding...

WFH workers likely pay more per mile travelled in taxes than someone who super-commutes.


In the US, we tax roads by usage via fuel (gasoline and diesel) tax [^1]. It's a simple solution: the more miles you drive, the more fuel you use; the more fuel you use, the more tax you pay. Vehicles that use more fuel per mile driven tend to be larger and thus cause more wear on the roads.

It's not without its faults though. Fuel usage isn't directly related to cost of road maintenance, it's just a very rough approximation. Fuel usage has mattered less and less over the past couple of decades with hybrids and EVs – though this is addressed in some places by imposing an extra EV tax (since EV drivers would pay no fuel tax but would still cause wear on the roads).

[^1]: https://en.wikipedia.org/wiki/Fuel_taxes_in_the_United_State...


A minority of roadbuilding funds come from fuel taxes in the US. https://taxfoundation.org/data/all/state/states-road-funding...

The additional problem with this is that road wear scales a lot faster than fuel usage. https://en.wikipedia.org/wiki/Fourth_power_law


A lot more wear on the roads in a lot of cases, as wear scales roughly with the fourth power of weight, and EVs weigh a lot (~30% more than a comparable ICE car).
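To put numbers on the fourth-power rule (the ~30% EV weight premium is the figure above; the 10x truck axle is a further illustration):

```python
# Road wear scales roughly with the fourth power of axle load
# (the "fourth power law").
def relative_wear(load_ratio):
    return load_ratio ** 4

print(f"EV at 1.3x the weight: ~{relative_wear(1.3):.1f}x the wear")
print(f"Truck axle at 10x the load: {relative_wear(10):,.0f}x the wear")
```

A 30% weight increase nearly triples wear under this rule, while a heavy truck axle does on the order of ten thousand times the damage of a car's.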


Ironically roads should also have caps and/or tolls, so this take is strange. This is partly why gas and excise tax is a thing…


> Ironically roads should also have caps and/or tolls, so this take is strange.

We'll set up electronic tolls on your cul-de-sac and every light and stop sign, or toll you by mile driven using an odometer beaconing out to the tax authorities per drive and registered to your tax ID number (US) or equivalent. And then cap you on the number of miles you're allowed to drive per year on all cars unless you pay an additional fee.

Older cars must be retrofitted at the driver's expense.

Also the auditing and anti-fraud infrastructure must be instituted for all of this.

---

Or we can just do flat taxes and chill.


I'm absolutely down for vehicle registration fees to relate to miles driven and weight classifications, and major highways being funded by extra tolling.

The streets in my neighborhood cost way less per mile for maintenance than giant highways with massive bridges and interchanges. It makes sense they'd cost more to use.

Showing the real cost of these things to consumers might make them change their minds about riding the train. These days a lot of people don't even think about the cost of driving their car across town, but gosh that metro day pass is $5? How expensive!


> gosh that metro day pass is $5? How expensive!

Nah, a metro day pass should cost as much as however many miles the buyer traveled. Otherwise people who buy passes for just one stop would be subsidizing the people who travel from one end of a line to the other.

It'd be pretty shit to meter one and not the other.


Flat taxes are regressive and inefficient, but I suppose we can still chill.


The economy-wide productivity gains from having tax-funded roads mean it doesn't make sense for roads to be profitable, or even break even, as individual entities.

Similar thing probably goes for internet access.


Gotta be careful what you measure though. If I spend a bunch of money owning and maintaining my car, it's possible that money could've been spent on other things that would be better for the economy overall.


Taxes are not flat, just like caps, so there’s no contradiction.


Do you seriously want every road you drive on to charge you? Roads should be a public good that we all benefit from


Personally, that sounds pretty wonderful. Better yet, charge by vehicle weight. I'd like to be able to ride a bike on the road and have everyone think the guy in the car is imposing inconvenience rather than the other way around.


I would happily pay for congestion pricing because it would allow me to go where I'm going faster.

Right now we have this weird sunk cost and lack of obvious incremental charges to driving, that encourages driving when there are other better options for some trips.


You already are charged by every road you drive on, just not explicitly or proportionally.


What is ironic about your opinion disagreeing with the status quo in some countries?


It's ironic because they're arguing against caps while bringing up roads, but roads in the United States are in terrible shape partly because certain groups, such as construction vehicles, don't pay their fair share to begin with.

This is partly why the gas tax is even a thing.


Data packets aren't a limited resource. The costs for your ISP are fixed whether they serve you 1MB or 1TB. Data caps and overage charges are purely a money grab, not some fundamental economic requirement. So no, infrequent users aren't "subsidizing" anyone else.


This just isn't true, though. The limit for a network provider is peak bandwidth usage. When they sell a 2 Gbps connection to a customer, that 2 Gbps is not reserved entirely for that customer. It isn't as if the ISP's total bandwidth is 2 Gbps times N customers.

They over-subscribe because they know not every customer is going to be using the full 2 Gbps 24/7.

Now, you can argue that an ISP SHOULD provision that way, with the expectation that their customers are going to be using the full connection 24/7, but that would raise the cost per customer a LOT, and I don't think people want to be paying for an ISP that provisions like that. It would be a lot more expensive per customer, and it would end up with a lot of the bandwidth going unused most of the time.

If the average bandwidth used by customers goes up, it will require the ISP to pay for and maintain more circuits and cost more money. We can argue there are better, fairer, ways to limit usage, but it is simply not true that every customer downloading a lot more doesn't cost the ISP more money in the long run.
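A minimal sketch of that over-subscription math (the subscriber count, plan speed, and 25:1 contention ratio are all illustrative assumptions, not any real ISP's figures):

```python
# ISPs provision for expected peak load, not the theoretical worst case
# of every subscriber saturating their plan at once.
subscribers = 1000
plan_gbps = 2.0
contention_ratio = 25   # hypothetical 25:1 over-subscription

worst_case_gbps = subscribers * plan_gbps         # everyone at full speed
provisioned_gbps = worst_case_gbps / contention_ratio

print(f"Worst case: {worst_case_gbps:,.0f} Gbps")
print(f"Actually provisioned: {provisioned_gbps:.0f} Gbps")
```

If average demand rises across the board, the contention ratio has to drop and per-customer cost rises, which is the long-run cost described above.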


ISPs do face costs for peak usage (and for some level of "playing with the numbers"), yes, but economies of scale and peering agreements reduce these costs significantly. Bandwidth caps not only have a weak financial justification but also run counter to net neutrality.


How do bandwidth costs run counter to net neutrality? (Unless you are talking about certain types of traffic not counting against the caps)


Zero-rating skews the whole internet playing field, not just stifling small creators; it subtly (or not so subtly) influences user behaviour, leading to a constrained experience. Users might feel financially pressured to use certain services, missing out on potentially superior or more diverse options, which goes against the original ethos of the open internet.


Ok, so yeah you are talking about some data not counting against the cap. I agree, that violates net neutrality.

A pure cap that counts all traffic the same wouldn’t be, though.


would it be fair to say that bandwidth is just another product sold like plane tickets?


Yes? But with orders of magnitude more difference. Let's do a back-of-the-napkin calculation:

Say your ISP gives you 100 Mbit/s = 1.08 TB/day ≈ 32 TB/mo. On gigabit that'd be over 300 TB. While you do have some heavy torrenters, they are outliers.

Now I assume everything but TV/movie streaming is a rounding error for the average Joe. Netflix says 1-7 GB/h depending on quality. The average user watches ~3.2 h/day (wtf is wrong with people!), but that's only ~100-700 GB/mo: somewhere between 0.03% of downstream capacity (gigabit, low quality) and about 2% (100 Mbit, high quality).

Of course, people generally watch TV at the same time of day, so provisioning gets more complicated. But there's also no question that pooling bandwidth (over-provisioning) makes sense to reduce costs. The question is more about how much congestion is acceptable, and I wouldn't trust shitty monopolistic companies to behave. But if you can handle e.g. a Super Bowl or a World Cup final without degradation, you're probably good the rest of the year?
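The same napkin math in code (Netflix's 1-7 GB/h and the ~3.2 h/day viewing average are the figures used above; "capacity" assumes the line running flat out 24/7):

```python
# Reproduce the back-of-the-napkin streaming-vs-capacity numbers.
GB_PER_HOUR = (1, 7)   # Netflix, lowest vs highest quality
HOURS_PER_DAY = 3.2    # average daily viewing
DAYS = 30

usage_gb = [g * HOURS_PER_DAY * DAYS for g in GB_PER_HOUR]  # ~96 and ~672 GB/mo

def capacity_gb(mbit_per_s):
    # GB a line could move in a month if saturated 24/7
    return mbit_per_s / 8 * 86400 * DAYS / 1000

for speed in (100, 1000):
    cap = capacity_gb(speed)
    lo, hi = (u / cap * 100 for u in usage_gb)
    print(f"{speed:4d} Mbit/s: {cap / 1000:.1f} TB/mo capacity; streaming is {lo:.2f}%-{hi:.2f}%")
```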


With the crucial difference that the equivalent of 7/8ths of a seat is still useful when it comes to bandwidth, especially for home internet.


> The costs for your ISP are fixed whether they serve you 1MB or 1TB.

That's only true in a very abstract sense. If everybody went from using 1MB/day to 1TB/day there would be massive congestion issues and costs would increase as ISPs rush to install higher-bandwidth equipment. Put a different way: It is cheaper to construct a network where all subscribers consume only 1MB/day than a network where all subscribers consume 1TB/day, because the former can be done with much lower end equipment.


Well, it should be true in the concrete sense. If they can't consistently serve you at 100 Mbps or whatever else, then they shouldn't advertise it. "But everyone else is using it at the same time, so too bad" wouldn't work as an excuse in any other industry.


> If they can't consistently serve you at 100 Mbps or whatever else, then they shouldn't advertise it.

But that's true of almost every industry. Your bank advertises that you can withdraw your money at any time, but if everybody withdrew their money simultaneously there would be issues. A store advertises next-day shipping, but if everybody ordered simultaneously there would be issues. A house might have 100A power service, but if every house started burning 100A simultaneously there would be issues.

Put another way, what you're proposing is dramatically slower speeds for most users. ISPs are profitable, but they're not that profitable. Actual average usage on most lines is probably low single digit percentages, if not sub-1%. I just checked my line - my average use is about 200kB/s (for a total of about 500GB per month), or about 1% of my speed cap.
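Checking that with the commenter's own numbers (the 200 kB/s average and ~500 GB/month are theirs; the 200 Mbps plan speed is an assumed round figure for illustration):

```python
# Average utilization of a residential line.
avg_bytes_per_s = 200_000        # 200 kB/s reported average
month_s = 30 * 24 * 3600
plan_mbps = 200                  # assumed plan speed for illustration

monthly_gb = avg_bytes_per_s * month_s / 1e9   # ~518 GB/month
avg_mbps = avg_bytes_per_s * 8 / 1e6           # 1.6 Mbps

print(f"~{monthly_gb:.0f} GB/month at {avg_mbps:.1f} Mbps average")
print(f"{avg_mbps / plan_mbps * 100:.1f}% average utilization of a {plan_mbps} Mbps plan")
```

A 200 kB/s average really does land around 500 GB/month and roughly 1% utilization of a plan in that speed range.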


If you want every ISP to provision the full bandwidth for every customer, so that their network can handle all their customers using their full bandwidth at the same time, it is going to cost each subscriber a LOT more money or get a lot lower peak bandwidth. I think MOST people would prefer to pay less and be able to use the full speed for a fraction of the time.

If you want to know how much more expensive the connection would be if you expect to use the full bandwidth 24/7, just look at the cost you pay for transit in a datacenter. It is multiples of the cost home consumers pay at a per-bit level.

Most users are best served by being told the speeds they will usually get if they stay within average usage patterns, because that is what most people do.


Yes it would work in any other industry. Users can overload a power grid. Users can overload a telephone network. Users can overload a grocery store. NO industry can deal with critical overload without problems.


> "But everyone else is using it at the same time so too bad" wouldn't work as an excuse in any other industry.

Go to a grocery store in Florida before a hurricane and try to buy bread.


> The costs for your ISP are fixed whether they serve you 1MB or 1TB.

I cannot believe how anyone with a straight face can claim that the infrastructure and maintenance required to serve "n times 1MB" is exactly the same as the costs for "n times 1TB". This is so obviously not the case that I genuinely don't even know how to explain it.

You can serve your thousand subscribers on a dingy Pentium 1 if they're using 1MB, because it adds up to just 1,000MB. You wouldn't be able to serve just a single customer with the same hardware. Never mind the cabling etc.


There is some truth to what you say but Comcast charges me around $1 per gigabyte over my cap. It absolutely does not cost them that much more money to serve me.


Almost certainly not, no. That it's overpriced is of course a different thing. I suspect part of the reason these costs are so high is that they really don't want you to use more data, so they won't have to upgrade the network.


>> Data packets aren't a limited resource.

No, but bandwidth is. Particularly wireless where you can't just lay down another cable or fiber.


Bandwidth and data caps don't go hand in hand. Whether I download a 100 GB COD update right now at 1 Gbps, or watch 3 GB of Netflix a day at 10 Mbps over a month, I'm still using about 100 GB. But the bandwidth requirements are very different.

They either have infrastructure that can handle 1 Gbps per user, in which case they have the bandwidth available and data caps are nonsense; or they don't have the infrastructure, and are banking on their service being idle 90% of the time for most users. That's why my internet goes to crap after 5 o'clock, when everyone is getting home and starting to use it. And now suddenly I'm drastically overpaying for non-broadband internet because my ISP underprovisioned way too much.

This goes back to the point that everyone is making that ISPs shouldn't be able to underprovision to the point that it degrades performance during high usage times.

My ISP saw $800 million in profits on $2 billion in revenue in 2019. That's an AMAZING profit margin. They can afford to not underprovision, and when they don't under provision, then no, bandwidth isn't a limited resource.


> No, but bandwidth is

Which is why they already charge for bandwidth


This is how consumption of any resource works, in any group where the number of people using it is more than 1.

If I drink less water than you, am I subsidizing your water usage? I'd say so!

Not the hill you want to die on, trust me.


Except in many places you pay for water by the amount used (Islamic and some other countries excepted). This is the case for loads of resources.


No, so that ISPs are incentivized to actually invest in bigger pipes.


So it was a series of tubes after all...


>> So that someone's grandma who hardly uses her connection can subsidize your extreme over-consumption of digital media?

While I agree that grandma doesn't need 1Gbps, neither do I and neither do most people. So why mandate such a high speed for the label anyway? It seems like a way to disqualify things like Starlink that will have difficulty providing extreme speeds to large numbers of users at once.

The only time I'll use such a speed is when downloading some data - an ISO file or something similar. Streaming video does not require such (gbps) speeds even for multiple streams in the same house.


Why shouldn’t there be a cap? Bandwidth isn’t free.

It’s sad seeing people on a forum called hacker news having no idea how networking infrastructure works.


Because it's next to free (no additional cost to run the infrastructure we're already running), and we're nowhere near the overall bandwidth limits of our existing infrastructure. As evidence: the lack of caps didn't affect the quality of internet service for the entirety of 2020, when more people were online per hour than at any previous point in history.


Yes and no, depending on your plant build out you might have great interplant connectivity - 10G between each switch, 10G to the router, but the extraplant connections still cost and have bandwidth limits. So if you only use 2 gbps total during peak, you may not need to upgrade your plant to support higher speeds but you will have to pay more to your upstream provider to support it.

This is actually relevant to my ISP day job, we have a pipe to a well known backbone provider, and dual pipes to a statewide provider. We ran on the dual pipes for a long time before pushing our ISP traffic over the single backbone provider. These days, if that main ISP pipe goes down, it fails over to the dual pipes to keep everything running. But now that we are offering packages higher than 25/3 for DSL, terrestrial wireless, and cable, the dual pipes don't have enough bandwidth to keep up.

If we offer gigabit service over fiber (and we do) just a couple of customers actively using their whole pipe is an enormous chunk of the network compared to the 40 or so customers that same gigabit of bandwidth would serve on previous 25/3 packages. We don't have data caps or even contracts (small town benefits :P ) but there is a lot more to it for smaller ISPs than adjusting the rate limiting and packages we allow people to use.


It’s true that smaller ISPs have their challenges, but the cost for both interplant and extraplant connections has been generally decreasing. Economies of scale and peering agreements can mitigate many of these costs.


True, we connect our main ISP pipe to an internet exchange for that reason. It's also interesting to see how the industry is adopting pluggables for a lot of newer deployments because depending on what you need, there's a lot of flexibility now compared to even 5 years ago.


Also, when caps are implemented, they're ridiculously expensive and limited, especially in an age of streaming video. Expensive comically beyond any reasonable conception of the actual marginal cost of transiting the traffic.

Also - just look at LTE operators. They're also no-cap at this point (though subject to QoS at certain breakpoints, but they're typically reasonable breakpoints ime?). And that's in a SIGNIFICANTLY more capital intensive market - you have most of the concerns with terrestrial fiber, plus the joy of having to own RF spectrum and maintain towers etc.

So I think no-caps-and-required-speed requirements would just make sense and be required to keep ISPs from trying to backtrack on being forcibly dragged into the modern era like balky calves.


I think the issue is lack of competition. In the mobile market it's easy to switch to any of multiple nationwide operators; for cable ISPs in the US, AFAIK there is usually only one. (In my central European country there are at least 3 ISPs available, and AFAIK fiber to the building is in some way rented out to internet providers?)


In principle caps help with oversubscribing. If you want to use a 10GBit/s uplink to serve 5000 people with 100MBit/s each, you won't saturate your uplink nearly as often if you add a data cap.

Of course that's not all that different from restricting bandwidth, just that you are restricting average bandwidth while allowing for some burst. Still violates the spirit of having fast bandwidth, unless the caps are quite generous.
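The "restricting average bandwidth" point can be made concrete: a monthly cap is equivalent to an average-rate limit, independent of the advertised burst speed. A minimal sketch with an illustrative 1 TB cap:

```python
# A monthly data cap is equivalent to restricting *average* bandwidth
# while still allowing bursts at the full line speed.
def cap_as_avg_mbps(cap_gb: float, days: int = 30) -> float:
    """Average sustained rate (Mbps) that exactly exhausts the cap."""
    return cap_gb * 8e9 / (days * 24 * 3600) / 1e6

# A 1 TB (1000 GB) cap works out to roughly 3 Mbps averaged over the
# month, whether the advertised burst speed is 100 Mbps or 1 Gbps.
print(f"{cap_as_avg_mbps(1000):.1f} Mbps")
```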


They do help push people to use less, but for places like Comcast I see data caps as just a revenue stream increaser. The entire company strategy seems to follow this approach.


> Because it's next to free (no additional cost to run the infrastructure we're already running)

Aren't we effectively arguing CapEx versus OpEx?

Sure, an ISP's OpEx is tiny. But CapEx is huge, especially when you've got an old neighborhood that's all wired up with copper and you gotta replace it all with fiber going into the house.


Source? Running things requires labor, labor is getting expensive. Pricing isn’t set on simply the marginal cost of materials…


Running a fully utilized fiber line is exactly the same amount of labor as running a 10% utilized one.

The cost to track and bill people for caps and let them pay overages, though..


Chokepoints are peering exchanges, transit handoffs, and core network gear, which is sized for a percentile of utilization below constant 100%. Having implemented a billing system and utilization monitoring for a managed hosting datacenter environment, I can say it is trivial to collect metrics from equipment and bill accordingly. You don't want to nickel-and-dime or gouge customers, but you do want to ensure proportionality across the customer population.

Some combination of caps, overage charges, traffic shaping, and edge CDN appliances is needed (Netflix, Akamai, etc). Off peak unlimited is also a potential strategy to drive low priority transfer to low utilization periods. Pick your poison. There is a reason most of Netflix global infra is monster CDN boxes closest to customers.

https://openconnect.netflix.com/en/
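As a sketch of the "collect metrics and bill accordingly" idea, here is the classic 95th-percentile ("burstable") billing calculation commonly used for transit and colocation. The sample values are purely illustrative:

```python
# 95th-percentile billing: sort the interval samples (e.g. 5-minute
# average rates in Mbps), discard the top 5%, and bill the highest
# remaining sample. Short bursts don't set the bill.
def billable_mbps(samples_mbps: list[float]) -> float:
    ordered = sorted(samples_mbps)
    idx = int(len(ordered) * 0.95) - 1  # last sample kept after dropping top 5%
    return ordered[max(idx, 0)]

samples = [10] * 95 + [900] * 5   # mostly idle, a few big bursts
print(billable_mbps(samples))     # the 900 Mbps bursts are not billed
```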


> There is a reason most of Netflix global infra is monster CDN boxes closest to customers.

Because it would be ridiculously expensive to egress video streams directly from AWS. They've determined that it's cheaper to pay ISPs for rack space.


You misunderstand the model. They don't pay ISPs, they offer the appliance for free to the ISP for colocation, reducing the upstream networking costs. AWS egress being extortionate is a distinct issue. The cost to the ISP in the rack (cooling, power, space) is immaterial considering the transfer savings (not only opex, but reduced capex from needing to oversize network gear or specific ports for peak streaming consumption) over the life of the relationship.

If you want to learn more, lurk on the NANOG (North American Network Operators' Group) mailing list. It is common to monitor AS traffic flows to understand whether you can offload with CDN appliance relationships or how to improve your network architecture (which networks to peer to directly or which internet exchanges [IXs] to meetup at). These agreements sometimes happen at a bar over beers during NANOG events. Call Verizon and ask them how much a 1Gbps circuit is. Consider why Comcast built their own national backbone.

Also, some important history to remember. Fast.com measures your speeds to Netflix servers because some ISPs wanted to charge their customers and Netflix for that transfer (or to upgrade peering points) because of the revenue destruction from cable customers fleeing for streaming (net neutrality debate).

https://archive.nanog.org/meetings/nanog45/presentations/Nor... (Peering 101, Bill Norton [Equinix cofounder], NANOG 45)

https://drpeering.net/white-papers/Video-Internet-The-Next-W... (Video Internet: The Next Wave of Massive Disruption to the U.S. Peering Ecosystem (v1.7))

https://drpeering.net/HTML_IPP/ipptoc.html ("The Internet Peering Playbook")

https://arstechnica.com/features/2008/09/peering-and-transit... ("How the ‘Net works: an introduction to peering and transit")

https://arstechnica.com/information-technology/2014/07/how-c... ("How Comcast became a powerful—and controversial—part of the Internet backbone")

https://openconnect.zendesk.com/hc/en-us/articles/3600361636...

> How much does the appliance cost my organization?

> Appliances, including replacement appliances, are provided to qualifying ISPs at no charge when used within the terms of the license agreements.


Cost of the traffic is: hardware (staggered cost covered over many months), and running costs (subscription to higher tier internet carriers, electricity, rent of the premises, salaries etc). Most of this is covered by the monthly fees, to have a profitable business one must have those costs covered as a minimum. None of the above affects how much internet data goes through, zero or maxed out capacity, cost is the same.


This is patently false, given that optics can only support so many people at a given latency and bandwidth.


Let me throw that question back at you: why should there be a cap? You are paying ISPs for a certain bandwidth per month, and they sure as hell are not prorating your service based on data that you did not consume. If the ISP is incapable of delivering the agreed-upon bandwidth every minute of the whole month, then they are not meeting the SLA that you're paying them for.


If ISPs were like an electric utility, we'd see something like a cost per GB transferred in cents, with a minimum charge reflecting the cost of maintaining a connection of a certain speed to the network.

Which is algebraically identical to a monthly charge and data cap with overage charge. The main issue is the overage charge is too high, it should be like 1 cent per GB (Comcast is charging 20x that).
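A minimal sketch of that equivalence, with illustrative numbers (a $50 base fee covering 1 TB and a 1¢/GB overage rate; none of these are any real ISP's actual prices):

```python
# Base fee plus overage is the same function of usage as a minimum
# connection charge plus a metered rate with an included allowance.
def monthly_bill(used_gb: float, base: float = 50.0,
                 included_gb: float = 1000, per_gb: float = 0.01) -> float:
    return base + max(used_gb - included_gb, 0) * per_gb

print(monthly_bill(800))   # under the allowance: base fee only
print(monthly_bill(1200))  # 200 GB over at 1 cent/GB adds $2, not $200
```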


1c/GB would be a decent rate for home data transfer in 2023, but where would you set the free limit? For reference, at my house I have a 1Gbps line and it looks like in 48 hours I have downloaded 1TB of data (looking at the number of received bytes my AT&T device is reporting). Am I normal? I don't know, but AT&T does...


I think that's rather high compared to average, considering the standard cap for Comcast Xfinity residential is 1.2 TB per month, and they claim only "a very small percentage" of their customers use more.

I don't think the actual cap really matters if the per-GB and base pricing reflects the true costs. If it's low it means heavy users pay more, if it's high, light users pay more.


1TB in 48 hours is absolutely not a normal residential connection usage.

The average US residential customer uses a bit over 500GB/mo in data. You're doing 2x that in 1/15th the time.

https://www.allconnect.com/blog/report-internet-use-over-hal...


They already charge for bandwidth. People pay more for 500 Mbps than 50 Mbps. We are talking about total monthly data caps, which have nothing to do with bandwidth.


False. A month is a discrete amount of time that yields a potential maximum amount of data downloaded, and this amount is clearly related to the bandwidth provided, hence caps.

To think otherwise is to falsely believe they support unlimited throughput. Furthermore the bandwidth itself literally is a function of the transmission material and infrastructure, which also isn’t free…


I have a 500 Mbps connection with a 1TB data cap, both pretty standard numbers in the US. If I used the connection at its advertised speeds, I would exhaust my monthly limit in about 4 hours. So no, that relationship is non-existent. The data cap math instead is "99% of our users use < 1TB, so we will charge the rest more because we can".
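The arithmetic behind that "about 4 hours" figure, using the comment's own numbers:

```python
# Time to exhaust a monthly data cap when running at the advertised speed.
def hours_to_cap(cap_gb: float, speed_mbps: float) -> float:
    return cap_gb * 8e9 / (speed_mbps * 1e6) / 3600

# A 1 TB cap at a sustained 500 Mbps is gone in about 4.4 hours,
# out of the ~720 hours in a month.
print(f"{hours_to_cap(1000, 500):.1f} h")
```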


Or more accurately, to disincentivize excessive use. They don't need a cap to simply charge you more money; they've been doing that just fine with or without caps.


If that was a big issue, why are smaller ISPs (like sonic.com) able to offer 10Gb ethernet at a flat rate with no caps?


That would be a valid argument if they defined "excessive" based on infrastructure limits rather than a demand curve and market research.


All prices are based on “demand curve and market research” so I’m not sure what your point is.


My point from the start has been that ISPs put data caps and overage charges in place to make extra money, not to protect their infrastructure.


Peering agreements at internet exchanges make additional bandwidth very cheap to provide, and infrastructure costs are spread over years and many users, further reducing ongoing costs.

Bandwidth caps not only have a weak fiscal basis but also clash with net neutrality tenets and go against what made the internet successful in the first place.


Bits that go unused have no value. Caps on numbers of bits in a billing cycle don't make sense. While it isn't exactly pure artificial scarcity, it's darn close. A retail ISP already has to have some fair share mechanisms for peak use times. That leaves very very little usefulness in monthly caps.


once the infrastructure is there, does each byte/gigabyte really have an intrinsic cost? isn't it more about the bytes per second and not the bytes total?


Sure, but the argument for caps is that it reduces overall demand and therefore peak consumption, and peak consumption is what the ISPs actually have to pay for.


hmm, spitballing here but if they were really suffering could they not offer a package with lower speeds during peak? (while slowly using some of their profit to re-invest in infrastructure?)


There’s more to running an isp than simply plumbing electricity.


sure, but is your 'profit' to be made in.. charging per byte consumed/sent? it just feels... gougey?

I had to live under an ISP that charged something on the order of $10+/GB after a certain amount and after they were forced to offer unlimited internet it was an actual breath of fresh air just to be able to use the internet and not worry about literally everything in the house, or if I could update my OS, or play this game, or watch this Netflix show, etc.

i'm honestly curious. why not a standard margin on your 'per second' allowances (100/100Mbps, etc) or other 'addon' services?

why should someone with a ring doorbell (or a homekit secure video whatever) be punished over someone who doesn't?


"It's weird that none of the so-called experts agree with me. The problem is clearly with their understanding of the space, not mine."



