
Some of the utilization comparisons are interesting, but the article's claim that $2 trillion was spent on laying fiber seems suspicious.


There's an enormous amount of unused, abandoned fiber. All sorts of fiber was run to last-mile locations across most cities in the US, and a shocking amount effectively got abandoned in the frenzy of mergers and acquisitions. $2 trillion seems like a reasonable estimate.

Giant telecoms bought big regional telecoms, which themselves came about from local telecoms merging with and acquiring other local telecoms. A whole bunch of them were construction companies that rode the wave and poured resources into running dark fiber all over the place. Local energy companies and the like sometimes participated.

There were no standard ways of documenting runs, and it was beneficial to keep things relatively secret: if you could provide fiber capability in a key region while your competition was rolling out DSL and investing lots of money, you could pounce and make them waste resources, and so on. This led to enormous waste and fraud, and we're now at the outer edge of usability for most of the fiber that was laid - 29-30 years after it was run, most of it was never used and never will be.

The '90s and early 2000s were nuts.


For infrastructure, central planning and state-run systems make a lot of sense - this, after all, is how the USA's interstate highway system was built. The important caveat is that system components and necessary tools should be provided by the competitive private sector through transparent bidding processes - e.g., you don't have state-run factories making switches, fiber cable, road graders, steel rebar, etc. There are all kinds of debatable issues, e.g., should system maintenance be contracted out to specialized providers or kept in-house, etc.


I so desperately wish it weren't abandoned. I hate that it's almost 2026 and I still can't get a fiber connection to my apartment in a dense part of San Diego. I've moved several times throughout the years and it has never been an option despite the fact that it always seems to be "in the neighborhood".


That has nothing to do with fiber; it's all about politics and a regulatory environment where nobody is incentivized to act. Basically, the states can't fully regulate internet service and the Federal government only wants to fund buildouts on a pork-barrel basis. Most recently, rural ones.

At the local level, there is generally a cable provider with existing rights of way. To get a fiber provider, there are four possible outcomes: universal service funded by direct subsidy, cherry-picked service (they install where convenient), universal service capitalized by the telco, and "fuck you," where they refuse to operate (e.g., Verizon in urban areas).

The privately capitalized card was played out by cable operators in the '80s (they were the innovators then, and AT&T had just been broken up and was in chaos). They have franchise agreements whose exclusivity was used as loan collateral.

Forget about San Diego; there are neighborhoods in Manhattan, with the highest population density in the country, where Verizon claims it's unprofitable to operate.

I served on a city commission where the mayor and county were very interested in getting our city wired, especially as legacy telco services are on the way out and cable costs are escalating, and will accelerate further as the merger agreement that formed Spectrum expires. The idea was to capitalize the last mile with public funds and create an authority that operated both the urban network and the rural broadband in the county funded by the Federal legislation. With the capital raised through grants and low-cost bonding (public authority bonds are cheap and backed by revenue and other assets), it would have started generating a moderate amount of income in under 10 years.

We had the ability to get the financing in place, but we would have needed legislation passed to get access to rights of way. Utilities have lots of ancient rights and laws that make disruption difficult. The politicians behind it turned over before that could be changed.


The worst part is it'd probably cost less than $100 of fiber and labor to splice something into your building, maybe $200-400 of gear to light it up, and you'd have a 10 Gbps pipe back to some colo. It's more economical to run new fiber in most places these days, even if the local ISP knows exactly where all the old abandoned legacy lines run, because of subsidization and basically scamming. The big companies like Lumen keep their knowledge regionally compartmented, legally shielded, and deliberately obfuscated, because if it became known that existing fiber was already run to a place they claim they can't serve, they couldn't get access to yet more funding for their eternal "service for the underserved" government money grift.

I stumbled on old maps that showed complete fiber coverage of my municipality, paperwork from a company that was acquired, which in turn merged, and was then bought out by one of the big 5 ISPs. When local officials requested information regarding existing fiber, this ISP refused and said any such information was proprietary. They later bid on and won contracts to run new fiber (parallel to existing lines which they owned, which still had more than a decade of service life left in them at that point).

I estimated that only around 10-15% of the funding went toward actual labor and materials; the remainder was pure profit. The local government considered it a major victory, money well spent.


US GDP from 1995-2000 (inclusive) totaled about $52T. So that assertion would mean that about 3.8% of the US's economic activity ($2T / $52T) was laying fiber. That seems like a lot, but in my ignorance it doesn't sound totally impossible.


If there is really amazing stuff happening with this technology, how did we have two recent major outages that were caused by embarrassing problems? I would guess that at least in the Cloudflare instance some of the responsible code was AI generated.


> I would guess that at least in the Cloudflare instance some of the responsible code was AI generated

Your whole point isn't supported by anything but ... a guess?

If given the chance to work with an AI that hallucinates sometimes, or a human who makes logical leaps like this...

I think I know what I'd pick.

Seriously, just what even? "I can imagine a scenario where AI was involved, therefore I will treat my imagination as evidence."


Microsoft says they're generating 30% of their code now, and there have clearly been a lot of stability issues with Windows 11 recently that they've publicly acknowledged. It's not hard to tell a story that involves layoffs, increased pressure to ship more code, AI tools, and software quality issues. You can make subtle jabs about your peers as much as you want, but that isn't going to change public perception when you ship garbage.


The whole point is that the outages happened, not that AI code caused them. If AI is so useful/amazing, then these outages should be less common, not more. It's obviously not rock-solid evidence. Yes, AI could be useful and speed up or even improve a codebase, but there isn't any evidence that it's actually improving anything; the only real studies point to imagined productivity improvements.


Good thing that before "AI," when humans coded, we had many decades of no outages... phew.


I think he could have been instrumental to the iPhone (not saying he was or wasn't) and still have whatever he tries next be a complete flop. The ability to be successful is contextual, and great artists can produce mediocre art.


> The ability to be successful is contextual

Excellent point. I think most great creative work is due to a uniquely 'right' combination of people, problem, experience, and environment coming together at an opportune moment.


Yes, he could be a one-hit-wonder artist.



Is there a public, generic measure of IT outages with historical data? Severe outages seem to be more common lately, but I don't have any data to back that up.


value doesn't have to be monetary


Yes, but it's often implied to be. If you want it to be comparable, it usually defaults to monetary, since we have few other agreed-upon ways to weigh one output of value against another. If someone says "we can't afford that," what they're actually talking about is monetary value, and thus mostly rich people's preferences; but it's not as if that's easy to ignore either.


Interesting that there's only the M5 on the MacBook Pro. I thought the M4 and M4 Pro/Max launched at the same time on the MacBook Pro.


That is an interesting use case; I hadn't thought about a setup like this with a local Redis cache before. Are the typical advantages of using a db over a filesystem the reason to use Redis instead of just reading from memory-mapped files?


> Are the typical advantages of using a db over a filesystem the reason to use Redis instead of just reading from memory-mapped files?

Eh - while surely not everyone has the benefit of doing so, I'm running Laravel, and using Redis is just _really_ simple and easy. To do something via memory-mapped files I'd have to implement quite a bit of stuff I don't want/need to (locking, serialization, TTL/expiration, etc).

Redis just works. Disable persistence, choose the eviction policy that fits the use, configure it for a unix socket connection, and you're _flying_.
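
For reference, a cache-only instance along those lines might look something like this; a minimal redis.conf sketch, where the socket path and memory cap are placeholders:

    # Minimal cache-only redis.conf sketch (paths/limits are placeholders)
    # No persistence: it's a cache, losing it on restart is fine.
    save ""
    appendonly no
    # Cap memory and evict least-recently-used keys when full.
    maxmemory 2gb
    maxmemory-policy allkeys-lru
    # Skip TCP entirely; local workers talk over the unix socket.
    port 0
    unixsocket /var/run/redis/redis.sock
    unixsocketperm 770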

My use case is generally data ingest of some sort, where the processing workers (in my largest projects, 50-80 concurrent processes chewing through tasks from a queue, also backed by Redis) are likely to end up running the same queries against the database (MySQL) to get 'parent' records (i.e., the user associated with an object by username, a post by slug, etc.), and there's no way to know if there will be multiples (i.e., if we're processing 100k objects, there might be 1 from UserA or there might be 5000 by UserA, where each one being processed will need the object/record of UserA). In this project in particular there are ~40 million of these 'user' records and hundreds of millions of related objects, so I can't store/cache _all_ users locally, but I sure would benefit from not querying for the same record 5000 times in a 10-second period.

For the most part, when caching these records over the network, the performance benefits were negligible (depending on the table) compared to just querying MySQL for them. They are just `select where id/slug =` queries. But when you lose that little bit of network latency and you can make _dozens_ of these calls to the cache in the time it would take to make a single networked call... it adds up real quick.

PHP has direct "shared memory" support, but again, it would require handling/implementing a bunch of stuff I just don't want to be responsible for - especially when it's so easy and performant to lean on Redis over a unix socket. If I needed to go faster than this I'd find another language and likely do something direct-to-memory style.
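
To make the pattern concrete, here's a rough sketch of the hot path, using the raw phpredis client rather than the Laravel facade just to keep it self-contained. The userBySlug helper, key scheme, table, and 60-second TTL are all made up for illustration:

    <?php
    // Hypothetical example: memoize a 'parent' record lookup in a local
    // Redis cache so 50-80 concurrent workers don't hammer MySQL for the
    // same row. Assumes the phpredis extension and Redis on a unix socket.
    $redis = new Redis();
    $redis->connect('/var/run/redis/redis.sock');

    function userBySlug(Redis $redis, PDO $db, string $slug): ?array
    {
        $key = "user:slug:{$slug}";

        // Cache hit: no network hop, no MySQL round trip.
        $cached = $redis->get($key);
        if ($cached !== false) {
            return unserialize($cached);
        }

        // Cache miss: one indexed SELECT, then keep the row around briefly.
        $stmt = $db->prepare('SELECT * FROM users WHERE slug = ? LIMIT 1');
        $stmt->execute([$slug]);
        $user = $stmt->fetch(PDO::FETCH_ASSOC) ?: null;

        if ($user !== null) {
            // Short TTL: it only needs to outlive the current ingest burst;
            // the eviction policy reclaims it anyway under memory pressure.
            $redis->setEx($key, 60, serialize($user));
        }

        return $user;
    }

The TTL and eviction policy do all the invalidation work, so there's nothing to clean up when an ingest run finishes.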


Thanks for the write-up. Seems like a cool pattern I hadn't heard of before.


Strange unit of measurement. Who would find that more useful than expected compute, or even just the number of chips?


I wouldn't be surprised if power consumption is a starting point due to things like permitting and initial load planning.

I imagine this as a subtractive process starting with the maximum energy window.


It's a very useful reference point actually, because once you hit 1.21 GW the AI model begins to learn at a geometric rate and we finally get to real AGI. Last I heard, this was rumored as a prediction for AI 2027, so we're almost there already.


Is this a crafty reference to Back to the Future? If so I applaud you.


1.21GW is an absurd level of precision for this kind of prediction.


It's from the movie "Back to the Future"


Only came here searching for 1.21 GW.


If a card costs x money, and operating it every year/whatever costs y money in electricity, and y >> x, it makes sense to mostly talk about the amount of electricity you are burning.

Because if some card with more FLOPS becomes available, and the market will buy all your FLOPS regardless, you just swap it in at constant y, with no appreciable change in how much you're spending to operate.

(I have no idea if y is actually much larger than x)


A point of reference is that the recently announced OpenAI-Oracle deal mentioned 4.5 GW. So this deal is more than twice as big.


At large scales, a lot of it is measured in power instead of compute, as power is the limiting factor.


For a while now, it's been increasingly clear that the current AI boom's growth curve rapidly hits the limits of the existing electricity supply.

Therefore, they are listing in terms of the critical limit: power.

Personally, I expect this to blow up first in the faces of normal people who find they can no longer keep their phones charged or their apartments lit at night, and only then will the current AI investment bubble pop.


Probably because you can't reliably predict how much compute this will lead to. Power generation is probably the limiting factor in an intelligence explosion.


That, and compute always goes up.


I wouldn't say GPT-5 is any better than the previous ChatGPT. I know it's a silly example, but I was trying to trip it up with 8.6 - 8.11 and it got it right (.49), but then it said the opposite, that 8.6 - 8.12 was -.21.

I just don't see that much of a difference coding with either Claude 4 or Gemini 2.5 Pro. They're all fine, but the difference isn't changing anything in what I use them for. Maybe people are having more success with the agent stuff, but in my mind it's not that different from just forking a GitHub repo that already does what you're "building" with the agent.

