
Why is it meaningless?



Because it lost meaning somewhere between a micron and 100nm.

From roughly the 1960s through the end of the 1990s, the number meant printed gate lengths or half-pitch (which were identical).

At some point, companies started using "equivalences" which became increasingly detached from reality. If my 50nm node had better performance than your 30nm node because I have FinFETs or SOI or whatever, shouldn't I call mine 30nm? But at that point, if you have something less than 30nm somewhere in your process, shouldn't you call it 20nm? And so the numbers detached from reality.

So now when you see a 1.6nm process, think "1.6nm class" rather than any specific feature size, and furthermore, understand that companies invent their class-number exaggerations differently. For example, Intel 10nm roughly corresponds to Samsung / TSMC 7nm (and all would probably be around 15-30nm before equivalences).

That should give you enough for a web search if you want all the dirty details.


The same thing happened with chip frequency around the end of the 1990s.

Chip frequencies stagnated (end of Dennard scaling, if I remember correctly), giving the impression that single-threaded performance had stagnated, but since then chip makers have used increasing data and instruction parallelism to squeeze even more apparent single-threaded performance out of chips. A 3GHz chip today is usually way faster on average code than a 3GHz chip 15 years ago. They also started expanding into multiple cores and adding more cache, of course.

For fab processes we should just switch to transistor density. That still wouldn't capture everything (e.g. power efficiency) but would be a lot better than not-actually-nanometers.

For performance we really don't have a good single metric anymore since all these performance hacks mean different levels of gain on different code.


"For fab processes we should just switch to transistor density."

Indeed

May I propose transistor density divided by the (ergodic) average transistor switching power?
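A back-of-the-envelope sketch of that proposed figure of merit, with entirely made-up numbers (nothing below corresponds to any real process):

```python
# Hypothetical figure of merit: transistor density (MTr/mm^2)
# divided by average transistor switching power (nW).
# All values are invented for illustration, not real process data.

def figure_of_merit(density_mtr_mm2: float, avg_switch_power_nw: float) -> float:
    """Density per unit of average transistor switching power."""
    return density_mtr_mm2 / avg_switch_power_nw

# Two fictional nodes: one denser, one more power-efficient.
node_a = figure_of_merit(density_mtr_mm2=140.0, avg_switch_power_nw=2.0)
node_b = figure_of_merit(density_mtr_mm2=100.0, avg_switch_power_nw=1.0)

# Under this metric, the less dense but more efficient node comes out ahead.
assert node_b > node_a
```

The nice property is that it rewards both shrinking and efficiency, so a node can't game the number with density alone.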


> The same thing happened with chip frequency around the end of the 1990s.

Not really the same thing, though. If I buy a chip that's advertised as 3.4GHz, it'll run at 3.4GHz. Maybe not all the time, but it'll achieve that speed. If I buy a chip advertised as being produced with a 3nm process, there's nothing even resembling 3nm on there.


> For fab processes we should just switch to transistor density.

The marketing departments of silicon companies are saving that as their ultimate weapon.


No, this is the opposite phenomenon.

Chip frequencies didn't detach from reality. However, as you point out, performance improved regardless, because frequency wasn't the only thing that could be improved.

Another poster summarized what happened with transistor 'density'. New techniques were invented that 'effectively' improved performance, and the industry apparently wanted to continue to use this one number as the basic benchmark.


It’s not meaningless, it’s near meaningless. That’s what the nm stands for, right?


Because it's not measuring anything, except the tech generation. It conveys about as much information as "iPhone 14".


It's at least more easily understandable (lower is newer) than the average tech product naming scheme, especially those by Microsoft.


The problem is it sounds like something any engineer can understand without domain knowledge, but interpreting it that way is completely wrong. The worst kind of naming. Not just IKEA-style random names (and I say that as a Swede), but reusing a standard while not keeping to what the standard is normally used for, and what it previously meant even in this domain.

N1.6 is a much better name for process nodes. Or even TSMC16.


Normally? That "standard" hasn't been used "normally" for 20 years now. Arguably the new way is normal in every sense of the word


Yes. By standard I mean that nanometers ("nm") are used to describe physical distances. That's its normal use, understood by all engineers. That's also how it was born into the semiconductor domain. In that domain, it should have either stayed a description of physical distances, or been replaced.

They could invent codenames, or start using a better (physical) metric, if this one is no longer relevant.


Do you think they'll be advertising 0.9nm in the future or switch it up at some point?


Intel has already switched to Ångström for the most part


I see MTr/mm2 a lot, it feels like a better measure, and it feels like the time is ripe for marketing to make a jump. Bigger is better, "mega" is cool, the numbers are hundreds to thousands in the foreseeable future so no chance of confusion with nm. What's not to love? But hey, I don't make these decisions, and I see no indication that anyone in a position to make these decisions has actually made them. Shrug.


While it may be divorced from any particular measurement, doesn't it give an approximation of increased density?


It stopped being an actual measure of size long ago. The nm isn't nanometers of anything; it's just a vague marketing label attempting some sort of measure of generations.


Process/node "size" has, for some time now, been divorced from any actual physical feature on or of the chip/transistor itself. See the second paragraph of https://en.wikipedia.org/wiki/2_nm_process for additional details.



