
> I get the impression that Intel overextended on their 10nm process, in that they were perhaps a bit more ambitious than other manufacturers and it came back to bite them when there were scaling problems.

I’m no microelectronics expert, but I wonder if we are hitting limits in clock speed scaling with regard to feature size - i.e. shrinking past a certain feature size, clock speeds actually have to drop for the chip to be stable.

Intel’s priority is clock speed first and foremost because of what they produce - desktop and server CPUs. A new process is pointless for them if they can’t get at least the same clock speeds out of it as out of their old process.

TSMC caters to mobile CPU and GPU production - those will never boost to 5 GHz like desktop CPUs; the former for power efficiency (and heat) reasons, and the latter tends to go for more “cores” because it focuses on parallelizable workloads.
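
A rough way to see why mobile parts stay well under desktop clocks: dynamic switching power scales roughly as alpha * C * V^2 * f, and pushing frequency up also means raising voltage, so power grows much faster than linearly with clock. A minimal back-of-the-envelope sketch (the voltages, capacitance and activity factor are illustrative guesses, not figures for any real chip):

    # Back-of-the-envelope dynamic power scaling: P ~ alpha * C * V^2 * f.
    # All numbers below are illustrative guesses, not measured figures.

    def dynamic_power(alpha, c_farads, v_volts, f_hz):
        """Switching power of CMOS logic: activity * capacitance * V^2 * frequency."""
        return alpha * c_farads * v_volts ** 2 * f_hz

    # A mobile-style core at 3 GHz / 0.75 V vs. a desktop core pushed to 5 GHz / 1.2 V,
    # assuming the same activity factor and switched capacitance.
    mobile = dynamic_power(0.1, 1e-9, 0.75, 3e9)
    desktop = dynamic_power(0.1, 1e-9, 1.20, 5e9)
    print(f"desktop draws ~{desktop / mobile:.1f}x the switching power")  # ~4.3x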



As I understand it, it's not chip speed. It's chip voltage. Everything is a conductor if the voltage is high enough, and the closer the traces get, the less resistance the insulation provides. The problem is that at the temperatures we run computers at, the conductor traces need a fair bit of voltage to push the current through the entire chip.
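
One hedged way to put numbers on the "less resistance" part: gate-oxide leakage by direct tunneling falls off roughly exponentially with insulator thickness, so even a modest thinning blows up the leakage. A toy sketch (the decay constant and thicknesses are placeholders, not process data):

    # Toy model of gate-oxide leakage: direct-tunneling current falls off roughly
    # exponentially with insulator thickness, J ~ J0 * exp(-t_ox / t_decay).
    # The decay constant and thicknesses are placeholders, not process data.
    import math

    def leakage_growth(t_thick_nm, t_thin_nm, t_decay_nm=0.2):
        """Factor by which leakage grows when the insulator thins from t_thick to t_thin."""
        return math.exp((t_thick_nm - t_thin_nm) / t_decay_nm)

    # Thinning the gate oxide from 2.0 nm to 1.5 nm in this toy model:
    print(f"leakage grows ~{leakage_growth(2.0, 1.5):.0f}x")  # ~12x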


The ratio of conductivities of insulators and conductors stays the same.

It's more that making many very long conductors with very thin insulators between them becomes problematic. That was the case at any process size, but now we are pushing the limits as far as possible to try to make bigger chips.
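
A minimal sketch of that geometric point (the conductivity and dimensions are made-up illustrative numbers): if the conductor height and the insulator gap shrink in proportion, leakage per unit length stays about the same, and what actually grows on a bigger chip is the total coupled wire length.

    # Rough geometric scaling for wire-to-wire leakage: the parasitic conductance
    # between two parallel traces grows with their coupled length and shrinks with
    # the insulator gap, G ~ sigma_ins * (length * height) / gap.
    # sigma_ins and the dimensions are made-up illustrative numbers.

    def parasitic_conductance(sigma_ins, length_um, height_um, gap_um):
        """Leakage conductance of the insulator slab between two parallel wires."""
        return sigma_ins * (length_um * height_um) / gap_um

    # Shrink height and gap together (same per-length ratio) but double the wire length:
    old = parasitic_conductance(1e-15, length_um=1_000, height_um=0.10, gap_um=0.10)
    new = parasitic_conductance(1e-15, length_um=2_000, height_um=0.05, gap_um=0.05)
    print(f"leakage per wire pair grows ~{new / old:.0f}x")  # ~2x, driven by length alone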


Not using silicon they won't, but other materials might make that possible. I doubt TSMC will sit on the sidelines as those become more mainstream.



