Hacker News | markhahn's comments

interesting that the stock market (a subset of the prediction market now, right?) would even care, or would take this as a negative.

"sorry guys, I did something token-bad a while ago that got you more money."

that's the sort of mea culpa I'd expect to get rewarded these days...


It's because they're now getting you less money, since they had to stop doing the thing.

well, physics does work that way, depending on what you mean by performance. (in the sense that power is normally part of performance when we're talking about chips).

you could certainly use a larger process and clone chips at an area and power penalty. but area is the main factor in yield, and talking about power is really talking about "what's the highest clock rate you can still cool".

so: a clone would work in physics, but it would be slow and hot and expensive (low yield). I think issues like propagation delay would be second- or third-order (the whole point of GPUs is to be latency-tolerant, after all).
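The area-yield tradeoff above can be sketched with the classic Poisson defect-density model. The defect density and die sizes below are hypothetical, purely for illustration, not any foundry's actual data:

```python
import math

def poisson_yield(area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson die-yield model: Y = exp(-A * D0)."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

# A large GPU-class die on a mature node vs. the same design ported to an
# older, larger process (roughly 2x the area at one full node back).
d0 = 0.1  # defects per cm^2 -- hypothetical
small = poisson_yield(600, d0)   # ~600 mm^2 die
large = poisson_yield(1200, d0)  # same design, ~2x area on the older node
print(f"yield at 600 mm^2: {small:.1%}")   # ~54.9%
print(f"yield at 1200 mm^2: {large:.1%}")  # ~30.1%
```

Doubling the die area roughly squares the yield term, which is why "just use a bigger, older process" gets expensive fast.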


Copying CPUs isn't really a thing: they are too complex.

If you could steal all the designs at TSMC, and you had exactly the process that TSMC uses, you could definitely make counterfeits. If you didn't have TSMC's specific process, you could adapt the designs (to Intel or Samsung) with serious but not epic effort. If you couldn't make the processes similar (i.e., you want to fab at SMIC), you are basically back to RTL, and can look forward to the most expensive and time-consuming part of chip design.

This is nothing like copying a trivial, non-complex item like a car. Copying a modern jet engine is starting to get close (for instance, single-crystal blades), but even they are much simpler. I mention the latter because the largest, most resourced countries in the world have tried and are still trying.


They have done a bit of this. SMIC is basically operating off of a cloned TSMC N7 node that they have since iterated on to get to a 5nm class node.

But it's still such a complex sort of beast.

Even if you had 'AI tools' guessing at component blocks, you would still need some way to evaluate the result.

And that's assuming NVDA hasn't pulled a Masatoshi Shima type play on their designs (i.e. complex traps that would require lots of analysis to determine whether they are real or fake).

I'm not sure how much of a speedup even modern tooling/workflows could deliver reliably.

Even then, the elephant in the room is that China is working on its own AI accelerators, so while there can be benefit from -studying- the existing designs, I think they do not want to clone regardless.


Oh, absolutely. Straight-up Soviet-style cloning of masks makes no sense for a multitude of reasons. In addition to what you've said, China isn't banned from N7-class Nvidia architectures, so it could just buy those on the open market.

This (variable rewards -> gambling, illusion of control) is really important.

I'm not an expert in the psych/neuro literature on addiction, but I suspect latency isn't that critical. Or is that just because it's mostly things like fruit machines that have been studied? Gambling (poker, horse racing) is quite long-latency. OTOH, scrolling is closer to 400ms, and that's certainly the modern addiction...
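For concreteness, the "variable rewards" pattern is what behaviorists call a variable-ratio reinforcement schedule: each action pays off with some fixed probability, so the timing of rewards is unpredictable. A minimal sketch, with hypothetical parameters:

```python
import random

def variable_ratio_rewards(pulls: int, mean_ratio: int, seed: int = 0) -> list[bool]:
    """Simulate a variable-ratio schedule: each action pays off with
    probability 1/mean_ratio, so reward timing is unpredictable -- the
    pattern slot machines and infinite scrolls are said to share."""
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_ratio for _ in range(pulls)]

# 1000 actions on a VR-10 schedule: ~1 reward per 10 actions, on average,
# but with no predictable pattern from the subject's point of view.
hits = variable_ratio_rewards(1000, 10)
print(f"rewarded {sum(hits)} of {len(hits)} actions")
```

The latency question above is then whether the gap between action and payoff (milliseconds for a scroll, minutes for a race) changes how strongly this schedule hooks people.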


re right vs left: the usual metaphor here is red-brown alliance.

little unclear what drove the E-7 thing - my impression is that accelerationists on the political side wanted to push for space-based defense, and drove the attempt to cancel.

it is a reasonable point that any airborne radar is an attractive target for long-range missiles, and that if your radar is in space, a different, less available class of missile is needed to attack it (and also that, so far, treating space as contested is taboo).

the recent loss of a THAAD radar should also make people rethink how to build an emitter that survives the first round of missiles.


Thanks to you both for the interesting comments.

From a combination of curiosity and a long-standing ANZAC tradition of ribbing allies, I have to ask ... did these accelerationists push for space-based minesweepers as well?

Not sure I've seen a less prepared, more plan-absent, voluntary entry into combat.

No drama, I'm sure the current circumstances don't sit well with many.


it's just a machine-code emulator that happens to run on a gpu. it's more of a flying pig than a new porcine airliner.


General Parallel Units


people say this a lot, but with little technical justification.

gpus have had cache for a long time. cpus have had simd for a long time.

it's not even true that the cpu memory interface is somehow optimized for latency - it has bursts, for instance, and large non-sequential and out-of-page latencies, and it has gotten wider over time.

mostly people are just comparing the wrong things. if you want to compare a mid-to-high-end discrete gpu with a cpu, you can't use a desktop cpu. instead use a ~100-core server chip that also has a 12x64b memory interface: similar chip area, power dissipation, and cost.

not the same, of course, but recognizably similar.

none of the fundamental techniques or architecture differ. just that cpus normally try to optimize for legacy code, but gpus have never done much ISA-level back-compatibility.
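As a rough sanity check on the "similar class of chip" comparison, peak DRAM bandwidth is just channels x bus width x transfer rate. The parts and rates below are illustrative assumptions, not measured figures for any specific product:

```python
def peak_bandwidth_gb_s(channels: int, bus_bits: int, mt_per_s: float) -> float:
    """Peak DRAM bandwidth: channels * bus width (bytes) * transfers/sec."""
    return channels * (bus_bits / 8) * mt_per_s * 1e6 / 1e9

# Hypothetical ~100-core server CPU: 12 x 64-bit DDR5-6400 channels.
cpu_bw = peak_bandwidth_gb_s(12, 64, 6400)
# Hypothetical mid-to-high discrete GPU: one 384-bit GDDR bus at 20 GT/s.
gpu_bw = peak_bandwidth_gb_s(1, 384, 20000)
print(f"server CPU: {cpu_bw:.0f} GB/s")  # 614 GB/s
print(f"GPU:        {gpu_bw:.0f} GB/s")  # 960 GB/s
```

Same order of magnitude, which is the point: the big CPU/GPU bandwidth gap people cite mostly comes from comparing a desktop CPU's two channels against a GPU's full-width bus.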


Maybe I'm not getting it. Doesn't the problem start with ever deleting a passkey? That is: how do you ever know you don't need it anymore?

Also, what is the alternative? Just a password that you store in the vault? Seems like deleting those gets you back to the same place (with all the disadvantages of a plain password).

