jxjnskkzxxhx's comments

You know who also told everyone what he was going to do? And about whom everyone said "nah he's not really that crazy, he's just pretending for his supporters". Hitler. Also just stating facts.


One thing I like about Casios, especially the cheap ones, is how unpretentious they are.


I went through a couple of F91Ws and the bracelet kept breaking. I since got an A158WA-1. Check it out.


Done. Thank you!


The most complicated wristwatch: it has all the things you never knew you didn't want.

I'll keep my Casio thank you.


I've used Jax quite a bit and it's so much better than tf/pytorch.

Now for the life of me, I still haven't been able to understand what a TPU is. Is it Google's marketing term for a GPU? Or is it something different entirely?


There's basically a difference in philosophy. GPU chips have a bunch of cores, each of which is semi-capable, whereas TPU chips have (effectively) one enormous core.

So GPUs have ~120 small systolic arrays, one per SM (aka a tensor core), plus passable off-chip bandwidth (aka 16 lanes of PCIe).

Whereas TPUs have one honking big systolic array, plus large amounts of off-chip bandwidth.

This roughly translates to GPUs being better if you're doing a bunch of different small-ish things in parallel, but TPUs are better if you're doing lots of large matrix multiplies.
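The two workload shapes can be sketched in numpy. This is purely illustrative (the array sizes are made up, and numpy runs on the CPU either way); the point is just the shape of the work: many independent small matmuls versus one big one.

```python
import numpy as np

rng = np.random.default_rng(0)

# "GPU-friendly" shape: many independent small-ish matmuls,
# one per SM / small systolic array.
small = [rng.standard_normal((128, 128)) for _ in range(120)]
gpu_style = [a @ a for a in small]

# "TPU-friendly" shape: one large matmul with roughly the same
# total FLOP count (120 * 128^3 ~= 631^3), feeding one big array.
big = rng.standard_normal((631, 631))
tpu_style = big @ big
```

Either chip can run either shape; the claim in the comment is just about which shape keeps the hardware busy.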


Way back when, most of a GPU was for graphics. Google decided to design a completely new chip, which focused on the operations for neural networks (mainly vectorized matmul). This is the TPU.

It's not a GPU, as there is no graphics hardware there anymore. Just memory and very efficient cores, capable of doing massively parallel matmuls on the memory. The instruction set is tiny, basically only capable of doing transformer operations fast.
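To see why a matmul-focused instruction set covers transformers, note that the core of attention is just two matmuls with a softmax in between. A rough numpy sketch (the function name and shapes are mine, not from any particular library):

```python
import numpy as np

def attention(q, k, v):
    """Single-head scaled dot-product attention: two matmuls + softmax."""
    scores = q @ k.T / np.sqrt(k.shape[-1])          # matmul 1
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                    # softmax (elementwise)
    return w @ v                                     # matmul 2
```

The feed-forward blocks are matmuls too, so a chip that only does vectorized matmuls and a handful of elementwise ops fast covers most of the FLOPs.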

Today, I'm not sure how much graphics an A100 GPU still can do. But I guess the answer is "too much"?


Less and less with each generation. The A100 has 160 ROPs, a 5090 has 176, and the H100 and GB100 have just 24.


TPUs (short for Tensor Processing Units) are Google's custom AI accelerator hardware, completely separate from GPUs. I remember they introduced them around 2015, but I imagine they're really starting to pay off with Gemini.

https://en.wikipedia.org/wiki/Tensor_Processing_Unit


Believe it or not, I'm also familiar with Wikipedia. It says they're optimized for low precision and high throughput. To me this sounds like a GPU with a specific optimization.


Perhaps this chapter can help? https://jax-ml.github.io/scaling-book/tpus/

It's a chip (and associated hardware) that can do linear algebra operations really fast. XLA and TPUs were co-designed, so as long as what you are doing is expressible in XLA's HLO language (https://openxla.org/xla/operation_semantics), the TPU can run it, and in many cases run it very efficiently. TPUs have different scaling properties than GPUs (think sparser but much larger communication), no graphics hardware inside them (no shader hardware, no raytracing hardware, etc), and a different control flow regime ("single-threaded" with very-wide SIMD primitives, as opposed to massively-multithreaded GPUs).


Thank you for the answer! You see, up until now I had never appreciated that a GPU does more than matmuls... And that first reference, what a find :-)

Edit: And btw, another question that I had had before was what's the difference between a tensor core and a GPU, and based on your answer, my speculative answer to that would be that the tensor core is the part inside the GPU that actually does the matmuls.
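That speculative answer is roughly right: a tensor core (like a TPU's systolic array) computes a matmul one fixed-size tile at a time, accumulating partial products. A toy numpy sketch of that tiling (the 4x4 tile size is illustrative; real hardware tiles are fixed by the silicon, e.g. 128x128 on TPUs):

```python
import numpy as np

def tiled_matmul(a, b, tile=4):
    """Compute a @ b by accumulating fixed-size tile products,
    the way a systolic array / tensor core structures the work.
    Assumes square inputs with size divisible by `tile`."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                c[i:i+tile, j:j+tile] += (
                    a[i:i+tile, k:k+tile] @ b[k:k+tile, j:j+tile]
                )
    return c
```

The rest of the GPU (schedulers, shader ALUs, load/store units) is everything around those tiles.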


You asked a question, people tried to help, and you lashed out at them in a way that makes you look quite bad.


Did you also read just after that "without hardware for rasterisation/texture mapping"? Does that sound like a _G_PU?


I mean, yes. But GPUs also have a specific optimization, for graphics. This is a different optimization.


Wake up, he got impeached twice.


During this term.


The media is largely at fault here, pretending that both sides are equal.


The people are even more at fault, being fine with this.


There is some kind of weird issue here, not sure what it's called, where if the media were to blow this up, everyone would say they're overreacting.

So it becomes a "seat at the table is better than no seat" thing where they won't ask Trump these questions.


It's a standard fear of repercussions: no one wants to speak up today, so it all gets progressively worse and harder to speak up tomorrow.


It’s not the only possible outcome. Many countries have successfully protested against less controversial government practices and/or under worse conditions than there are now in the US.


It's funny that "it's a computer but I'll tell people it's a human" and "it's a human but I'll tell people it's a computer" are both common ideas.


I predict he's not gonna be on Mars.


Nothing means anything then.

