
The size of the cached internal state of the network processing the book is much larger than the size of the book. The resource that is preserved with caching is the compute required to recreate that state.



Sure, but a full forward pass over the book would surely require more compute than simply loading and setting the hidden state?

The latter doesn't require any matrix operations; it's just setting some values.


> it's just setting some values

But it may very well be slower than just recomputing it, at least for ordinary MHA and even GQA.

So you need either some model-architecture voodoo that significantly reduces KV cache size (while keeping roughly the same compute cost), or some really careful implementation that moves the KV cache of upcoming requests to the devices in the background [0].

[0] My back-of-envelope calculation shows that even then it still doesn't make sense for, say, Llama 3 70B on H100s. Time to stare at the TPU spec harder and try to make sense of it, I guess.
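Roughly, the comparison is between recomputing the prefill for the cached context and restoring its KV cache over the host link. Here is a sketch of that arithmetic; every number (70B-class shapes, H100-ish throughput, PCIe-Gen5-ish bandwidth, 40% utilization) is an assumption for illustration, not a measurement, and the conclusion flips easily as you change them:

    # Back-of-envelope: recompute prefill vs. restore KV cache from host RAM.
    # All numbers below are illustrative assumptions, not measurements.

    n_tokens   = 8192          # cached context length
    n_params   = 70e9          # model parameters (70B-class)
    n_layers   = 80
    n_kv_heads = 8             # GQA; ordinary MHA would be ~64 heads, so ~8x more cache
    head_dim   = 128
    bytes_per  = 2             # fp16/bf16 cache

    # tokens * layers * kv_heads * head_dim * 2 (K and V) * bytes per value
    kv_bytes = n_tokens * n_layers * n_kv_heads * head_dim * 2 * bytes_per

    # Option A: recompute the prefill (~2 FLOPs per parameter per token)
    flops         = 2 * n_params * n_tokens
    cluster_flops = 8 * 990e12 * 0.4     # 8x H100-ish peak at ~40% utilization (assumed)
    recompute_s   = flops / cluster_flops

    # Option B: ship the cache over the host link (often shared and contended in practice)
    link_bytes_per_s = 50e9              # ~PCIe Gen5 x16 effective (assumed)
    transfer_s       = kv_bytes / link_bytes_per_s

    print(f"KV cache {kv_bytes / 2**30:.2f} GiB; "
          f"recompute ~{recompute_s * 1e3:.0f} ms vs restore ~{transfer_s * 1e3:.0f} ms")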


It depends on how large the input prompt (previous context) is. Also, if you can keep the cache on the GPU with an LRU mechanism, it's very efficient for certain workloads.

You can also design an API optimized for batch workloads (say, the same core prompt with different data for instruct-style reasoning); that can result in large savings in those scenarios.
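A minimal sketch of that kind of prefix cache, keyed by the shared prompt prefix with LRU eviction. The names and the kv_state placeholder are hypothetical, not any particular serving stack's API:

    from collections import OrderedDict

    class PrefixKVCache:
        """LRU cache mapping a prompt prefix to its prefilled KV state."""

        def __init__(self, max_entries: int):
            self.max_entries = max_entries
            self._cache = OrderedDict()

        def get(self, prefix: str):
            kv_state = self._cache.get(prefix)
            if kv_state is not None:
                self._cache.move_to_end(prefix)      # mark as most recently used
            return kv_state

        def put(self, prefix: str, kv_state) -> None:
            self._cache[prefix] = kv_state
            self._cache.move_to_end(prefix)
            if len(self._cache) > self.max_entries:
                self._cache.popitem(last=False)      # evict least recently used

    # Batch workload: one shared instruction prefix, many different payloads.
    # The prefix is prefilled once; each request only pays for its own suffix.
    cache = PrefixKVCache(max_entries=4)
    shared = "You are a contract analyst. Extract the parties and dates.\n"
    if cache.get(shared) is None:
        cache.put(shared, "<KV tensors from prefilling the shared prompt>")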


If you can pipeline upcoming requests and tie state to a specific request, doesn't that allow you to change how you design physical memory? (at least for inference)

Stupid question, but why wouldn't {extremely large slow-write, fast-read memory} + {smaller, very fast-write memory} be a feasible hardware architecture?

If you know many, many cycles ahead what you'll need to have loaded at a specific time.

Or hell, maybe it's time to go back to memory bank switching.


The throughput of the PCIe link between the CPU and GPU is far less than the aggregate throughput of the internal interconnects between neighbouring tensor cores.

Matrix operations might flow a lot of data around — but that data flow is akin to a bunch of individual people travelling along the individual residential streets they live on. There's a lot of movement there, but also a lot of capacity for movement, because there's no bottleneck of everyone needing to go to the same place or come from the same place.

Persisting the data out of the GPU and then loading it back in, is more like all those people commuting to work and then going back home. Big fan-in onto the PCIe "highway" over to the CPU and into RAM; then big fan-out back. Traffic jams for miles.

In the time it takes to restore a 1GB state snapshot from RAM into VRAM, you can probably chew through the equivalent of 1TB or more of intermediate matrix states.
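For a rough sense of scale, with assumed round numbers (roughly PCIe Gen5 x16 on the host side versus H100-class HBM on the device side):

    pcie_bw = 50e9       # bytes/s over the host link (PCIe Gen5 x16, effective, assumed)
    hbm_bw  = 3.35e12    # bytes/s of HBM3 bandwidth on an H100-class GPU (assumed)

    snapshot  = 1e9                   # the 1 GB state snapshot above
    restore_s = snapshot / pcie_bw    # ~20 ms just to cross the host link

    # Intermediates that stay in SRAM/registers never even touch HBM,
    # so the true on-chip figure is far higher than this.
    print(f"restore ~{restore_s * 1e3:.0f} ms; HBM alone streams "
          f"~{restore_s * hbm_bw / 1e9:.0f} GB in that window")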


I don’t know of any public details on how they implement Context Caching, but that is presumably exactly what they are doing. Just caching the text would be a minimal savings.


"some" is doing a lot of lifting. # of tokens * # of layers * head dimension * # of heads * 2 (K+V vectors) * 4-16bits (depending on quantization)


But isn't the information somehow cached when you start a new chat and build up context with, say, GPT-4? If the cache were as large as you say, so many chat sessions in parallel wouldn't be possible.


That's not my understanding. We can't be sure how OpenAI does things internally, but adding messages to a conversation via the API just means rerunning the whole history through the model as the prompt every time.
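In other words, the pattern looks like the sketch below: the client holds the transcript and resends all of it on every call. The endpoint and payload shape are generic placeholders, not any particular vendor's API:

    import json
    import urllib.request

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        req = urllib.request.Request(
            "https://example.com/v1/chat",                    # placeholder URL
            data=json.dumps({"messages": history}).encode(),  # the entire history, every time
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)["reply"]                  # hypothetical response shape
        history.append({"role": "assistant", "content": reply})
        return reply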


>The size of the cached internal state of the network processing the book is much larger than the size of the book

It's funny that people sometimes think of LLMs as compression engines, when a lot of information gets lost in each direction (through the neural net).


Why is that funny? Sometimes compression is lossy, like JPEG and H.265.


And the internal state of a JPEG decoder can be an order of magnitude larger than the JPEG file (especially for progressive JPEG, which can't stream its output).


I don't lose anything with gzip or rar.


You can make any lossy compression scheme into a lossless scheme by appending the diff between the original and the compressed. In many cases, this still results in a size savings over the original.

You can think of this as a more detailed form of "I before E, except after C, except for species and science and..." Or, if you prefer, as continued terms of a Taylor-series expansion. The more terms you add, the more closely you approximate the original.
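A toy illustration of the first point, where the "lossy codec" is just rounding each byte to a multiple of 10 and the residual carries whatever is needed to get the original back bit-for-bit:

    import zlib

    def lossy(data: bytes) -> bytes:
        # Crude "lossy codec": round each byte to the nearest multiple of 10.
        return bytes(min(255, round(b / 10) * 10) for b in data)

    def compress(data: bytes) -> tuple[bytes, bytes]:
        approx = lossy(data)
        residual = bytes((a - b) % 256 for a, b in zip(data, approx))
        # The residual is small and repetitive, so it compresses well.
        return zlib.compress(approx), zlib.compress(residual)

    def decompress(approx_z: bytes, residual_z: bytes) -> bytes:
        approx = zlib.decompress(approx_z)
        residual = zlib.decompress(residual_z)
        return bytes((a + r) % 256 for a, r in zip(approx, residual))

    original = bytes(range(256)) * 4
    assert decompress(*compress(original)) == original   # bit-exact round trip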


And just as fast? The issue here is how you do these things both accurately and at reasonable speed.



