You ask a question. It outputs a cell. The cell contains the answer and the prompt. There is no hidden state, unlike the webapp. When constructing an API call, we use the materialized cells to reconstruct the context. Because of the notebook format, it's easy to edit everything. Because the LLM is speaking code, it's easy to check everything. You can feed the cell outputs back into the prompt.
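A rough sketch of what I mean, in Python. This is my own invention to illustrate the idea, not the actual implementation; the `Cell` type and `build_context` function are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    kind: str          # "prose", "code", or "llm"
    source: str        # what the user (or the LLM) wrote
    output: str = ""   # materialized result, if any

def build_context(cells: list[Cell]) -> str:
    """Reconstruct the API-call context purely from materialized cells.

    There is no hidden conversation state: edit any cell and the next
    call simply sees the edited version.
    """
    parts = []
    for cell in cells:
        parts.append(cell.source)
        if cell.output:
            parts.append(cell.output)
    return "\n\n".join(parts)

cells = [
    Cell("prose", "Plan: summarise the data, then chart it."),
    Cell("code", "total = sum([1, 2, 3])", output="6"),
]
print(build_context(cells))
```

The point is that the prompt is just a fold over the cells, so editing a cell edits the context.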
Cells can depend on each other, so you can build up. Cells can be prose, so you can plan. You can change the plan halfway. You can edit the AI's plan. You can forget the plan and do something else without affecting what happened after. You can keep the dialogue and context tight and information-dense.
When generating a response, the LLM considers its previous responses, which you can edit. So you get very fine control over its chain-of-thought. No need to use a system prompt to train it.
It's simple, but the emergent properties enable much more powerful collaboration.
First impression: oh my, er, what is this? Deeper reflection: wow, so I can sort of program this like a smart spreadsheet, where cells can contain LLM results which can feed into LLM prompts, and as I alter the contents of a cell, or the LLM provides new results, the whole thing is 'recalculated'. Did I get that about right? Really looking forward to seeing how the UI develops in this space; so many ideas to explore.
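Something like this is the mental model I have in mind (purely a hypothetical sketch; the cell names and dependency structure are invented): when a cell changes, every cell downstream of it gets recomputed, inputs before dependents, exactly like a spreadsheet.

```python
# Each cell maps to the set of cells it reads from.
deps = {"b": {"a"}, "c": {"a", "b"}}

def recalc_order(changed: str) -> list[str]:
    """Cells to recalculate after `changed`, in dependency order."""
    # Propagate dirtiness through the graph until a fixpoint.
    dirty = {changed}
    while True:
        more = {c for c, ins in deps.items() if ins & dirty} - dirty
        if not more:
            break
        dirty |= more
    # Emit dirty cells in topological order: a cell is ready once
    # none of its inputs are still waiting.
    order = []
    remaining = set(dirty)
    while remaining:
        ready = [c for c in remaining if not (deps.get(c, set()) & remaining)]
        for c in sorted(ready):
            order.append(c)
            remaining.discard(c)
    return order

print(recalc_order("a"))  # -> ['a', 'b', 'c']
```

So editing `a` (or the LLM re-answering it) would invalidate `b` and `c`, and they get rerun in order.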
Exactly. It needs to be experienced to understand the effect on workflow. I don't really know how to use it optimally yet, but it has very quickly learnt observable idioms without a hefty initial prompt, which it normally never can.
The cells become manipulable knowledge memes. That they are machine-checked and editable totally changes the speed at which you can teach by example, and therefore how quickly you can upskill it to do what you actually want it to do.