Ah, but that's not the case here at all. It's not an app. It's still a sandboxed web page. The only difference is the communication method. Rather than a transactional client-server relationship, we adopt the media-streaming approach post-load.
That is to say: You would load your desired website (the standard way) and then experience a "refresh-less" session for the duration of your visit.
Imagine visiting this website, Hacker News, to read a few posts/discussions. Does the idea of circumventing a refresh offend you? You could click into a threaded discussion (from the homepage) and back without triggering a reload or needing the data beforehand.
Other than the indexability problem, it seems like an ideal solution. In many use cases, the indexability problem is actually a feature too.
"I haven't understood why apps are not what you ask, so I tried to remember some well-known technologies/approaches."
I'm talking about websites that function like apps but have the privacy/security benefits of being sandboxed inside the browser (accessed by URL or link).
It seems we may be talking about different things. I am not advocating for downloadable apps (I've never liked the app store paradigm nor the access it grants random developers). I'm advocating for more capable websites (which IMO "ought" to protect user data--not leverage it).
> I'm talking about websites that function like apps but have the privacy/security benefits of being sandboxed inside the browser (accessed by URL or link).
What privacy benefits are not being realized by the current approach? If I want privacy, I can just disable JS to keep a website from learning too much about my device. But in general, privacy has been a joke since Edward Snowden's revelations.
> I'm advocating for more capable websites (which IMO "ought" to protect user data--not leverage it).
What opportunities are not being realized with the current approach? Who will be responsible for protecting user data, given that every website/resource/service strives to sell as much data about you as possible to Alphabet/Meta?
Let me clarify with some context. I've been tinkering with an infinite grid concept that consumes a streamed JSON feed (plus a sufficient data buffer to hide any delay from the user) to create and then display content with the help of a JS factory. All related media is then streamed to the browser and lazy loaded when needed. With this setup, you can traverse a database without refreshing the page by redrawing the window.
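The buffering idea above can be sketched in a few lines of plain JS. This is a minimal illustration, not the actual project code: `CardBuffer`, `fetchBatch`, and `lowWater` are invented names, and the streamed feed is simulated with a local function.

```javascript
// Minimal sketch of the buffered-feed idea (names are illustrative).
// A CardBuffer keeps a cushion of card descriptions ahead of what the
// viewport needs, so traversal never waits on the wire.
class CardBuffer {
  constructor(fetchBatch, lowWater = 10) {
    this.fetchBatch = fetchBatch; // async () => [{ id, ... }, ...]
    this.lowWater = lowWater;     // refill threshold, hides any delay
    this.cards = [];
  }
  async refillIfNeeded() {
    while (this.cards.length < this.lowWater) {
      this.cards.push(...await this.fetchBatch());
    }
  }
  async take(n) {
    await this.refillIfNeeded();
    return this.cards.splice(0, n);
  }
}

// Simulated streamed JSON feed: each call yields the next batch of
// card descriptions (the real thing would read from the stream).
let next = 0;
const feed = async () => Array.from({ length: 5 }, () => ({ id: next++ }));

const buf = new CardBuffer(feed, 10);
buf.take(3).then(cards => console.log(cards.map(c => c.id)));
```

The redraw step would then hand each taken card description to the JS factory; the buffer refills in the background, so the user never sees a fetch.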
The only potential drawback I see is whether search crawlers can index content that's introduced via JS after a page load.
Edit: It also appears to protect from scraping... so I suspect it would conflict with indexability. That's a pretty big downside if true.
It does sound like it, but no, it's just plain JS. I recorded a short clip; showing is better than attempting to explain.
Edit: It is almost AJAX. The more I think about it, the more the boundaries blur. Essentially, it's AJAX that does not fetch or receive resources directly. It interacts with a buffer that holds JSON describing the next batch of cards. The images are streamed via the <img> tag, so the buffer is small relative to the media it represents.
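The "buffer, not fetch" distinction comes down to the render step: script only reads the buffered JSON, and emitting an `<img loading="lazy">` tag hands the actual media streaming to the browser. A hedged sketch, with `cardHtml` and the card fields invented for illustration:

```javascript
// Factory turning one buffered JSON card description into markup.
// The script never fetches the image itself; the browser streams it
// lazily once the tag enters the viewport.
function cardHtml(card) {
  // card: { title, imgUrl }, a single entry from the JSON buffer
  return `<div class="card">` +
         `<img src="${card.imgUrl}" loading="lazy" alt="${card.title}">` +
         `<span>${card.title}</span>` +
         `</div>`;
}

console.log(cardHtml({ title: "demo", imgUrl: "/media/demo.jpg" }));
```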
On a very fundamental level the LLM is a function from context to the next token but when you generate text there is a state as the context gets updated with what has been generated so far.
"On a very fundamental level the LLM is a function from context to the next token but when you generate text there is a state as the context gets updated with what has been generated so far."
Its output is predicated upon its training data, not user-defined prompts.
If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.
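To make the point concrete, here is a toy loop: the "model" is a pure function from context to next token, but generation carries state because each output token is folded back into the context. (`nextToken` is a stand-in rule, not a real model.)

```javascript
// Pure: same context in, same token out. A stand-in for the model.
function nextToken(context) {
  return context.length < 5 ? `t${context.length}` : null; // null = stop
}

// Stateful: the growing context is "the story so far".
function generate(prompt) {
  const context = [...prompt];
  let tok;
  while ((tok = nextToken(context)) !== null) {
    context.push(tok); // state update folds the output back in
  }
  return context;
}

console.log(generate(["p0"])); // ["p0", "t1", "t2", "t3", "t4"]
```

The function itself never changes; the state lives entirely in the accumulating context.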
'If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.'
You're being pedantic. While the core token generation function is stateless, that function is not, by a long shot, the only component of an LLM AI. Every LLM system being widely used today is stateful. And it's not only 'UX'. State is fundamental to how these models produce coherent output.
The concept of a "mom test" is condescending as well. Many moms code.