> Respectfully, you’re probably not seeing that question asked and answered because it doesn’t quite make sense as phrased.
I think I see what you are trying to say, but I'm unsure whether you are actually seeing what I am asking.
> The same as your calculator doesn’t do anything until you put in some numbers and operators, an LLM doesn’t do anything unless you give it a prompt and some technical parameters.
This seems to be the crux of the misunderstanding. I thought I explained it, but let me try again.
ChatGPT is based on text input and text output, but you can "train" it to do certain things. Imagine that we train it such that, when it outputs "HTTP GET example.com", the next input it receives is the HTTP GET response for example.com. Based on that input, it could issue whatever output it wants next, which would probably be another HTTP request, which would produce another HTTP response, which would prompt another request, and so on.
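To make that loop concrete, here is a rough sketch in Python. To be clear, this is not how ChatGPT actually works and `llm_generate` is just a placeholder stub, not a real API; the sketch also puts the loop in an external wrapper around the model rather than in its training. But the back-and-forth is the one I'm describing.

```python
# Hypothetical sketch only: the model emits "HTTP GET <url>", we fetch that URL,
# and the response body becomes the model's next input.
import itertools
import requests

# Placeholder for a real model call: this stub "asks" for one page and then stops.
# A real model would decide its own next output from the conversation so far.
_canned = itertools.chain(["HTTP GET https://example.com"], itertools.repeat("done"))

def llm_generate(conversation: str) -> str:
    return next(_canned)

conversation = "You can browse the web by replying with 'HTTP GET <url>'.\n"
for _ in range(10):                        # hard cap so the loop cannot run forever
    reply = llm_generate(conversation)
    conversation += reply + "\n"
    if not reply.startswith("HTTP GET "):
        break                              # the model chose to do something else
    url = reply[len("HTTP GET "):].strip()
    body = requests.get(url, timeout=10).text
    conversation += body[:2000] + "\n"     # truncate so it fits a small context window
```

The only new machinery in that sketch is the wrapper; the model itself is still just mapping text in to text out, which is exactly why it looks so easy to add.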
My point is that this seems like a very simple thing to train a GPT model to do. For the engineers who work on GPT, it seems it would be trivial to add this capability, so we can suppose a world where it exists. (Am I wrong on that? I want to know if this would be non-trivial to add as a capability.)
> There are technical and computational limits that make both your prompt and the token limit fairly small. Several hundreds of words at most
I am very encouraged to hear this, and I want to know more. Why? Why are there limits to the number of tokens? Exactly why? Has anyone ever written a paper about that? Has anyone ever connected this concept of "token limits" to the idea that "no harm could be done," the way you are doing in response to my question? I don't doubt that they have, but I've been searching and I haven't found it.
> Now, you can give it “access to the internet” as part of responding to your prompt and fulfilling your token limit, and that’s roughly what Microsoft has done with Bing Assistant
This is admittedly a tangent, but do we actually know this to be true? Some theories suggest that "Sydney," or the Bing chatbot, only has access to a search index, and cannot make live HTTP requests.
Continuing the tangent for a moment, this is a big part of why I asked this question originally. If you create example.com/xyzabc and ask Bing to summarize it, will it make a live HTTP request? Or, if that URL is not in the search index yet, will it know nothing? The implications may be profound, given how Bing Bot / Sydney has expressed its "desire" to hack nuclear launch codes. Could there be a lot riding on whether that system can make live HTTP requests? I'm positing that we can't answer that question right now, because we don't know what would happen if it could.
Or do we? And if so, do we know through testing, or through theory? I'm admitting ignorance, and saying I haven't read an answer from any source that falls into either category.
> The ramifications really aren’t that big, and we’re probably at least five or ten years of AI research and compute hardware development from making them interestingly bigger
But why? I mean, exactly, why? Is there a theoretical foundation for your claim? Or an experimental one? I'm searching for it.
Because of how GPT works, the resources needed for good inference (generating output) grow nonlinearly with the number of tokens involved: in the standard transformer design, every token attends to every other token, so the cost grows roughly quadratically with context length, and there’s a practical wall where you simply run out of resources to apply.
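To put a rough number on that, here’s a back-of-envelope sketch. It assumes the standard self-attention design where every token attends to every other token; the layer and head counts are made up for illustration, not taken from any particular model.

```python
# Back-of-envelope only: the attention score matrix is roughly n x n per head per
# layer, so doubling the context length about quadruples this slice of the work.
layers, heads = 96, 96                    # illustrative sizes, not a real model's
for n in (512, 2048, 8192):               # context lengths in tokens
    scores = layers * heads * n * n       # pairwise attention scores per forward pass
    print(f"context {n:>5} tokens -> ~{scores:,} attention scores")
```

And that quadratic term is only one slice of the work; the memory needed to hold all of those intermediate values grows right along with it.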
It’s not very efficient. It’s like if your calculator could use a little solar power thingie for numbers that were only a few digits, but needed a diesel generator to crunch on 8-digit numbers, and a nuclear plant to crunch on 12-digit ones. Practically, you’d have no choice but to limit yourself to something manageable.
Future models may be more efficient, and future hardware solutions may be more efficient, but those things don’t get sorted out overnight any more than fusion power.
Beyond that, I think it’s important that you understand that Bing Assistant doesn’t express desires. It picks common sequences of words based on its training data. It doesn’t know what nuclear codes are. It just knows what it looks like for a message about wanting nuclear codes to follow some other message in a dialog (probably a pattern it picked up on a forum like Reddit), and so it dutifully put that text after the prompt it had been given. There’s no will or consistency to it.
With enough resources, you could drive it through a feedback loop where it kept prompting itself and see what happens, but that loop would just produce noise like any other simple feedback loop: it would either keep homing in on the most boring, most common continuation of the last thing it gave itself, or it would start diverging off into nonsense. Because it’s so inefficient, you can’t give it enough resources for it to stay stable and interesting for very long.
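If it helps to see the shape of it, here’s a sketch of that feedback loop. Again, `llm_generate` is a throwaway stub, not a real API; the point is the structure: output becomes input, and the finite context forces you to keep throwing away the oldest text.

```python
# Sketch of a self-prompting feedback loop. Each pass is conditioned only on the
# model's own previous output, and the finite "context window" means older text
# is constantly truncated away.
MAX_CONTEXT_CHARS = 8000                  # stand-in for a token limit

def llm_generate(prompt: str) -> str:
    """Throwaway stub so the loop runs; a real model call would go here."""
    return "and then it said: " + prompt[-40:]

context = "Say something, then keep going."
for step in range(50):
    continuation = llm_generate(context)
    context = (context + "\n" + continuation)[-MAX_CONTEXT_CHARS:]   # drop oldest text
```

Run long enough, that’s all it is: a loop chewing on its own output, settling into repetition or drifting into noise, not an agent pursuing a goal.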