Forget AI "code": every single request will be processed BY AI!
People aren't thinking far enough: why bother with programming at all when an AI can just do it?
It's very narrow to think that we will even need these 'programmed' applications in the future. Who needs operating systems and all that when all of it can just be AI?
In the future we don't even need hardware specifications since we can just train the AI to figure it out! Just plug inputs and outputs from a central motherboard to a memory slot.
Actually forget all that, it'll just be a magic box that takes any kind of input and spits out an output that you want!
--
Side note: It would be really interesting to see a website that generates all the pages every time a user requests them; every time you navigate back it would look much the same, but with some buttons in different places, the live chat in a different corner, and the settings now in a vertical sidebar instead of a horizontal menu.
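A minimal sketch of what that side note could look like, assuming Flask plus the OpenAI Python client; the route, the prompt and the model name are all placeholders:

    from flask import Flask
    from openai import OpenAI

    app = Flask(__name__)
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    @app.route("/settings")
    def settings():
        # Every visit asks the model for a fresh page, so the chat widget,
        # buttons and settings layout can land somewhere new each time.
        prompt = (
            "Generate a complete, self-contained HTML settings page for a web app. "
            "Vary the layout freely: vertical sidebar or horizontal menu, "
            "live chat widget in any corner."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        app.run()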
> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
An Artificial Intelligence on that level would be able to easily figure out what you actually want. We should maybe go one step further and get rid of the inputs? They just add all of that complexity when they're not even needed.
At some point we just need to acknowledge that such speculation is like asking for a magic genie in a bottle rather than discussing literal technology.
"Magic genie in a bottle" is a very good way of framing what an AGI would (will) be if implemented, complete with all the obvious failure modes that end disastrously for the heroes in magic genie stories.
Also consider that you won't know whether the answer relates to what you want, because you have no reference to help you distinguish between reality and hallucination.
And have been conditioned to accept LLM responses as reality.
And if it wasn't what you wanted, well maybe you are wrong. An AI that intelligent is always right after all.
Seriously though, at some point AI does actually need an input from the user, if we are imagining ourselves still as the users.
If instead we just let AI synthesise everything, including our demands and desires, then why bother being involved at all. Take the humans out of the loop. Their experience of the outputs of the AI is actually a huge bottleneck, if we skip showing the humans we can exist so much faster.
Yeah, but the world they created wasn't one anyone wanted or asked for, any more than our own reality caters to us; it is intended to portray life as it was in the nineties. To accept this as ideal is to accept that we currently live in an ideal world, which is extremely difficult to accept.
In one of the movies it's actually explained that the machines originally created a utopia for humanity, but it was bad for "engagement" and "retention" and they had to pivot to the nineties simulator, which was better accepted.
They were simply asking whether Babbage had prepared the machine to always give the right answers to the specific questions he knew he was going to enter, i.e. whether he was a fraud.
Enter 2+2. Receive 4. That's the right answer. If you enter 1+1, will you still receive 4? It's easy to make a machine that always says 4.
Mr. Babbage apparently wasn't familiar with the idea of error correction. I suppose it's only fair; most of the relevant theory was derived in the 20th century, AFAIR.
No, error correction in general is a different concept than GIGO. Error correction requires someone, at some point, to have entered the correct figures. GIGO tells you that it doesn't matter if your logical process is infallible, your conclusions will still be incorrect if your observations are wrong.
GIGO is an overused idea that's mostly meaningless in the real world. The only way for the statement to be true in a general sense is if your input is uniformly random. Anything else carries some information. In practice, for Babbage's quip to hold, the interlocutor doesn't need to merely supply any wrong figures, they need to supply ones specifically engineered to be uncorrelated with the right figures.
Again, in a general sense. Software engineers are too used to computers being fragile wrt. inputs. Miss a semicolon and the program won't compile (or worse, if it's JavaScript). But this level of strictness wrt. inputs is a choice in program design.
Error correction was just one example anyway. Programmers may be afraid of garbage in their code, but for everyone else, a lot of software is meant to sift through garbage, identifying and amplifying desired signals in noisy inputs. In other words, they're producing right figures in the output out of wrong ones in the input.
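A trivial sketch of the kind of thing I mean: a moving average that produces a reasonable figure even when one of the input figures is garbage (the numbers are made up):

    def moving_average(samples, window=5):
        # Smooth noisy samples so a single garbage value doesn't dominate.
        out = []
        for i in range(len(samples)):
            chunk = samples[max(0, i - window + 1):i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    noisy = [10.2, 9.7, 10.5, 9.9, 30.0, 10.1, 9.8]  # one wrong figure in the input
    print(moving_average(noisy))  # the spike is damped rather than propagated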
I don't think machines should rely on opaque logic to assume and "correct errors" in user input. It's more accurate to "fail" than to hand out an assumed output.
And also:
> they need to supply ones specifically engineered to be uncorrelated with the right figures.
I assume most people (including me) will understand it this way when told to "input wrong figures".
> In other words, they're producing right figures in the output out of wrong ones in the input.
This does not refute the concept of GIGO, nor does it have anything to do with it. You appear to have missed the point of Babbage's statement. I encourage you to meditate upon it more thoroughly. It has nothing to do with the statistical correlation of inputs to outputs, and nothing to do with information theory. If Babbage were around today, he would still tell you the same thing, because nothing has changed regarding his statement, because nothing can change, because it is a fundamental observation about the limitations of logic.
I don't know what the point of Babbage's statement was; it makes little sense other than a quip, or - as 'immibis suggests upthread[0] - a case of Babbage not realizing he's being tested by someone worried he's a fraud.
I do usually know what the point of any comment quoting that statement of Babbage's is, and in such cases, including this one, I almost always find it wanting.
I suppose spell checking is a sort of literal error correction. Of course this does require a correct list of words, and for misspellings not to be on that list.
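Something like this toy checker, say, using Python's difflib and a stand-in word list (a real one obviously needs a proper dictionary):

    from difflib import get_close_matches

    # Stand-in word list; a real checker needs a proper dictionary,
    # and the misspellings must of course not be on it.
    DICTIONARY = ["spinach", "spelling", "correction", "renewable", "energy"]

    def correct(word):
        match = get_close_matches(word, DICTIONARY, n=1, cutoff=0.6)
        return match[0] if match else word  # give up rather than guess wildly

    print(correct("renewible"))  # -> "renewable"
    print(correct("speling"))    # -> "spelling"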
Honestly, I see this as being not about error correction but about divining with perfect accuracy what you want. And when you say it that way, it starts sounding like a machine for predicting the future.
Yes, with sufficient context, that's what I do every day, as presentation authors, textbook authors and the Internet commentariat alike all keep making typos and grammar errors.
You can't deal with humans without constantly trying to guess what they mean and use it to error-correct what they say.
(This is a big part of what makes LLMs work so well on a wide variety of tasks, where previous NLP attempts failed.)
I often wish LLMs would tell me outright the assumptions they make on what I mean. For example, if I accidentally put “write an essay on reworkable energy”, it should start by saying “I'm gonna assume you mean renewable energy”. It greatly upsets me that I can't get it to do that just because other people who are not me seem to find that response rude for reasons I can't fathom, so it was heavily trained out of the model.
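You can at least try spelling that wish out in a system prompt, roughly like this, in the usual chat-message format; the wording is just my guess, and whether a given model actually honours it is another matter:

    # Hypothetical system prompt, not a documented setting of any provider.
    SYSTEM_PROMPT = (
        "Before answering, state in one sentence any assumption or correction "
        "you are making about the request, e.g. \"I'm going to assume you mean "
        "renewable energy.\" Then answer normally."
    )

    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "write an essay on reworkable energy"},
    ]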
Huh, I'd expect it to do exactly what you want, or some equivalent of it. I've never noticed LLMs silently making assumptions about what I meant wrt. anything remotely significant; they do a stellar job of being oblivious to typos, bad grammar and other fuckups of ESL people like me, and (thankfully) they don't comment on that, but otherwise they've always restated my requests and highlighted when they're deviating from a direct/literal understanding.
Case in point, I recently had ChatGPT point out, mid-conversation, that I'm incorrectly using "disposable income" to mean "discretionary income", and correctly state this must be the source of my confusion. It did not guess that from my initial prompt; it took my "wrong figures" at face value and produced answers that I countered with some reasoning of my own; only then, it explicitly stated that I'm using the wrong term because what I'm saying is correct/reasonable if I used "discretionary" in place of "disposable", and proceeded to address both versions.
IDK, but one mistake I see people keep making even today is telling the models to be succinct, concise, or otherwise minimize the length of their answer. For LLMs, that directly cuts into their "compute budget", making them dumber. Incidentally, that could also be why one would see the model make more assumptions silently: those are among the first things to go when you're trying to write concisely. "Reasoning" models are more resistant to this, fortunately, as the space between the "<thinking> tags" is literally the "fuck off user, this is my scratchpad, I shall be as verbose as I like" space, so one can get their succinct answers without compromising the model too badly.
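A rough sketch of one workaround for non-reasoning models: ask for a verbose scratchpad plus a one-sentence final answer, and only show the user the latter. The tag and the ANSWER: label here are arbitrary conventions of mine, not anyone's API:

    PROMPT_SUFFIX = (
        "Work through the problem inside <scratchpad>...</scratchpad>, being as "
        "verbose as you like there. Then give the final answer in one sentence "
        "after the line 'ANSWER:'."
    )

    def extract_answer(completion_text: str) -> str:
        # Discard the scratchpad; show the user only the terse final answer.
        return completion_text.split("ANSWER:", 1)[-1].strip()

The model still gets its tokens to think with; the user still gets their one sentence.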
> It would be really interesting to see a website that generates all the pages every time a user requests them
We tried a variant of this for e-commerce and my take is that the results were significantly better than the retailer's native browsing experience.
We had a retailer's entire catalog processed and indexed with a graph database and embeddings, which we used to generate "virtual shelves" on the fly each time users searched. You can see the results and how they compare to the retailer's native results.
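Very roughly, the embedding half of that idea looks something like the sketch below; embed() is a stand-in for a real embedding model, the catalog is made up, and the graph database part is omitted entirely:

    import numpy as np

    def embed(text):
        # Stand-in for a real embedding model, just so the sketch runs;
        # with these toy vectors the ranking is arbitrary.
        rng = np.random.default_rng(sum(map(ord, text)))
        return rng.normal(size=64)

    CATALOG = ["fresh spinach", "frozen spinach", "canned spinach", "spinach pasta"]
    VECTORS = {item: embed(item) for item in CATALOG}

    def virtual_shelf(query, k=3):
        # Rank catalog items by cosine similarity to the query embedding.
        q = embed(query)
        scored = sorted(
            ((float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))), item)
             for item, v in VECTORS.items()),
            reverse=True,
        )
        return [item for _, item in scored[:k]]

    print(virtual_shelf("spinach"))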
It is nicer, but that interface latency would turn me off. When I search for groceries online, I want simple queries, e.g. "spinach", to return varieties (e.g. frozen, fresh, and canned) of spinach as fast as possible.
I don't know where we are on the LLM innovation S-curve, but I'm not convinced that plateau is going to be high enough for that. Even if we get an AI that could do what you describe, it won't necessarily be able to do it efficiently. It probably makes more sense to have the AI write some traditional computer code once which can be used again and again, at least until requirements change.
The alternative is basically https://ai-2027.com/ which obviously some people think is going to happen, but it's not the future I'm planning for, if only because it would make most of my current work and learning meaningless. If that happens, great, but I'd rather be prepared than caught off guard.
That leads to a kind of fluid distinction similar to interpreted vs. compiled languages.
You tell the AI what you want it to do. The AI does what you want. It might process the requests itself, working at the "code level" of your input, which is the prompt. It might also generate some specific bytecode, taking time and effort that is made up for by processing inputs more efficiently. You could have something like JIT, where the AI decides which program to use for a given request, occasionally making and caching a new one if none fit.
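A toy sketch of that JIT analogy; ask_model() and generate_code() are placeholders for real model calls, and the hot-path threshold is arbitrary:

    HANDLERS = {}  # request kind -> "compiled" Python callable
    COUNTS = {}

    def ask_model(kind, payload):
        # Placeholder: the model processes this request itself (interpreted path).
        return f"model-handled {kind}: {payload}"

    def generate_code(kind):
        # Placeholder: the model writes a specialised handler once (compiled path).
        return ("def handler(payload):\n"
                f"    return 'generated-handler {kind}: ' + str(payload)\n")

    def handle(kind, payload, hot_threshold=10):
        COUNTS[kind] = COUNTS.get(kind, 0) + 1
        if kind not in HANDLERS and COUNTS[kind] > hot_threshold:
            namespace = {}
            exec(generate_code(kind), namespace)  # "JIT": compile and cache it
            HANDLERS[kind] = namespace["handler"]
        if kind in HANDLERS:
            return HANDLERS[kind](payload)   # fast cached path
        return ask_model(kind, payload)      # fall back to the model

The interpreted path stays maximally flexible; the cached handler is where the efficiency comes back, at least until the requirements change and the cache has to be invalidated.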
Yeah, AI, at least right now, is so energy inefficient. There is only so much sun hitting the Earth that we can afford to be stupid with. Using AI for everything makes Electron apps seem efficient! Once the hype runs out and you pay full price, much of today's AI will be unattractive. Hopefully that leads to more efficient AI (which I suspect is more interesting anyway).
I think you're joking in general, but your side note is already extremely close to websim[0], which is an art-adjacent site that takes a URL as a prompt and then creates the site. The point is effectively a hallucinated internet, and it is a neat toy.
Exactly. People are too focused on shoehorning AI into today’s “humans driving computers” processes, instead of thinking about tomorrow’s “computers driving computers” processes. Exactly as you say, today it’s more efficient to create a “one size fits all” web site because human labor is so expensive, but with computer labor it will be possible to tailor content to each user’s tastes.
> Side note: It would be really interesting to see a website that generates all the pages every time a user requests them; every time you navigate back it would look much the same, but with some buttons in different places, the live chat in a different corner, and the settings now in a vertical sidebar instead of a horizontal menu.
Please don't give A/B testers ideas, they would do that 100% unironically given the chance.
I know this is satire, but you're right. In the future, loose requirements will produce statically executable plans that run fast on future invocations.
Our jobs will be replaced. It's just a matter of when. I'm very pessimistic about the near term. But long term, there's no way the jobs we do survive in their current form.
If I were to guess, I'd say 20 or more years? But who knows.
> Side note: It would be really interesting to see a website that generates all the pages every time a user requests them; every time you navigate back it would look much the same, but with some buttons in different places, the live chat in a different corner, and the settings now in a vertical sidebar instead of a horizontal menu.
Turtles all the way down: why bother with a GPU when you can get AI to guess the attention operation. Just tell the AI what the AI should do and it can use AI to compute the AI!