I understand the hype. I think most humans understand why a machine that can respond to a query like nothing before in the history of mankind is amazing.
What you're going through is hype overdose. You're numb to it. I can get it if someone disagrees, but it's a next-level failure to understand human behavior if you don't get the hype at all.
There exist living human beings, still children or with brain damage, whose intelligence is comparable to an LLM's, and we classify those humans as conscious but we don't with LLMs.
I'm not trying to say LLMs are conscious, just that the creation of LLMs marks a significant turning point. We crossed a barrier two years ago somewhat equivalent to landing on the moon, and I am just dumbfounded that someone doesn't understand why there is hype around this.
The first plane ever flies, and people think "we can fly to the moon soon!".
Yet powered flight has nothing to do with space travel, no connection at all. Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech. No matter how you try.
And yet, the moon was reached a mere 66 years after the first powered flight. Perhaps it's a better heuristic than you are insinuating...
In all honesty, there are lots of connections between powered flight and space travel. Two obvious ones are "light and strong metallurgy" and "a solid mathematical theory of thermodynamics". Once you can build lightweight and efficient combustion chambers, a lot becomes possible...
Similarly, with LLMs, it's clear we've hit some kind of phase shift in what's possible - we now have enough compute, enough data, and enough know-how to be able to copy human symbolic thought by sheer brute-force. At the same time, through algorithms as "unconnected" as airplanes and spacecraft, computers can now synthesize plausible images, plausible music, plausible human speech, plausible anything you like really. Our capabilities have massively expanded in a short timespan - we have cracked something. Something big, like lightweight combustion chambers.
The status quo ante is useless for predicting what will happen next.
> By that metric, there are lots of connections between space flight and any other aspect of modern society.
Indeed. But there's a reason "aerospace" is a word.
> No plane, relying upon air pressure to fly, can ever use that method to get to the moon
No indeed. But if you want to build a moon rocket, the relevant skillsets are found in people who make airplanes. Who built Apollo? Boeing. Grumman. McDonnell Douglas. Lockheed.
I feel like aeronautics and astronautics are deeply connected. Both depend upon aerodynamics, 6dof control, and guidance in forward flight. Advancing aviation construction techniques were the basis of rockets, etc.
Sure, rocketry to LEO asks more in strength of materials, and aviation doesn’t require liquid fueled propulsion or being able to control attitude in vacuum.
These aren’t unconnected developments. Space travel grew straight out of aviation and military aviation. Indeed, look at the vertical takeoff aircraft from the 40s and 50s, evolving into missile systems with solid propulsion and then liquid propulsion.
I thought your point about aerospace was terrible. And since you're insisting I follow you further into the analogy, I think it's terrible here too.
LLMs may be a key building block for early AGI. The jury is still out. Will a LLM alone do it? No. You can't build a space vehicle from fins and fairings and control systems alone.
O1 can reach pretty far beyond past LLM capabilities by adding infrastructure for metacognition and goal seeking. Is O1 the pinnacle, or can we go further?
In either case, planes and rocket-planes did a lot to get us to space-- they weren't an unrelated evolutionary dead end.
> Yet powered flight has nothing to do with space travel, no connection at all.
The relationships you are describing are why airflight/spaceflight and AI/AGI are a good comparison.
We will never get AGI from an LLM. We will never fly to the moon via winged flight. These are examples of how one method of doing a thing will never succeed at another.
Citing all the similarities between airflight and spaceflight makes my point! One may as well point out that video games run on a computer platform and LLMs run on a computer platform and say "It's the same!", as say airflight and spaceflight are the same.
Note how I was very clear, and very specific, and referred to "winged flight" and "low/high pressure", which will never, ever, ever get one even to space. Nor allow anyone to navigate in space. There is no "lift" in space.
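To spell that out with the textbook lift relation (nothing exotic, just the standard formula): L = (1/2) · ρ · v² · S · C_L, where ρ is air density, v is airspeed, S is wing area, and C_L is the lift coefficient. Lift is proportional to air density, and in vacuum ρ = 0, so lift is zero no matter how fast you go or how the wing is shaped.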
Unless you can describe to me how a fixed wing with low/high pressure is used to get to the moon, all the other similarities are inconsequential.
Good grief, people are blathering on about metallurgy. That's not a connection, it's just modern tech, has nothing to do with the method of flying (low/high pressure around the wing), and is used in every industry.
I love how incapable everyone has been in this thread of concept focus, incapable of separating the specific from the generic. It's why people think, generically, that LLMs will result in AGI, too. But they won't. Ever. No amount of compute will generate AGI via LLM methods.
LLMs don't think, they don't reason, they don't infer, they aren't creative, they come up with nothing new, it's easiest to just say "they don't".
One key aspect here is that knowledge has nothing to do with intelligence. A cat is more intelligent than any LLM that will ever exist. A mouse. Correlative fact regurgitation is not what intelligence is, any more than a book on a shelf is intelligence, or the results of Yahoo search 10 years ago were.
The most amusing is when people mistake shuffled up data output from an LLM as "signs of thought".
Your point about spaceflight is good enough, despite some quibbling from commenters.
But I haven't seen where you make a compelling argument why it's the same thing in AI/AGI.
In your old analogy, we're all still the guys on the ground saying it'll work. You're saying it won't. But nobody has "been to space" yet. You have no idea if LLMs will take us to AGI.
I personally think they'll be the engine on the spaceship.
From where I sit, I don't even see LLMs as being some sort of memory store for AGIs. The knowledge isn't reliable enough. An AGI would need to ingress and then store knowledge in its own mind, not use an LLM as a reference.
Part of what makes intelligence, intelligent, is the ability to see information and learn on the spot. And further to learn via its own senses.
Let's look at bats. A bat is very close to humans, genetically. Yet if somehow we took "bat memories" and were able to implant them in humans, how on earth would that help? How would you make use of bat memories of navigating and "seeing" via sound? Of flying? Of bat social structure?
For example, we literally don't have the brain matter to perceive space the way bats do. So if we accessed those memories, they would be so foreign that their usefulness would be greatly reduced. They'd be confusing, unhelpful.
Think of it. Ingress of data and information is sensorially derived. Our mental image of the world depends upon this data. Our core being is built upon this foundation. An AGI using an LLM as "memories" would be experiencing something just as foreign.
So even if LLMs were used to allow an AGI to query things, they wouldn't be used as "memory". And the type of memory store that LLMs exhibit is most certainly not how intelligence as we know it stores memory.
We base our knowledge upon directly observed and verified fact, and further upon the senses we have. And all information derived from those senses is filtered and processed by specialized parts of our brains before we even "experience" it.
Our knowledge is so keyed in and tailored directly to our senses, and the processing of that data, that there is no way to separate the two. Our skill, experience, and capabilities are "whole body".
An LLM is none of this.
The only true way to create an AGI via LLMs would be to simulate a brain entirely, and then start scanning human brains during specific learning events. Use that data to LLM your way into an averaged and probabilistic mesh, and then use that output to at least provide full sense memory input to an AGI.
Even so, I suspect that may be best used to create a reliable substrate. Use that method to simulate and validate and modify that substrate so it is capable of using such data, thereby verifying that it stands solid as a model for an AGI's mind.
Then wipe and allow learning to begin entirely separately.
Yet to do even this, we'd need to ensure that the sensory input enables, at least to a degree, the same sort of sense experience. I think Neuralink might be best placed to enable this: as it works at creating an interface for, say, sight and other senses... it could then use this same series of mapped inputs for a simulated human brain.
This of course works best with a physical form that can also taste the environment around it, and who is also working on an actual android for day-to-day use?
You might say this focuses too much on creating a human style AGI, but frankly it's the only thing we can try to make and work into creating a true AGI. We have no other real world examples of intelligence to use, and every brain on the planet is part of the same evolutionary tree.
So best to work with something we know, something we're getting more and more apt at understanding, and, with brain implants of the calibre and quality that Neuralink is devising, something we can at least understand in far more depth than ever before.
> The first plane ever flies, and people think "we can fly to the moon soon!".
> Yet powered flight has nothing to do with space travel, no connection at all.
You eventually said winged flight much later-- trying to make your point a little more defensible. That's why I started explaining to you the very big connections between powered flight and space travel ;)
I pretty much completely disagree with your wall of text, and it's not a very well reasoned defense of your prior handwaving. I'm going to move on now.
> Yet powered flight has nothing to do with space travel, no connection at all. Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech. No matter how you try.
Winged flight == "low/high pressure" flight, it's how an airplane wing works and provides lift.
Maybe you just said what you wanted to say extremely poorly. Like "wing technology doesn't get you closer to space." I mean, of course, fins and distribution of pressure are important, but a relatively small piece.
On the other hand, powered flight and the things we started building for powered flight got us to the moon. "Powered flight" got us to turbojets, and turbomachinery is the number one key space launch technology.
> Maybe you just said what you wanted to say extremely poorly.
Or maybe you didn't read closely? You claimed I didn't mention winged flight, yet I mentioned that and the method of winged flight. Typically, that means you say "Oh, sorry, I missed that" instead of blaming others.
I have refuted technology paths in prior posts. Refute those comments if you wish, but just restating your position without refuting mine doesn't seem like it will go anywhere.
And if you don't want a reply? Just stop talking. Don't play the "Oh, I'm going to say things, then say 'bye' to induce no response" game.
You gave a big wall of text. You made statements that can't really be defended. If you'd been talking just about wings, you could have made that clear (and not in one possible reading of a sentence that follows an absolutist one).
> Just debate fairly.
The thing I felt like responding to, you were like "noooo, I didn't mean that at all."
> > > > > Yet powered flight has nothing to do with space travel, no connection at all.
Pretty absolute statement.
> > > > > Gliding in the air via low/high pressure doesn't mean you'll get near space, ever, with that tech.
Then, I guess you're saying this sentence is trying to restrict it to "airfoils aren't enough to go to space", and not talk about how powered flight led directly to space travel... through direct evolution of propulsion (turbo-machinery), control, construction techniques, analysis methods, and yes, airfoils.
I guess we can stay here debating the semantics of what you originally said if you really want to keep talking. But since you're walking away from what I saw as your original point, I'm not sure what you see as productive to say.
That’s not true. There was not endless hype about flying to the moon when the first plane flew.
People are well aware of the limits of LLMs.
As slow as the progress is, we now have metrics and measurable progress towards AGI, even with clear signs of the limitations of LLMs. We never had this before and everyone is aware of this. No one is delusional about it.
The delusion is more around people who think other people are making claims of going to the moon in a year or something. I can see it in 10 to 30 years.
> That's not true. There was not endless hype about flying to the moon when the first plane flew.
I didn't say there was endless hype, I gave an example of how one technology would never result in another... even if to a layperson it seems connected.
(The sky, and the moon, are "up")
> People are well aware of the limits of LLMs.
Surely you mean "Some people". Because the point in this thread is that there is a lot of hype, and FOMO, and "OMG AGI!" chatter running around LLMs. Which will never ever make AGI.
You said you didn’t comprehend why there was hype and I explained why there was hype.
Then you made an analogy and I said your analogy is irrelevant because nobody thinks LLMs are agi nor do they think agi is coming out of LLMs this coming year.
Actually, plenty of people think LLMs will result in AGI. That's what the hype is about, because those same people think "any day now". People are even running around saying that LLMs are showing signs of independent thought, absurd as it is.
And hype doesn't mean "this year" regardless.
Anyhow, I don't think we'll close this gap between our assessments.
And yet, the overall path of unconcealment of science and technological understanding definitely traces a line from the Wright brothers to Vostok 1. There is no reason to think a person from the time of the Wright brothers would find it a simple line, easily predicted by the methods of their times, but I doubt any person who worked on Vostok 1 would say that their efforts were epochally unrelated to the efforts of the Wright brothers.