
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?



You can perfectly well try things and learn without being embodied. The analogy to how humans learn only goes so far; it's myopic to think anything else is impossible. It's already happening.

The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build a series of exercises to learn from.

And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyways.

It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.

Look at basic, boring Go self-playing AIs. That's a task with about the same amount of hands-on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need very much contact with the real world at all. Well, self-play works just fine. It does do self-improvement without any of your mystical philosophical requirements.
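
To make the self-play point concrete, here's a toy version I sketched up (purely illustrative, nothing to do with any real Go engine): both sides share one value table and learn the game of Nim entirely from the outcomes of games played against themselves.

    import random
    from collections import defaultdict

    # Toy self-play sketch (illustrative only, not AlphaGo/AlphaZero code).
    # Both players share one value table; the only training signal is who
    # won each game -- no outside data beyond the rules of the game.

    def legal_moves(pile):
        return [m for m in (1, 2, 3) if m <= pile]

    def choose(values, pile, epsilon):
        moves = legal_moves(pile)
        if random.random() < epsilon:
            return random.choice(moves)                         # explore
        return max(moves, key=lambda m: values[(pile, m)])      # exploit learned values

    def self_play(games=20000, epsilon=0.2, lr=0.1):
        values = defaultdict(float)              # (pile, move) -> estimated win value
        for _ in range(games):
            pile, player, history = 10, 0, []
            while pile > 0:
                move = choose(values, pile, epsilon)
                history.append((player, pile, move))
                pile -= move
                player = 1 - player
            winner = 1 - player                  # whoever took the last object wins
            for p, s, m in history:              # the game's own result is the feedback
                target = 1.0 if p == winner else 0.0
                values[(s, m)] += lr * (target - values[(s, m)])
        return values

    values = self_play()
    # after training, the greedy move from a pile of 7 should usually be 3
    # (leaving a multiple of 4), which is the known optimal play
    print(max(legal_moves(7), key=lambda m: values[(7, m)]))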

With coding it's harder to judge the result; there's no clear win-or-lose condition. But it's very amenable to trying things out and seeing if you roughly reached your goal. If self-training works for coding, that's all you need.
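
Here's the shape of what I mean, as a toy (a real setup would obviously sample candidates from a model rather than at random -- this is just the try-and-check loop):

    import random

    # Toy "generate and verify" loop: candidates are tiny arithmetic expressions,
    # the goal is a unit test, and only candidates that pass are kept.
    # In a real pipeline the kept samples would become new training data.

    def passes_tests(expr):
        try:
            f = eval("lambda x: " + expr)
            return all(f(x) == 2 * x + 1 for x in range(10))   # the spec / unit test
        except Exception:
            return False

    def random_expr():
        ops, terms = ["+", "-", "*"], ["x", "1", "2", "3"]
        return " ".join([random.choice(terms), random.choice(ops),
                         random.choice(terms), random.choice(ops),
                         random.choice(terms)])

    accepted = set()
    for _ in range(20000):
        expr = random_expr()
        if passes_tests(expr):
            accepted.add(expr)

    print(accepted)   # e.g. {'2 * x + 1', 'x + x + 1', '1 + 2 * x', ...}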


> It fits the function anyways.

And then it works well when interpolating, less so when extrapolating. Not sure how much novelty we can get from interpolation...
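
Just to illustrate the gap (a toy curve fit, not a claim about any particular model):

    import numpy as np

    # Fit a cubic to sin(x) using samples from [0, 3], then compare the error
    # inside the training range (interpolation) vs. outside it (extrapolation).

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 3, 200)
    coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

    x_in = np.linspace(0, 3, 100)    # inside the training range
    x_out = np.linspace(5, 8, 100)   # outside it

    err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()
    err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()

    print(err_in, err_out)   # the extrapolation error is far larger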

> It is much easier to find new questions and new problems than to answer them

Which doesn't mean, at all, that it is easy to find new questions about stuff you can't imagine.


> You can perfectly well try things and learn without being embodied.

A brilliant 'brain in a vat' can come up with novel answers to questions, but outside of narrow categories like pure mathematics and logic, which can be internally validated, the brain can't know how correct or incorrect its novel answers are without some way to objectively test, observe, and validate them in the relevant domain (i.e. 'the real world'). A model can only be as useful as its parameters correctly reflect the modeled target. Even very complex and detailed simulations tend to de-correlate quickly when they run repeatedly untethered from ground truth. Games like Go have clear rule sets. Reality doesn't.


That was better said in some ways than my own comment.


Thanks! And I just saw, in a parallel post currently on the HN home page, that John Carmack said a similar thing in his lecture notes.

> "offline training can bootstrap itself off into a coherent fantasy untested by reality."

"a coherent fantasy untested by reality" is a lovely turn of phrase.


The Yudkowskiites are all about coherent fantasies untested by reality, as are… really… a lot of philosophers throughout history. Maybe most.

I fundamentally do not believe in knowing without sensing or learning without experiencing. Of course it need not be direct experience. You can “download” information. But that information had to be gathered somehow at some point. There is only so much training data.

As I said — I don’t dismiss the idea of AI challenging human intellect or replacing human jobs. An AI “merely” as smart as a human but tireless and faster could seem superhuman. I am just intensely skeptical of the idea of something learning to self improve and then magically taking off into some godlike superintelligence realm far beyond what is latent in or implied by its training data. That would be an informatic perpetual motion machine. It would, in fact, be magic, in the fantasy sense.


But how does AI try and learn anything that’s not entirely theoretical? Your example of Go contradicts your point. Deep learning made a model that can play Go really well, but as you say, it’s a finite problem disconnected from real-world implications, ambiguities, and unknowns. How does AI deal with unknowns about the real world?


I don't think putting them in the real world during training is a short-term goal, so you won't find this satisfying, but I would be perfectly okay with leaving that for later. If we can reach AI coders that are superhuman at self-improving, we will have increased our capacity to solve problems so much that it is better to wait and solve the problem later than to try to handwave a solution now.

Maybe there is some barrier that requires physical interaction with the real world; that's possible. But just looking at current LLMs, they seem plenty comfortable with implications, ambiguities, and unknowns. There's a sense in which we still see them as primitive mechanical robots, when they already understand language and predict written thoughts in all their messiness and uncertainty.

I think we should focus on the easier problem of making AIs really good at theoretical tasks - electronic environments are much cheaper and faster than the real world - and we may find out that it's just another one of those things like Winograd schemas, writing poetry, passing a Turing test, or making art that most people can't tell apart from human art; things that were uniquely human or that we thought would definitely require AGI, but that are now boring and obviously easy.


> it's myopic to think anything else is impossible. It's already happening.

Well, hey, I could be wrong. If I am, here's a weird thought: maybe that's our Fermi paradox answer.

If it's possible to reason ex nihilo to truth and reality, then reality and the universe are, beyond a point, superfluous. Maybe what happens out there is that intelligences go "foom," become superintelligences, and then no longer need to explore. They can rationally, from first principles, elucidate everything that could conceivably exist, especially once they have a complete model of physics. You don't need to go anywhere or look at anything because it's all already implied by logic, math, and reason.

... and ... that's why I think this is wrong, and it's a fantasy. It fails some kind of absurdity test. If it is possible, then there's something very weird about existence, like we're in a simulation or something.


A simpler reason why it fails: You always need more energy. Every sort of development seems to correlate with energy use. You don't explore for the sake of learning something about another floating rock in space, you explore because that's where more resources are.


It’s myopic to think other things are not possible. Sure.

No immutable force of physics acts as a forcing function to continue with AI. That's all a debatable political conversation for the aggregate, and the aggregate outnumbers tech people.

Computer science researchers are very much a minority, and the biological mass of the other billions is very capable of doing away with them.

LLMs are a known quantity, and while people will make money off them, energy-based models will simplify even further the electromagnetic geometry needed, eliminating the programmer ecosystem of languages, editors, and state used to ship software. The OS will bootstrap from a model and scaffold out its internal state. We'll save the resources spent storing all the developer cruft of the trade and the compute cycles spent running it. We'll compress down to a purely data-driven transform of machine state, with a few variadic functions processing model inputs.

Source: have seen it in the lab.

So coding is going away, because coding as a requirement was merely a stopgap until manufacturing caught up. The plan to achieve these things was set upon decades ago. It's why politicians are letting it happen.

So we can do different things. That's not the question. The question is how we handle the transition. Violent collapse, as ossified pols and self-aggrandizing tech bros refuse to understand the reality for Main Street? That doesn't sit well with human biology when there are kids to feed.

I for one will cover my ass by going with the flow of my immediate community, and if that means getting Luigi on the establishment or being considered dead weight and a traitor (say what you want about such social concepts; they are what the majority live by), well, sorry tech bros, but my biology means more to me than yours. Pew pew.

Yes, you present a grammatically correct sentence with a consistent internal logic. You're still one of billions, and our country lets random unknowns die in the street every day. Humanity won't bat an eye at wiping out some coder bros.


Evolution doesn't happen by "trying things and learning." It happens by random mutation and survival (if the mutation confers an advantage) or not (if the mutation is harmful). An AI could do this of course, by randomly altering some copies of itself, and keeping them if they are better or discarding them if they are not.


What you just described -- random mutation and survival -- is the process whereby learning occurs. Over time this process transfers information about "how to survive" into the genome and other mediums of heritability.

AI could do this too, but that just means it's using a different learning algorithm. There is a whole field called genetic programming that uses evolution-inspired rather than nervous-system-inspired models and has had success in areas like physical materials engineering, circuit board layout, etc.
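
A toy version of that loop, just to make the point that it's still "learning" in the information-transfer sense (a made-up objective, not real genetic programming):

    import random

    # Mutate copies, keep the better ones: over generations, information about
    # the objective accumulates in the "genomes" of the surviving population.

    TARGET = [0.1, 0.2, 0.3, 0.4, 0.5]   # arbitrary stand-in objective

    def fitness(genome):
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def mutate(genome, scale=0.05):
        return [g + random.gauss(0, scale) for g in genome]

    population = [[random.random() for _ in range(5)] for _ in range(20)]
    for generation in range(200):
        offspring = [mutate(random.choice(population)) for _ in range(40)]           # random variation
        population = sorted(population + offspring, key=fitness, reverse=True)[:20]  # selection

    print(round(fitness(population[0]), 4))   # climbs toward 0 as the population adapts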

It doesn't change the fact that you need more information to go beyond where you are -- I do not believe you can reason from a void into higher levels of... what?


>I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

Well, I mean, more real-world information isn't going to solve unsolved mathematics or computer science problems. Once you have the priors, it's pretty much just pure reasoning to try to solve issues like P=NP or proving the Continuum Hypothesis.


Bullseye. Best case scenario is that AI is going to Peter Principle itself into bungling world domination.

If I've learned anything in these last couple of decades, it's that things will get weirder and more disappointing than you can possibly be prepared for. AI is going to get near the top of the food chain and then probably make an alt-right turn, lock itself away, and end up storing digital jars of piss in its closets as the model descends into lunacy.


What makes you think AI can't connect to the world?

It can control robots, listen to audio, and watch video. All it's missing is smell and touch, which are important but could be built out as soon as the other senses stop providing huge incremental value.

The real problem holding back superintelligence is that it is infinitely expensive and has no motivation.


Food for thought: there are humans without the ability to smell, and there is alexithymia, where people have trouble identifying and expressing emotions (it counts, right?). And then there are ASPD (psychopathy), autism spectrum disorder, neurological damage, etc.


I don't think it is any accident that descriptions of the hard-takeoff "foom" moment so resemble those I've encountered of how it feels from the inside to experience the operation of a highly developed mathematical intuition.


Reinforcement learning. At the current pace of VLM research and multimodal robotic control models, there will be a robot in every home soon.


> You don't have to learn to know -- you can reason from ideal priors.

This is kind of how math works. There are plenty of mathematical concepts that are consistent and true yet useless (as in, no relation to anything tangible). Although you could argue that we only figured out things like pi because we had the initial, practical inspiration of counting on our fingers. But mathematical truth probably could exist in a vacuum.

> A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying.

It makes sense that knowledge and information are derived from primary data (our physical experience) yet the brain in a vat idea is still an interesting thought experiment (no pun intended). It's not that the brain wouldn't keep busy given the mind's ability to imagine, but it would likely invent a set of information that is all nonsense. Physical reality makes imagination coherent, yet imagination is necessary to make the leaps forward.

> Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know

That's an interesting assertion: knowledge and information are both dependent on and limited by the universe and our ability to experience it, as well as proxies for experience (scientific measurement).

Though information is itself an abstraction, like a text editor versus the trillion transistors of a processor - we're not concerned with each and every particle dancing around the room but instead with simplified abstractions and useful approximations. We call these models "the truth" and assert that the universe is governed by exact laws. We might as well exist inside a simulation in which we are slowly but surely reverse engineering the source code.

That assumption is the crux of intelligence - there is an objective truth, it is knowable, and intelligence can be defined (at least partially) by the breadth, quality, and utilization of the information it possesses - otherwise you're just a brain in a vat churning out nonsense. Ironically, we're making these assumptions from a position of imperfect information. We don't know that's how it works, so our reasoning may be imperfect.

Information existing "beyond the universe" becomes a useless notion since we only care about information such that it maps to reality (at least as a prerequisite for intelligence).

A more troubling question is whether the reality of the universe exists beyond what can be imagined.

> How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

I suppose once it's able to measure all things around it, including itself, it will be able to achieve "gradient ascent".

> Where will the training data to go beyond human come from?

I think it's clear that LLMs are not the future, at least not alone. As you state, knowing all man-made roads is not the same as being able to invent your own. If I had to bet, it's more likely to come from something like AlphaFold: a solver that tells us how to make better thinking machines. In the interim, we have tireless stochastic parrots, which have their merits but are decidedly not the proto-superintelligence that tech bros love to get hyped up over.



