Generative AI can absolutely extrapolate; that's the whole reason it works.
The whole point of machine learning is to derive the underlying rules relating your input data. Extrapolation is just following those "curves" beyond the bounds of the known data points.
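To make that concrete, here's a toy sketch (all data and numbers invented for illustration): fit a simple "rule" to noisy samples of a function, then evaluate the same fitted curve past the training range.

```python
# A minimal sketch, assuming a toy 1-D problem: derive a "rule"
# (here a cubic polynomial) from noisy samples of sin(x) on [0, 10],
# then follow that curve beyond the bounds of the known data points.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 10, 50)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=3)   # derive the "curve"
inside = np.polyval(coeffs, 5.0)    # query within the known data points
outside = np.polyval(coeffs, 15.0)  # query beyond them: same curve, new region

print(f"at x=5:  predicted {inside:.3f}, true sin(5)  = {np.sin(5.0):.3f}")
print(f"at x=15: predicted {outside:.3f}, true sin(15) = {np.sin(15.0):.3f}")
```

Whether following the fitted curve into new territory counts as genuine extrapolation is, of course, exactly what the rest of this thread argues about.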
Oh, but it can’t; with a sufficiently complex vector space it can merely seem like it can. What looks like extrapolation is interpolation in the semantic vector space, particularly in the transformer / attention model. This is a key difference between human intelligence and current AI: it’s not able to “create” and see beyond what it’s been trained on. Any approximation of that is simply indicative of a very complete training set, one powerful enough to fool people with its expectation-based inference. But when you dig into the details of cutting-edge work you’re an expert in and ask it conceptual questions that extend beyond the semantic corpus embedded in its vector space, it will hallucinate, or, if well fine-tuned, admit a lack of knowledge, because the best it can do is interpolate within its own semantic vector space.
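To show why I keep saying interpolation, here's a stripped-down sketch of scaled dot-product attention (shapes and values are made up): the softmax weights are non-negative and sum to 1, so the output is a convex combination of the value vectors and can never leave their convex hull.

```python
# Toy single-query attention: the output is, mechanically, an
# interpolation -- a convex combination of the rows of V.
import numpy as np

def attention(q, K, V):
    scores = K @ q / np.sqrt(q.size)   # scaled dot-product scores
    w = np.exp(scores - scores.max())
    w = w / w.sum()                    # softmax: w_i >= 0 and sum(w) == 1
    return w @ V                       # convex combination of V's rows

rng = np.random.default_rng(0)
q = rng.normal(size=4)
K = rng.normal(size=(6, 4))
V = rng.normal(size=(6, 4))
out = attention(q, K, V)

# Each coordinate of the output is bounded by the values it has seen:
assert np.all(out >= V.min(axis=0) - 1e-9) and np.all(out <= V.max(axis=0) + 1e-9)
print(out)
```

A real transformer stacks many such layers with feed-forward blocks in between, obviously, but that bounding property at the heart of attention is what I mean by walking within the trained vector space.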
But listen, I’m a big buyer of generative AI; what it does is incredible. It’s just useful not to ascribe more power to a tool than the math allows.
And there are very few machine learning algorithms that do extrapolation at all with any precision. Generally they project an expectation, often of some complex, high-dimensional, nonlinear system, which is amazing, but when they are confronted with a novel input pattern they are thrown off. The issue is that they are, at their core, probabilistic systems, and if the data undergoes an unexpected regime change, the model will misbehave and output garbage.
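A hypothetical illustration of what I mean (all numbers invented): fit a model on one regime, then hand it inputs from a regime it never saw, and it confidently keeps projecting the old expectation.

```python
# Regime-change sketch: a linear model fit on regime A keeps
# projecting regime A's trend even after the world has changed.
import numpy as np

rng = np.random.default_rng(1)

# Regime A: y grows linearly with x on [0, 1]
x_a = rng.uniform(0, 1, 200)
y_a = 2.0 * x_a + rng.normal(scale=0.05, size=x_a.size)
slope, intercept = np.polyfit(x_a, y_a, deg=1)

# Regime B (never seen in training): the relationship saturates at 2.0
x_b = np.array([2.0, 5.0, 10.0])
y_true = np.full_like(x_b, 2.0)        # saturated ground truth
y_pred = slope * x_b + intercept       # the model's confident garbage

for x, t, p in zip(x_b, y_true, y_pred):
    print(f"x={x:5.1f}: true={t:.2f}, predicted={p:.2f}")
```

A deep net does the same thing in a much higher-dimensional space; it just fails less legibly.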
Enlighten us then: how does a generative AI model behave when confronted with data outside its training space? Where in the model does the vector space extend dynamically, driven by some other process, to adapt to new regimes never seen before? Or does it necessarily construct its response by sampling the vector space and, in the case of transformers, applying attention / self-attention to boost or dampen dimensions based on the semantic context? Extrapolation means being able to extend your decision space into new areas through synthesis and creativity; interpolation means walking within the trained vector space of the model. Clearly, generative AI models as implemented today can’t extrapolate; they always interpolate.
I think the confusion comes from the idea that you can take a regression or expectation, extend it into the future, and call that extrapolation. It isn’t; it’s still interpolation. You’re interpolating between a and a’ using the same function. Extrapolation takes the new regime and data, plus your existing training, and adapts a new behavior. We don’t really understand how humans do this, and we don’t have any machine learning models that can.
To be clear, again, I’m not pooh-poohing ML or generative AI. I think it’s the most powerful thing we’ve created with computers so far. But it’s far from general intelligence, even if it’s a necessary part of it.
>Enlighten us then: how does a generative AI model behave when confronted with data outside its training space?
It behaves just fine.
>I think the confusion comes from the idea that you can take a regression or expectation, extend it into the future, and call that extrapolation.
Congratulations, you've just defined extrapolation. Someone is definitely confused here, but it isn't me.
Of course you can make any claim about what something can or can't do when you make up your own definitions.
There are many, many clear examples of a language model extrapolating. Rather than accept this, you've opted to conjure up vague and meaningless definitions and distinctions on the fly.
This is so simple to see: untestable definitions are meaningless. Please give us a test of "extrapolation" that all humans can perform, and let's see how the language model does. You won't be able to, but by all means, give it a go.
>Extrapolation takes the new regime and data, plus your existing training, and adapts a new behavior.
By this metric, a large number of humans can’t extrapolate either. In fact if you imagine your first paragraph were written about humans, it lines up pretty well.
Except all humans can extrapolate even if they don’t. Current generative models fundamentally cannot, even if you want them to.
However, I would hold that I can prove you’re wrong. Have you ever seen a human play make-believe when they’re young? Or draw? You’re judging humanity by the post-indoctrination crushing of the soul for profit. But every human being, no matter how rigid and unthinking as an adult, was a creative genius at age four.
I wouldn’t go this far, but I would say the “a lot of humans can’t either” argument in LLM conversations is a bit worn by now. Where it’s true (hallucinating at the edges of certain knowledge, solving math and logical reasoning through approximation and, most likely, thinking) and where it’s not, it’s all been said many times already.
The key, though, is that in these discussions “most humans” isn’t a very useful comment when the claim under debate is about all AI. The comment, even if true, acknowledges that there exist some humans who do extrapolate; it doesn’t refute that all AI don’t, so it doesn’t advance the discussion much. In a parallel comment I pointed out that all humans can extrapolate even if they don’t appear to, and further asserted that all humans have, even if they don’t currently or consistently, so they exist as a class distinct from generative AI in this space of thinking and reasoning.
Huh? I disagree with the premise, and I explained why.
Reading over your comments for the last few days, you seem consistently aggressive. If you need to vent to someone about something, you can DM me. Happy to just listen.
If you mean when people who are being dicks make obvious, glaring mistakes and can't handle having them pointed out, I think the word you're looking for is "impatient". This community's standards are higher than the way you're participating. Have some dignity and bring your best side forward, not this petty sass.
You know they're talking about the fundamental differences in learning between advanced, specialized biological systems (humans) and relatively rudimentary digital ones (LLMs). You're not explaining your disagreements; you're just demonstrating your disdain for some implied lower-quality humans. That's called "being a dick".
What on earth are you on about? You seem to have unilaterally declared that I was being sassy, when in fact this exists nowhere other than in your own head. Then you run around like a sheriff on a power trip, ranting about protecting the community from us sassy brats.
Brass tacks: you need to stop what you’re doing, or you’re going to get yourself penalized by the mods. They have a duty to protect the intellectual curiosity of the site. Trust me when I say it’s no fun to be in the penalty box.
You can start by re-reading the guidelines and paying particular attention to "don’t cross-examine," along with realizing that it’s not okay to be calling people a dick multiple times when they’re engaging in good faith.
Your call. Either way, I wash my hands of you and this conversation.
"a large number of humans can’t extrapolate either" demonstrates that you're operating under the belief that there are people whom you consider as less-than. What's good faith about that? You don't really have the high horse you think you do here.
You know, you're not the first one to invent and deploy the plausible-deniability-/just-being-civil-style sass. Maybe I'm the first one to call you out on it as transparent, though.
It's ironic that your accusations are followed immediately by your playing mods' deputy. Don't worry, they already know me, and I know the limits past which they choose to intervene.
Again, I hope you can understand that self-awareness and good faith are pillars of fruitful conversation here. It helps no one to avoid acknowledging the kinds of rhetoric and tactics you engage in. Good faith entails understanding the fundamentals of the belief systems you portray and propagate, or at least being humble when you don't.
At no point have you made a coherent reply to anything that I’ve said. You haven’t explained your position. You haven’t explained what you’re arguing against. And I have not one clue what you could possibly mean by “people whom you consider as less-than." And I don’t care to know, because you’re off in la la land fighting the good fight against demons that exist only in your dreams. I’m trying to snap you out of it. Far from being on a high horse or playing mods’ deputy, I was trying to look out for you as one community member to another.
You are speaking to someone who has been active in this community since day two of its public launch, back when it was called Startup News. Normally I don’t appeal to authority, but I’m hoping that whatever delusion you’re under will be dispelled by the realization. When you say I’m not operating in good faith, not only is that mistaken, but it’s plainly mistaken. You can ask any of the hundreds of people I’ve engaged with over the years whether I have ever once done what you seem convinced I’m doing here.
Since self preservation doesn’t seem to rank highly on your priority list, I urge you to chill out before you get to know the mods a lot better than you currently do. Because my instincts are screaming “this person is on a collision course with the mods, and they’ll be busting out the paddle sooner or later." Just because they don’t notice you breaking the rules doesn’t mean diddly-squat. It just means they’re not omniscient, and if you keep rolling the dice like this you’re going to hit snake eyes sooner or later.
I’m going to sleep. I have a newborn to care for. I’ve tried to help you as much as I can. I genuinely wish you the best of luck, and I hope you’ll stop going around poisoning otherwise interesting conversations with baseless accusations. It’s a huge distraction, an emotional drain, and does a lot more harm than whatever you think I was doing above.
It’s fine to disagree with someone. Running around calling them a sassy dick three times in a row isn’t disagreeing. One tip to avoid such comments is to wait until you feel curious about something rather than fulminating. It works for me at least.
Thanks for inspiring me to use Sassy Dick as my porn name if I ever get into the industry though.
Something you appear to be confused about is that I'm not disagreeing with your opinion. I am calling your opinion morally abhorrent by disagreeing with its very premise.
You don't have to worry yourself about my preservation.
A more productive use of your time would be to focus on actually understanding the details of the topic at hand; then maybe you'll comprehend a little better the anti-humanistic nature of your (evidently so implicit as to be invisible to you) bias. To repeat myself: self-awareness is mandatory in good-faith discussions.
It can to a point, but it can’t to the extent that humans can with sufficiently complex problems.
Deep learning models like this can, in theory, approximate pretty much any problem that can be expressed as a function.
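As a rough illustration of that approximation claim (the library choice and hyperparameters here are arbitrary), a small MLP can be fit to samples of an arbitrary 1-D function:

```python
# Sketch of the universal-approximation idea: a small MLP learns an
# arbitrary smooth function from samples, within the sampled range.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 0] ** 2   # the "problem", as a function

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(X, y)

X_test = np.array([[0.0], [1.5], [-2.0]])
print(model.predict(X_test))                               # learned estimates
print(np.sin(2 * X_test[:, 0]) + 0.3 * X_test[:, 0] ** 2)  # ground truth
```

The catch for driving is in the next paragraph: this only works if such a function exists and you can sample it densely enough.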
It’s entirely possible that there just doesn’t exist a function from visual data (maybe even including LIDAR, RADAR, etc.) to correct driver decisions.
Humans can also intuit the behavior of other humans to an extent, even while driving (knowing that someone who is driving erratically is probably fucked up and will be dangerous to stay near). Kind of like a really shitty gossip protocol.
It can only approximate a function in the regions of feature space where it has seen data. For anything it hasn’t seen features for, it will do some maladapted interpolation through the feature space it was trained on. It can’t be creative or synthesize a novel technique based on more abstract reasoning over the new regime; it literally must fit its past observations to the new regime as best it can. Humans certainly do that too, but they are also able to step back and synthesize completely new behaviors given completely new data, not just adapt old behavior because some optimization function says that behavior is most appropriate in the new situation.
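A nearest-neighbour model makes this literal (toy data, k picked arbitrarily): any query, however novel, is answered by blending stored past observations.

```python
# Fitting past observations to a new regime, by construction:
# k-NN can only average answers it has already memorised.
import numpy as np

X_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.0, 1.0, 4.0, 9.0])   # y = x^2, sampled sparsely

def knn_predict(x, k=2):
    # average the k nearest memorised answers
    nearest = np.argsort(np.abs(X_train - x))[:k]
    return y_train[nearest].mean()

print(knn_predict(1.5))    # between seen points: 2.5 vs true 2.25 -- fine
print(knn_predict(10.0))   # novel regime: 6.5 vs true 100 -- maladapted
```

A trained network generalizes far more gracefully than this, but the in-principle limitation is the same: it has nothing to answer with except its training distribution.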
People are confused because interpolation is actually fairly powerful and often entirely sufficient. GPT-4 especially is trained on such a large and varied corpus that it handles many things well, even unexpected things, and at times seems to be extrapolating. But it still hallucinates, and hallucinations are the most obvious symptom of its inability to extrapolate. It’s just fitting within its trained vector space as best it can.