I don’t get what people are trying to say when they say these kinds of things about AI. That human-level writing is as simple as a linear regression? That we could’ve had computer programs capable of human-level writing decades ago? Have they not used these AIs enough to see how powerful they are? Are they seeing the bad outputs and thinking that AIs are always doing that poorly?
Like seriously, if you’re telling me that it was obvious that a “linear regression” could pass the LSAT I’ve got a macvlan to sell you.
Formally it's a generalized linear model with a constructed feature set.
A "kitchen sink" regression with enough polynomial terms (x^2, x^3, etc.) and interaction terms (ab, (ab)^2, etc.) will be a function approximator the same way a neural net is.
The computational mechanics are different (there's a reason we don't use it), but in the land of infinite computational power it can be made equivalent.
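To make that concrete, here's a rough sketch in Python (untested, the names and the degree choice are mine): ordinary least squares over a pile of polynomial features happily fits an obviously nonlinear target.

```python
# "Kitchen sink" regression sketch: plain least squares over
# polynomial features approximating a nonlinear function.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)  # nonlinear target + noise

degree = 12
X = np.vander(x, N=degree + 1, increasing=True)  # columns: 1, x, x^2, ..., x^12

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear regression on those features
y_hat = X @ coef

print(f"max abs error vs. true curve: {np.max(np.abs(y_hat - np.sin(3 * x))):.3f}")
```

Linear in the coefficients, sure, but the constructed features are doing all the work, which is the point.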
> The computational mechanics are different (there's a reason we don't use it), but in the land of infinite computational power it can be made equivalent.
In the land of infinite computational power, every computation is just a series of 1s and 0s being added and subtracted; you can implement everything with just a few basic operations. But we don't live in a land of infinite computational power, and it took us (as humanity) quite a while to discover things like transformer models. If we'd had the same hardware 10 years ago, would we have discovered them back then? I very much doubt it. We didn't just need the hardware; we needed the labelled data sets, the prior art in smaller models, etc.
Personally, I think current AI/ML (LLMs, ESRGANs, and diffusion models) has huge potential to increase people's productivity, but it won't happen overnight and not for everyone. People have to learn to use AI/ML.
This brings me to the "dangers of AI". I laugh at the idea that "AI will become sentient and take over the world", but I'm genuinely fearful of a world where we've become so used to AI delivered by a few "cloud providers" that we cannot do certain jobs without it. Just as you can't be a modern architect without CAD software, there may come a time when you won't be able to do any job without your "AI assistant". Now, what happens when there is essentially a monopoly on the market for "AI assistants"? The providers will start raising prices, to the point where your "AI assistant" bill may one day cost you more than your taxes, and you'll have a choice between paying up or not working at all.
This is why we have to run these models locally and advance local use of them. Yes, (not at all)OpenAI will give you access to a huge model for a fraction of the cost, but it's like the proverbial drug dealer who gives you the first hit for free: you'll more than make up for the cost once you're hooked. The real "danger of AI" is that it becomes too centralised, not that it becomes "uncontrollable".
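For what it's worth, running a model locally is already pretty approachable. A minimal sketch using Hugging Face transformers (assuming you have `transformers` and `torch` installed; "gpt2" is just a stand-in for whatever model you actually want):

```python
# Local text-generation sketch: after the one-time model download,
# inference runs entirely on your own machine, with no metered API bill.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # swap in any local model

out = generator(
    "Running language models locally matters because",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```

Small local models are nowhere near the hosted giants, but they're yours, and closing that gap is exactly the "advance local use" part.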
Edit: LLMs are also literally not linear regression! https://youtu.be/Ae9EKCyI1xU?feature=shared