You might think the people doing politics are manipulative ladder climbers, but they're climbing the same ladders available to you, so you should be one too.
Well, yeah, they're worms. People are irrationally afraid of all sorts of stuff; that doesn't make everything "nightmare fuel". They're just worms that look like worms.
>one's thinking power is diminished over time by interacting with LLMs etc.
Sometimes I reflect on how much more efficiently I can learn (and thus create) new things because of these technologies, then feel a twinge of anxiety when I project that onto everyone else being similarly more capable.
Then I read comments like this and remember that most people don't even want to try.
I don't think there's anything I could tell you about the companies I've built that would dissuade you from your perspective that everyone is as intellectually lazy as your projection suggests.
Not GP, but I really want to know how your process is better than anyone else's. People have produced quite good software (as in, software that solves real problems) on CPUs less powerful than the one in my smart plug. And the principles behind that work still define today's world.
My companies have posted an awful lot of job ads in earnest over the years that haven't found suitable candidates, despite an absolute barrage of slop resumes.
Yes, it was a sad moment, years ago now, the day I realized that I'd likely never travel to China. Sadder still to acknowledge I'd feel safer doing so than entering the modern US.
No matter how right the AI crowd is, elements of it are/will be a bubble.
No matter how right the bubble crowd is, the market becoming irrationally exuberant for a brief period of time does not invalidate the technology or the rapid change we'll see as a result of it.
Yeah, this is what all the bubbleists miss. We got FAANG++ out of the last bubble, and they literally rule the world with the tech that was promised during said bubble. Catsdotcom and dogsdotcom failing had 0 impact on the tech itself.
It's the same today. A lot of the hyper-VC-funded startups will fail, without a doubt. But the tech giants will get their money from this tech, and it will be ubiquitous in ways we can't even imagine now.
I'm not sure that the dotcom bubble leaving behind the web means that AI (and by AI I mean LLMs) is going to be transformative in the same way the internet was. Just because it happened once doesn't necessarily mean it's going to happen again.
To be clear, I don't think LLMs are going to vanish; there are clearly some things they're good for. But there are also some really big differences between the AI bubble and the dotcom bubble. People are very skeptical of and worried about AI, the general public isn't really using it for anything more than search, there hasn't been any clear economic data indicating that it improves productivity in a meaningful way, and a killer app hasn't really emerged that isn't a free chatbot. Plus, the majority of the money is concentrated in the same 5 or 10 companies, passed back and forth among them before eventually being handed over to NVIDIA. Maybe I was just too young to pay attention to the dotcom bubble, but the vibe seems completely different.
I think the comparison between AI and the web is interesting. I still feel like the AI story rests on an assumption that it's going to surmount a gap it may not surmount.
The internet basically just got gradually better along with everything around it: better graphics, faster connections, better programming languages, bigger hard drives.
But we still don't know what LLMs are. They're maybe just a feature, not a whole suite of technologies creating an ecosystem, and the idea that they'll make the leap from Fancy Lisa AI Companion to colleague is fanciful. I don't think there's a direct line showing that up-and-to-the-right automatically means superintelligence.
I think my hope is that interviews for ML engineering would focus on domain knowledge specific to ML and not intricate questions about using TypeScript and React, which is the sort of stuff I simply cannot bring myself to care about enough to memorize well enough to discuss in an interview.
The prerequisite is a basic understanding of matrices and stats. Then go through a course on machine learning (supervised, unsupervised, deep, and reinforcement learning). That's the theory side; easy if you know your math and Python.
The practical side is endless tweaking of the data and the models, and keeping yourself informed about new models and techniques (through scientific papers).
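For what it's worth, here's a minimal sketch of what that practical loop tends to look like, assuming scikit-learn and a toy built-in dataset (nothing from any particular course); almost all the real effort goes into the data prep and the parameter grid, not these few lines.

```python
# Minimal supervised-learning sketch: scale features, fit a classifier,
# and do the "endless tweaking" via a small hyperparameter grid search.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline is the "model"; the grid is where the tweaking lives.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
search = GridSearchCV(pipe, {"logisticregression__C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

print("best C:", search.best_params_)
print("cross-validated accuracy:", search.best_score_)
print("test accuracy:", search.score(X_test, y_test))
```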
Thanks! I think I understand the nature of the job okay; it's more that I'm concerned about my ability to realistically get there in 6-12 months.
And chess players stream as their primary income, because there's no money in chess unless you're literally the best player in the world (and even then the money comes from sponsors/partners, not from chess itself).
My personal framing of "Agents" is that they're more like software robots than they are an atomic unit of technology. Composed of many individual breakthroughs, but ultimately a feat of design and engineering to make them useful for a particular task.
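To make that framing concrete, here's a hedged sketch of what I mean: the "agent" is mostly design and plumbing, a loop that composes a model call with a small tool registry for one task. `model` is any callable (a real LLM client or, as below, a fake), and the tool names are placeholders, not any particular vendor's API.

```python
from typing import Callable

def run_agent(model: Callable[[str], str],
              tools: dict[str, Callable[[str], str]],
              task: str,
              max_steps: int = 5) -> str:
    # The loop: ask the model for an action, dispatch it to a tool,
    # feed the result back, and stop on a final answer or a step limit.
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        action = model("\n".join(history))  # e.g. "lookup some query" or "FINAL: ..."
        if action.startswith("FINAL:"):
            return action.removeprefix("FINAL:").strip()
        name, _, arg = action.partition(" ")
        result = tools.get(name, lambda a: f"unknown tool: {name}")(arg)
        history.append(f"{action} -> {result}")
    return "ran out of steps"

# Tiny fake model just to show the loop turning over.
def fake_model(context: str) -> str:
    return "FINAL: done" if "->" in context else "lookup answer"

print(run_agent(fake_model, {"lookup": lambda q: "42"}, "answer the question"))
```

The point of the sketch is that none of the individual pieces are new; the usefulness comes from how the loop, the tools, and the task-specific plumbing are engineered together.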