I can't believe this is so unpopular here. Maybe it's the tone, but come on, how do people rationally extrapolate from LLMs, or even large multimodal generative models, to "general intelligence"? Sure, they might do a better job than the average person on a range of tasks, but they remain prone to odd failures almost by design (train-vs-test distribution mismatch). They might combine data in interesting ways you hadn't thought of; that doesn't mean you can rely on them the way you rely on a genuinely intelligent human.
I think it’s selection bias - a Y Combinator forum is going to have a larger percentage of techno-utopians than general society, plus many people seeking financial success by connecting with a trend at the right moment. It seems obvious to me that LLMs are interesting but not revolutionary, and equally obvious that they aren’t heading toward any kind of “general intelligence”. They’re good at pretending, and only good at that to the extent that they can mine what has already been expressed.
I suppose some are genuine materialists who believe that, ultimately, that is all we are as humans: a reconstitution of what has come before. I think we’re much more complicated than that.
LLMs are like the pool in the myth of Narcissus: they hypnotically reflect our own humanity back at us.