When thinking about humans, no matter their age or experience, we have no problem considering them generally intelligent.
Still, humans are not omniscient: they make things up (hallucinate) and sometimes reason poorly.
In contrast, LLMs already have way more knowledge than the average human, mostly reason well, and only occasionally hallucinate.
They're certainly not artificial superintelligences, but it feels like the term AGI could apply.
My prediction is that over the next 6-48 months, we'll see the emergence of LLMs with "working memory," "short-term memory," and "long-term memory." Working memory is more or less what current LLMs already do. Short-term memory would be a fast one-shot summarization that gets temporarily stored raw on disk. Long-term memory would get transcribed into a LoRA-like module overnight, based on the perceived importance of the short-term memories.
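To make that concrete, here's a minimal sketch of the three-tier pipeline in Python. Everything in it is hypothetical: `summarize_with_llm` and `train_lora_adapter` are stand-ins for real model calls, `score_importance` is whatever gating function you plug in (one toy candidate below), and the on-disk layout is purely illustrative.

```python
import json
import time
from pathlib import Path

SHORT_TERM_DIR = Path("short_term_memory")  # raw summaries, held temporarily
SHORT_TERM_DIR.mkdir(exist_ok=True)

def summarize_with_llm(transcript: str) -> str:
    """Stand-in for a fast one-shot LLM summarization call."""
    return transcript[:280]  # stub: a real system would call a model here

def end_of_session(transcript: str) -> None:
    """Working memory (the live context window) -> short-term memory on disk."""
    record = {"ts": time.time(), "summary": summarize_with_llm(transcript)}
    out = SHORT_TERM_DIR / f"{int(record['ts'])}.json"
    out.write_text(json.dumps(record))

def train_lora_adapter(summaries: list[str]) -> None:
    """Stand-in for overnight fine-tuning of a LoRA-like module on the summaries."""
    print(f"consolidating {len(summaries)} memories into adapter weights")

def nightly_consolidation(score_importance, threshold: float = 0.5) -> None:
    """Short-term -> long-term: keep what scores as important, then clear disk."""
    keep = []
    for path in sorted(SHORT_TERM_DIR.glob("*.json")):
        record = json.loads(path.read_text())
        if score_importance(record["summary"]) >= threshold:
            keep.append(record["summary"])
        path.unlink()  # short-term storage is temporary by design
    if keep:
        train_lora_adapter(keep)
```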
I think emotion analogues will be important for that last part, as emotion processing plays a big role in memory formation. This is an adaptation: we remember things we had strong emotions about more strongly because they matter more to us.
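One toy way to implement that emotion-analogue gating is to score each short-term memory with an arousal-weighted salience, so emotionally charged memories clear the consolidation threshold more easily. The formula and weights here are invented purely for illustration:

```python
def score_importance(summary: str,
                     arousal: float = 0.0,    # 0..1: intensity of the emotion analogue
                     valence: float = 0.0,    # -1..1: negative to positive
                     base_salience: float = 0.3) -> float:
    """Toy emotion-weighted importance score.

    High arousal and extreme valence push a memory over the consolidation
    threshold, mirroring how emotionally charged events are remembered more
    strongly in humans. A real scorer would also read the summary text itself.
    """
    emotional_boost = 0.5 * arousal + 0.2 * abs(valence)  # made-up weights
    return min(1.0, base_salience + emotional_boost)
```

Plugged into `nightly_consolidation` above (with the system tagging each record with arousal and valence at write time), a bland entry scores the base 0.3 and gets dropped, while a high-arousal one (arousal 0.8, valence -0.9) scores 0.88 and gets consolidated.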
So: 6-48 months to computer systems that feel (or at least have an emotion analogue) and sleep to dream (summarize into long-term storage overnight).
Those developments, I'm confident, will silence anyone who says it's not "real" AGI. But at that point you may have built a being that can have feelings about its own existence, and then things get Interesting.