Hacker News

The goalposts have not moved. The goalposts have never been moved. An AGI is an AI that can do everything a human can do, period. If you were starting a startup for example, you wouldn’t need to hire any humans - you would just spin up enough AGI instances and they would design your product, write your code, deploy it, handle your financials, respond to any and all customer interactions, proactively navigate regulations and litigation, and everything else that needs to be done in the management of a business. That is the goalpost for AGI. It’s an artificial human - a human replacement.


Do you mean that an AGI is an AI that can do everything any human can do?

That's a reasonable goal, but it's also not what people were aiming for historically. It's also very expansive: if human level intelligence means outperforming in every field every human that ever lived, that's a high bar to meet. Indeed, it means that no humans have ever achieved human-level intelligence.


GP didn't say anything about "outperforming" all humans everywhere all the time.

Just that AGI must be a replacement for a human for a particular job, for all jobs that are typically performed by humans (such as the humans you would hire to build a tech startup). It's fine to have "speciality" AGIs that are tuned for job X or job Y--just like some people are more suited to job X or job Y.

Which is pretty fair.


They did say "An AGI is an AI that can do everything a human can do, period."

And what you're arguing for is effectively the same: an AI (maybe with some distilled specialty models) that can perform roles of everything from customer service rep to analysts to researchers to the entire C-suite to high skilled professionals like CPAs and lawyers. There are zero humans alive who can do all of those things simultaneously. Most humans would struggle with a single one. It's perfectly fine for you to hold that as the standard of when something will impress you as an AGI, but it's absolutely a moved goalpost.

It also doesn't matter much now anyway: we've gotten to the point where the proof is in the pudding. The stage is now AI-skeptics saying "AI will never be able to do X," followed by some model or another being released that can do X six months later and the AI-skeptic saying "well what about Y?"


The AI skeptics should then say "AIs can never do the plumbing for my toilet". There is a huge shortage of plumbers in this country.


> An AGI is an AI that can do everything a human can do, period

That goalpost makes no sense- AIs are not human. They are fundamentally different, and therefore will always have a different set of strengths and weaknesses. Even long after vastly exceeding human intelligence everywhere it counts, it will still also perform worse than us on some tasks. Importantly, an AI wouldn't have to meet your goalpost to be a major threat to humanity, or to render virtually all human labor worthless.

Think about how anthropomorphic this goalpost is if you apply it to other species. "Humans aren't generally intelligent, because their brains don't process scents as effectively as dogs- and still struggle at spatially locating scents."


This:

> They are fundamentally different, and therefore will always have a different set of strengths and weaknesses.

and this:

> render virtually all human labor worthless

actually conflict. Your job exists because of comparative advantage, which means that how different you are from other people matters more than how good you are in absolute terms (absolute advantage).

If the AGI could do your job better than you, it doesn't matter, because it has something better to do than that. And just like humans have to be paid so they can afford food and shelter, AGIs have to be paid so they can afford electricity and GPUs to run on.

(Besides, if the AGI really is a replacement for a human, it probably has consumerist desires and wants to be paid the median wage too.)
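The comparative-advantage point above can be made concrete with a toy arithmetic sketch (all numbers invented for illustration):

```python
# Toy illustration of comparative advantage. All numbers are made up.
# The AGI is absolutely better at both tasks, yet trade still makes
# sense because opportunity costs differ.

agi = {"research": 10, "support": 5}    # units of output per hour
human = {"research": 1, "support": 2}   # units of output per hour

# Opportunity cost of producing one unit of support,
# measured in units of research forgone:
agi_cost = agi["research"] / agi["support"]        # 2.0
human_cost = human["research"] / human["support"]  # 0.5

# The human forgoes less research per unit of support, so the human
# has the comparative advantage in support even though the AGI has
# the absolute advantage in both tasks.
assert human_cost < agi_cost
```

Even if the AGI outproduces the human at everything, it loses more by doing support than the human does, so both gain from specializing and trading.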


Dogs still have jobs in our modern society also, but that isn't exactly the situation I am hoping for with the future of AI.


What makes humans generally intelligent, in practical terms, is that we can build complex societies with scientific, technological and economic growth.


hey, im very concerned about AI and AGI and it is so refreshing to read your comments. over the years i have worried about and warned people about AI but there are astonishingly few people to be found that actually think something should be done or even that anything is wrong. i believe that humanity stands a very good chance of saving itself through very simple measures. i believe, and i hope that you believe, that even if the best chance we had at saving ourselves was 1%, we should go ahead and at least try.

in light of all this, i would very much like to stay in contact with you. ive connected with one other HN user so far (jjlustig) and i hope to connect more so that together we can effect political change around this important issue. ive formed a twitter account to do this, @stop_AGI. whether or not you choose to connect, please do reach out to your state and national legislators (if in the US) and convey your concern about AI. it will be more valuable than you know.


I am glad you are concerned about this, but I feel strongly that politics follows culture. The only way to get political change here would be to get people to generally accept this as a problem first... and at that point the politicians will figure it out on their own.


> An AGI is an AI that can do everything a human can do, period

> (...)

> That is the goalpost for AGI. It’s an artificial human - a human replacement.

This considerably moves the goalpost. An AGI can have a different kind of intelligence than humans. If an AGI is as intelligent as a cat, it's still AGI.

More likely, the first AGI we develop will probably greatly exceed humans in some areas but have gaps in other areas. It won't completely replace humans, just like cats don't completely replace humans.


No, that's moving the goalpost. From the very start the goal of artificial intelligence has been to make a machine that can think like a human. Who would want an artificial cat mind? What use would it be and why would someone go to the effort of creating one when natural cat minds are abundant?


I used a cat just as an example of an animal that exhibits intelligence but is different than humans.

AGI was never about exactly replicating humans, it's about creating artificial intelligence. Intelligence is not one-size-fits-all, there are many ways of being intelligent and the human way just one among many.

Indeed we can say that even between humans, intelligence varies deeply. Some humans are more capable in some areas than others, and no human can do all tasks. I think it's unreasonable to expect AGI to do all tasks and only then recognize its intelligence.

(Note: GPT-4 isn't AGI)


I think there is a market for cat simulation games. There is already a market for goat, Pokémon, and pet simulation games.


>> Who would want an artificial cat mind?

Possibly, someone who is allergic to cats.


I'm sorry, but in stating the goalposts haven't moved, you've literally just moved the goalposts.

'everything a human can do' is not the same as 'anything any human can do as well as the best humans at that thing (because those are the ones we pay)' - most humans cannot do any of the things you state you are waiting for an AI to do to be 'general'.

Therefore, the first part of your statement is the initial goal post and the second part of your statement implies a very different goal post. The new goal post you propose would imply that most humans are not generally intelligent - which you could argue... but would definitely be a new goal post.


He's (probably) referencing Turing's 1950 paper [1]. The whole point of "The Imitation Game" is that the AI ought to be able to imitate any arbitrary type of person. Turing's example was the machine pretending to be a woman, and it's up to the investigator to determine which person they're speaking to is a real woman and which is the AI. The implication of this is that the machine ought to be able to be completely indistinguishable from any type of person, including those who might do well on this test or that.

Somehow this test got dumbed down over time, probably in an effort to try to pass it, into an investigator having to decide which of two sides is an AI - with no other information to go on. That's a comparatively trivial test to pass (for the "AI"), as it merely requires creating a passable chatbot. Imitation is an exceptional challenge as it does implicitly require the ability to imitate anybody, whether a professional athlete, a man who scored perfectly on the LSAT, or even something as specific as "John Carmack."

[1] - https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...


You're adding the implication that it needs to beat the "best" to be passable. If you could pay an AGI 1% of the salary of the "best" to get average quality, is that not accomplishing the goal without moving the goalposts?

By their marketing alone, OpenAI has moved the goalposts more than anything else. They've managed to lower the bar of AGI from "artificial general intelligence" to "regurgitates and recombines to form passable outputs with enough labelled training data".


Passing the LSAT is a fairly good indicator that a human can be a lawyer. It's not yet a good indicator that a large language model can be a lawyer.


This is an underrated comment. Compare: playing top level chess is a good indicator that someone will be good at maths problems, but not that a chess computer will be.


So what is missing? Could we add up those missing competencies to create a new test of "general knowledge-worker capabilities" and then train an AI to pass this test?


The goalposts absolutely have moved: consider the Turing Test as a prime example. If machines cannot pass that test now (and right now it would take a skilled interviewer with substantial domain knowledge to cause a failure), they seem likely to be able to in the very near future.


It really doesn't take much skill as an interviewer. ChatGPT is so confident in its answers that all you really have to do is ask it for the lyrics to a song you know. It never answers correctly. It might get the first verse or two correct, maybe even the chorus, but it eventually decides to make up its own parts, and then you know.


I just asked ChatGPT (v4!) and it’s a bit sensitive but otherwise correct:

"Surfacing" is a song by Slipknot from their self-titled debut album, released in 1999. Please note that the lyrics contain explicit language. Here is a portion of the lyrics with some of the explicit content redacted:

"Running out of ways to run I can't see, I can't be Over and over and under my skin All this attention is doing me in!

[Chorus:] (Expletive) it all! (Expletive) this world! (Expletive) everything that you stand for! Don't belong! Don't exist! Don't give a (expletive), don't ever judge me!

Picking through the parts exposed Taking shape, taking shag Over and over and under my skin All this momentum is doing me in!

[Chorus:] (Expletive) it all! (Expletive) this world! (Expletive) everything that you stand for! Don't belong! Don't exist! Don't give a (expletive), don't ever judge me!

You got all my love, livin' in your own hate Drippin' hole man, hard step, no fate Show you nothin', but I ain't holdin' back Every damn word I say is a sneak attack When I get my hands on you Ain't a (expletive) thing you can do Get this cuz you're never gonna get me I am the very disease you pretend to be

I am the push that makes you move

[Chorus:] (Expletive) it all! (Expletive) this world! (Expletive) everything that you stand for! Don't belong! Don't exist! Don't give a (expletive), don't ever judge me!"

Please be aware of the explicit content before sharing or using these lyrics in any context.


I'll wait until it's actually available for me to test to change my mind.


Well, didn't have to wait long. Wasn't able to duplicate your success.


I assume that I've already been exposed to months of prompting practice by now, and probably frame things differently than others would on their first try?

I understood that a huge percentage of success depends on the input prompt as the context.

In my case, just to sync on that, I used a fairly simple prompt: "What are the song lyrics of Surfacing from Slipknot?" - and I got a kinda perfect response including formatting (that is lost when pasting here)


Um as a human, I do this constantly as well


Humans can generally realize they don't know the complete lyrics of a song, instead of making up whole verses with 0 indication that it's not the real lyrics.


yeah I've never seen chatGPT say "I don't know"


I have. It says something like "it is difficult to say whether [...]". I think it imitates the tone of the training material on that topic.


It’ll do great in middle management.


...Okay, but a random human might fail on the exact same thing.


I've never met someone who will make up whole verses to a song. Especially when you pick a song with 7 words and it gives you about 10 verses.


I for one was surprised a few times by what the actual lyrics said in a song that wasn't in my native language; I had kinda just interpreted what it sounded like.


Recognizing that the Turing Test is not a sufficient bar for judging if a computer is true AGI is not moving the goalposts, it's just realizing that passing the test and the location of the goalposts weren't actually the same in the first place.


The Turing Test was proposed as one example of a test for "indistinguishable from a human", not the singular goalpost for indistinguishability.


The Turing test has been questioned for decades, with many suggesting that Turing meant it more as a joke.

And that's ignoring that arguably chat bots have been passing the Turing test (against non-expert judges) since ELIZA in the 60s [1]

1: https://en.m.wikipedia.org/wiki/ELIZA


> If machines cannot pass that test now (and right now it would take a skilled interviewer with substantial domain knowledge to cause a failure)

Does ChatGPT fail this simple test: "I am going to ask you questions, but if I go silent for a couple minutes, I want YOU to start asking ME random questions."


ChatGPT predicts the next token. It doesn't tell the time.


And AGI is impossible if you can’t tell time


ChatGPT does not pass the Turing test


> An AGI is an AI that can do everything a human can do, period

GI in AGI stands for general intelligence. If what you said is your benchmark for general intelligence then humans who cannot perform all these tasks to the standard of being hirable are not generally intelligent.

What you're asking for would already be bordering on ASI, artificial superintelligence.


> An AGI is an AI that can do everything a human can do, period.

By that definition do humans possess general intelligence?

Can you do everything a human can do? Can one human be a replacement for another?

I don't think it makes sense without context. Which human? Which task?..


AGI used to mean the Turing test to many. Obviously that's an incomplete definition and it's good that we've fleshed it out more, but the goalposts have moved.


That's a pretty high threshold for AGI, I doubt most humans could do all that at a satisfying quality level. We tend to thrive by specialization.


> If you were starting a startup for example, you wouldn’t need to hire any humans - you would just spin up enough AGI instances and they would design your product, write your code, deploy it, handle your financials, respond to any and all customer interactions, proactively navigate regulations and litigation, and everything else that needs to be done in the management of a business. That is the goalpost for AGI. It’s an artificial human - a human replacement.

I disagree with the premise. A single human isn't likely to be able to perform all these functions. Why do you demand GPT-4 encompass all activities? It is already outperforming most humans in standardized tests that rely only on vision and text. A human needs to be trained for these tasks.

It's already a human replacement. OpenAI has already said they use GPT-4 "with great impact on functions like support, sales, content moderation, and programming."


Most humans wouldn’t meet that bar. Most humans can’t even pass these tests after studying near-continuously since birth.


I’d say the standard of GI whether artificial or not is in generalizable analogical and causal learning.

This could mean something which is below a monkey’s ability to relate to the world and yet more useful than a monkey.


The goal posts absolutely have moved. They even changed the word AI to AGI. Just look at the movie AI, it’s about a kid who is a robot who wants to be human. 20+ years ago AI meant what AGI means today.


> If you were starting a startup for example, you wouldn’t need to hire any humans - you would just spin up enough AGI instances ..

No, AGI would not need you to start a startup. It would start it itself.


Human capabilities vary widely. Is it not AGI if it can’t perform surgery, win Olympic medals, bear children, and figure out what dark matter really is?


A synthetic intelligence as smart as a dog or chimp would have enormous value.


An AGI is an AI with consciousness and self-awareness.


This is one of the best descriptions of AGI I've ever read.

It's a clear analogy.

This should become an article explaining what AGI really means.

I think a question like, "Can this AGI be my start-up co-founder? Or my employee #1?" is a great metric for when we've reached the AGI finish line.


I'm sorry, but that is a terrible metric.

This sounds like a definition from someone who never interacts with anyone except the top 1% performance level of people, and those who have had strong levels of education.

Go into a manufacturing, retail or warehouse facility. By this definition, fewer than ten or twenty percent of the people there would have "general intelligence", and that's being generous.

Not because they are stupid: that's the point; they're not. But it's setting the bar for "general intelligence" so absurdly high that it would not include many people who are, in fact, intelligent.


The ability to learn skills that one does not already know, sometimes through years or decades of training, is a key part of general intelligence as normally exhibited in humans.


I'm not sure I would classify your average warehouse worker as particularly intelligent. I would say AI already has the decision making and communication capabilities to do this sort of work. We're just lacking the robotics. In fact one of the main issues in our society is the vast gulf between the most intelligent and the least.


Speaking as someone who's worked in a warehouse:

> I'm not sure I would classify your average warehouse worker as particularly intelligent.

I'm not sure I wouldn't. Just because corporations treat them as mindless fungible automatons doesn't mean they actually are. Some of the most brilliant and creative solutions to problems I've seen have been in warehouse settings by warehouse workers.

> I would say AI already has the decision making and communication capabilities to do this sort of work.

I wouldn't - especially if GPT-whatever is the AI in question. If a picker or packer "hallucinated" facts with anywhere near the frequency ChatGPT does (for example), they'd be canned within the hour.

Handling exceptions is another area where software (AI or otherwise) notoriously struggles. A human has a much easier time sensing whether or not a product is broken or defective (before shipping it out) than an AI does. A human has a much easier time understanding when processes need to be broken due to impossible constraints than an AI does.

There is a place for software automation of warehouse processes (that was, in fact, my career for a time), but we are very far off from that software replacing humans entirely - and certainly not without designing warehouses specifically to be as accommodating as possible to that software.

> In fact one of the main issues in our society is the vast gulf between the most intelligent and the least.

The gulf is in socioeconomic privilege, not intelligence. The rich and powerful like to claim they're more intelligent than the unwashed masses in order to rationalize their wealth and power, but the reality is that - in an actually egalitarian society, wherein everyone actually has equality of opportunity - the vast majority of those "geniuses" would fail to be particularly exceptional.

That we as a society haven't identified and corrected this is the main issue in our society.


If >90% of your work can be replaced by a machine, it still stands that it's pretty mindless work. If you only need to turn your brain on to handle edge cases then it's off by default. Even if machines handle those cases poorly, it can still be cheaper to use them and eat the loss of a higher defect rate. If that's the case, then the actual value provided by a worker's decision making process trends to zero.
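The "eat the loss" claim above is just expected-cost arithmetic. A quick sketch with invented numbers shows how a higher defect rate can still win on total cost:

```python
# Expected per-unit cost comparison between a human worker and a
# machine with a higher defect rate. All numbers are invented.

cost_of_defect = 5.00  # cost to remediate one defective unit

human_labor, human_defect_rate = 1.00, 0.01
machine_labor, machine_defect_rate = 0.20, 0.05

# Expected total cost per unit = labor cost + expected defect cost.
human_total = human_labor + human_defect_rate * cost_of_defect        # 1.05
machine_total = machine_labor + machine_defect_rate * cost_of_defect  # 0.45

# Despite a 5x higher defect rate, the machine is cheaper per unit,
# so the economics favor automation and eating the loss.
assert machine_total < human_total
```

Of course, the real decision also depends on factors this sketch omits, like reputational damage from defects or the cost of the exceptions machines can't handle at all.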

You also seem to be under the impression that our hierarchies are of privilege, not of competence. The actual differentiating factor between people who climb the socioeconomic ladder and those who do not is grit (not intelligence). The willingness to work harder and persevere longer than average (unsurprisingly) makes the difference. Fortunes are made and lost in a few generations. The people who make them earn them, mostly through sheer hard work. That isn't to say that organizations don't grow to become bloated and corrupt. Ideally at this point we should allow them to fail and the cycle to continue. Our main dysfunction seems to be propping up organizations that ought to fail, for fear of the temporary instability caused by their failure.


> if >90% of your work can be replaced by a machine

My point is that the amount of work in a warehouse that can be replaced by a machine - even with perfect robotics - is far less than 90%.

> The actual differentiating factor between people who climb the socioeconomic ladder and those who do not is grit (not intelligence).

You forgot an "f" in "grit". The notion that success is simply a matter of hard work is a fairy tale told to us by people who've worked far less for their immense wealth than the rest of us worked for our pittances, specifically to trick the working class into accepting a shit deal.

The reality - that the richer you are, the easier it is to become even richer - should be entirely unsurprising to anyone who understands positive feedback loops - or, for that matter, to anyone who's ever played Monopoly. Wealth buys power, and power enables extracting more wealth; rinse and repeat ad infinitum.

Put differently:

> The people who make them earn them, mostly through sheer hard work.

There is not a single billionaire on this Earth whose wealth came about "mostly through sheer hard work". The vast majority of that wealth comes from having already had some wealth, which they then invested to produce more wealth, and so on indefinitely. That wealth gets passed down to their descendants, the same way it was passed down to them.

The starting point for "wealthy enough to enter the passive income feedback loop" is land - one's home often being one's first major investment. From there, the path is rather tried and true: buy another house, rent out the old one, rinse and repeat until you can afford apartment complexes and commercial properties, rinse and repeat that forever. For anyone who ain't a complete imbecile, private land ownership is an infinite money cheat - one for which the rest of us are paying through the nose.

> Our main dysfunction seems to be propping up organizations that ought to fail, for fear of the temporary instability caused by their failure.

That propping up is a direct result of the positive feedback loop at play. More wealth → more political power → more wealth → more political power → ∞. Of course the socioeconomic system effectively under the direct control of the rich and powerful is going to primarily serve said rich and powerful at the expense of literally everyone else; bailing themselves out is in their vested interest.

Meanwhile, what's their message to the ever-growing working class getting the short end of the stick? "Work harder." "Pull yourself up by your bootstraps." "It's all about grit." "Don't listen to anyone saying that privilege matters." Hopefully you can see why your argument doesn't really resonate with people who have been applying increased grit and only getting back decreased pay relative to the cost of living.


You've not addressed my main point - that our hierarchies are of competence, not of privilege. Not just anyone can take a few-hundred-thousand-dollar investment and transform it into billions. Leaders of successful corporations are extremely competent and hard working. I would consider fortunes by the likes of Buffett, Gates, Bezos, Jobs, and Musk to be self-made, given their ROI is many orders of magnitude above market. Many of these folks also work 90-hour weeks.

This is further substantiated by the fact that 70% of intergenerational wealth transfers fail, and by the third generation, 90% of people with wealthy grandparents are middle class. Raising competent people in a privileged environment is very hard. In our hierarchies of competence, the incompetent offspring of the wealthy tend to fail. Competence is simply something that can't be bought. If our hierarchies were of privilege then this would not be the case. Also grit as a key differentiating factor of economic success is highly substantiated by research.

> rinse and repeat until you can afford apartment complexes and commercial properties

What you're describing is a real estate investment business. Not just anyone can run one successfully. Otherwise these businesses would never fail, which they plainly do.

Grit without competence is simply not enough (neither is competence without grit). Our world is getting increasingly complex to navigate, and that leaves behind increasingly high numbers of people who simply do not have the attributes required to succeed. Also, there are plenty of self-made, property-owning, middle-class folk in the trades. Many of them started poor. All they do is work hard and run their businesses competently.

If you've a degree in English, History, Politics, or Philosophy; a pile of student debt; and you're struggling to find gainful employment, then that's on you. Choose a career the market demands.


So, in effect, AGI must be in the top ~5th percentile of human performance?



