Hacker News

> I don't know exactly where on HN I read it but it was "Artificial Intelligence is an ideology, not a technology."

What is this? https://www.wired.com/story/opinion-ai-is-an-ideology-not-a-...

That's a fantastic observation. I'd even hazard to say that for some, Artificial Intelligence is closer to a religion than a mere ideology.



As someone who works in the field and works with LLMs on the daily - I feel like there are two camps at play. The field is bimodally distributed:

- AI as understandable tools that power concrete products. There's already tons of this on the market - autocorrect, car crash detection, heart arrhythmia identification, driving a car, searching inside photos, etc. This crowd tends to be much quieter and occupy little of the public imagination.

- AI as religion. These are the Singularity folks, the Roko's Basilisk folks. This camp regards the current/imminent practical applications of AI as almost a distraction from the true goal: the birth of a Machine-God. Opinions are mixed about whether or not the Machine-God is Good or Bad, but they share the belief that the birth of Machine-God is imminent.

I'm being a bit uncharitable here since as someone who firmly belongs in the first camp I have so little patience for people in the second camp. Especially because half of the second camp was hawking monkey JPEGs 18 months ago.


> AI as understandable tool that power concrete products.

This is the source of my wariness.

Contemporary AI stands upon mechanical turks.

In contrast, spellcheckers, checkers engines, and A* were built solely by people with employer-provided health insurance.

In the old days, the hard work for professional pay was the justified means.

Today, taking advantage of the economically desperate is the justified means.

There’s no career path from Mechanical Turk to Amazon management, because Mechanical Turk is not an Amazon position. It’s not even employment. No minimum wage. No benefits. No due process.

Just eggs that gotta be cracked.


There's a blur between the two camps once you get to the so-called "AGI" thing.

People think creating super-human intelligence is a technological challenge, but given that we aren't able to consistently rank human-level intelligence, the *recognition* that some AI has attained "super-human" levels is going to be a religious undertaking rather than a technological one.

And we're kind of close to the edges of that already. That's why discussions feel a bit more religious-y than in the past.

tl;dr: until there's a religion that worships some AI as a "god", there won't be any "AGI".


> tl;dr: until there's a religion that worships some AI as a "god", there won't be any "AGI".

I fear you may be correct. Though now I'm thinking of how AIs have been gods in fiction, and hoping that this will be more of a Culture (or Bob) scenario than an I Have No Mouth scenario.

(And if the AI learns what humans are like and how to behave around them from reading All The Fiction, which may well be the case… hmm. Depends what role the AI chooses for itself: I hear romance is the biggest genre, so we may well be fine…)


I agree, there is a rather vocal crowd of people who don't sound much different from evangelically-minded religious folk.


An old but good counterpoint to the "rapture of the nerds": http://web.archive.org/web/20110718031848/http://www.acceler....

There's a good breakdown and cliche-by-cliche comparison in there, but I find the penultimate paragraph both memorable and quotable:

> It’s also interesting to think about what would happen if we applied “Rapture of the Nerds” reasoning more widely. Can we ignore nuclear warfare because it’s the Armageddon of the Nerds? Can we ignore climate change because it’s the Tribulation of the Nerds? Can we ignore modern medicine because it’s the Jesus healing miracle of the Nerds? It’s been very common throughout history for technology to give us capabilities that were once dreamt of only in wishful religious ideologies: consider flight or artificial limbs. Why couldn’t it happen for increased intelligence and all the many things that would flow from it?


We cannot ignore those other things you list, because they are here already.

AGI is not, and there is no evidence that it is even possible. So we can safely ignore it for now. Once some evidence exists that it may actually be achievable, we'll need to pay attention.

People in 1000 CE could (and did) safely ignore all those things, for this exact reason.


> AGI is not, and there is no evidence that it is even possible.

We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.
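The "imprecise copies plus selection" recipe described above is, in code terms, just an evolutionary algorithm. A toy version fits in a few lines (purely illustrative; the bitstring genome, the fitness function, and every parameter here are made-up stand-ins, not a claim about how minds arise):

```python
import random

random.seed(0)

GENOME_LEN = 32          # length of each "genome" (a bitstring)
POP_SIZE = 50
MUTATION_RATE = 0.02     # chance each bit flips when copied

def fitness(genome):
    # Stand-in for "surviving long enough to make more copies":
    # here, simply the number of 1-bits.
    return sum(genome)

def imprecise_copy(genome):
    # Copying with occasional errors (mutation).
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

# Start from random genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # "Picking the best": the fittest half get to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [imprecise_copy(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(fitness(best), "out of", GENOME_LEN)
```

Nothing in the loop "knows" what a good genome looks like; copying errors plus differential survival are enough to push the population toward high fitness, which is the whole of the argument above.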

> People in 1000 CE could (and did) safely ignore all those things

Whereas the people, and specifically the leadership, of Japan unsafely ignored one of them on the 6th of August 1945. Some of the leadership were still saying it couldn't possibly have been a real atomic bomb as late as the 7th, which is ultimately why the second bomb fell on the 9th.


>> AGI is not, and there is no evidence that it is even possible.

> We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.

I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.


> I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there is no evidence that humans are capable of designing a new mind, or that it's possible to make a mind that greatly exceeds ours in capability.

I don't know what you mean by "as-yet circular assumption". (Though in the philosophy of knowledge, the Münchhausen trilemma says that everything is ultimately either circular, infinite regression, or dogmatic).

> there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.

Sounds like you're arguing against ASI, not AGI: G = General, like us; S = Super-, exceeding us.

That said, there's evidence that ASI is also possible: All the different ways in which we've made new minds that do in fact greatly exceed ours in capability.

When I was a kid, "intelligent" was how we described people who were good at maths, skilled at chess, had good memories, had large vocabularies, knew many languages, etc. Even ignoring the arithmetical component of maths (where a Pi Zero exceeds all of humanity combined, even if each of us were operating at the standard of the current world record holder), we have had programs solving symbolic maths for a long time; Chess (and Go, StarCraft, Poker, …) have superhuman AIs; and even before GPT, Google Translate already knew more languages than I can remember the names of (even filtering the list to only those where it was of a higher standard than my second language), a few of them even with augmented-reality image-to-image translation.

And of course, for all the flaws the current LLMs have in peak skill, most absolutely have superhuman breadth of knowledge: I can beat GPT-3.5 as a software engineer, at maths and logic puzzles, or when writing stories, but that's basically it.

What we have not made is anything that has both human (or superhuman) skill level and human-level generality. But saying that the two parts existing separately isn't evidence the combination can be built is analogous to looking at 1 gram of enriched uranium and a video of a 50 kg sphere of natural uranium being forced to implode spherically, and saying "there is no evidence that humans are capable of designing an atom bomb, or that it's possible to make a bomb that greatly exceeds chemical bombs in yield."


You won't get a proof until the deed is done. But that's the same with nuclear armageddon - you can't be sure it'll happen until after the planet's already glassed. Until then, evidence for probability of the event is all you have.

> there no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability

There's plenty of good reasons to assume it's possible, all while there's no evidence suggesting it's not.


"good reasons" sounds like another way of saying "no actual evidence, but a lot of hope". There is no actual evidence that it's possible, certainly not anytime soon. People pushing this narrative that AGI is anywhere close are never people working in the space, it's just the tech equivalent of the ancient aliens guys.


> People pushing this narrative that AGI is anywhere close are never people working in the space

Apart from the most famous AI developer group since near the beginning of this year, on the back of releasing an AI that's upset a lot of teachers and interview-question writers because it can pass so many of their existing quizzes without the student/candidate needing to understand anything.

I suppose you could argue that they are only saying "AGI could happen soon or far in the future" rather than "it will definitely be soon"…


Yes, the people selling the hammer want you to believe it's a sonic screwdriver. What else is new? You sort of prove my point when your evidence of who is making those claims are the people with a vested interest, not the actual scientists and non-equity developers who do the actual coding.

"But a company said the tech in their space might be ground-breaking earth-shattering life-changing stuff any minute now! What, you think people would just go on the internet and lie!?"


"No Scotsman puts sugar on his porridge."

"But my uncle Angus is a Scotsman and he puts sugar on his porridge."

"But no true Scotsman puts sugar on his porridge."


I haven't set up a No True Scotsman proposition, I made a very clear and straightforward assertion, that I've challenged others to disprove.

Show me one scientific paper on Machine Learning that suggests it's similar in mechanism to the human brain's method of learning.

It's not a lack of logical or rhetorical means to disprove that's stopping you (i.e. I'm not moving any goalposts), it's the lack of evidence existing, and that's not a No True Scotsman fallacy, it's just the thing legitimately not existing.


This is a myth; Japan was not in denial that the US had atomic bombs. It had its own atomic-bomb program (though a quite rudimentary one), and was aware of Germany's program as well. It just didn't care.

What caused Japan to surrender was not the a-bombs, it was the USSR declaring war on them.

That aside, that still supports my point, which is that they should not ignore things that exist, while they can ignore things that don't. Like AGI.


I could've phrased it better, it sounds like you're criticising something other than what I meant.

One single plane flies over Hiroshima, ignored because "that can't possibly be a threat". The air raid warning had been cleared at 07:31, and many people were outside, going about their activities.

> it had its own atomic bomb program

Two programs; it was because they were not good enough that they thought the US couldn't have had the weapons:

--

The Japanese Army and Navy had their own independent atomic-bomb programs and therefore the Japanese understood enough to know how very difficult building it would be. Therefore, many Japanese and in particular the military members of the government refused to believe the United States had built an atomic bomb, and the Japanese military ordered their own independent tests to determine the cause of Hiroshima's destruction.[0] Admiral Soemu Toyoda, the Chief of the Naval General Staff, argued that even if the United States had made one, they could not have many more. American strategists, having anticipated a reaction like Toyoda's, planned to drop a second bomb shortly after the first, to convince the Japanese that the U.S. had a large supply.[1]

[0] Frank, Richard B. (1999). Downfall: the End of the Imperial Japanese Empire. New York: Penguin. ISBN 978-0-14-100146-3

[1] Hasegawa, Tsuyoshi (2005). Racing the Enemy: Stalin, Truman, and the Surrender of Japan. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-01693-4

--

> AGI

You personally are a General Intelligence; we have Artificial Intelligence. Is GPT-4 a "general" "intelligence"? That depends on the standards for the words "general" and "intelligence". (Someone is probably arguing that anything trained by an evolutionary algorithm isn't necessarily "artificial", not that I know how it was trained, nor even care given I don't use that standard).



