I'm pretty bearish on the idea that AGI is going to take off anytime soon, but I read a significant amount of theology growing up and I would not describe the popular essays from, e.g., LessWrong as religious in nature. I also would not describe them as appearing poorly read. The whole "look, they just have a new god!" line is a common trope in religious apologetics that is usually just meant to distract from the author's own poorly constructed beliefs. Perhaps such a comparison is apt for some people in the inevitable-AGI camp, but their worst arguments are not where we should be focusing.
Philosophy and religion are not the same thing, though one can certainly describe a religious belief as a philosophical belief.
Even a scientifically inclined atheist has philosophical ideas grounding their worldview. The idea that the universe exists as an objective absolute with immutable laws of nature is a metaphysical idea. The idea that nature can be observed and that reason is a valid tool for acquiring knowledge about nature is an epistemological idea. Ethics is another field of philosophy, and it would be a mistake to assume a universal system of ethics that has been constant across all cultures throughout human history.
So while I certainly agree that there is a very common hand-wave of "look, the atheists have just replaced God with a new 'god' by a different name", you don't have to focus on religion, theology, and faith-based belief systems to identify different categories of philosophical ideas and how they have shaped different cultures, their beliefs, and their behaviours throughout history.
A student of philosophy would identify the concept of "my truth" as an idea put forward by Immanuel Kant, for example, even if the person saying it doesn't know that that's the root of the idea that reality is subjective. Similarly, the empirically grounded scientist would be recognized as following in the footsteps of Aristotle, and the pious Bible-thumper as parroting ideas published by Plato.
The point is that philosophy is not the same thing as religion, and philosophy directly shapes how people think, what they believe, and therefore how they act and behave. And it's kind of uncanny how an understanding of philosophy can place historical events in context, and how much predictive power it has when it comes to human behaviour in the aggregate.
While it's a fair criticism, just because someone doesn't believe in a god doesn't mean the religious hardware in their brain has been turned off. It's still there and operational - I don't think it's a surprise that this hardware's attention would then be automatically tuned to a different topic.
I think you can also see this in the intensification of political discussion, which now has an intensity similar to that of the religious disputes of past centuries (e.g. the Protestant Reformation), indicating that this "religious hardware" has shifted domains to the realm of politics. I believe this shift can also be seen in the intense actions and rhetoric of the mid-20th century.
You can also look at all of these new age "religions" (spiritualism, horoscopes, etc.) as that religious hardware searching for something to operate on in the absence of traditional religion.
> While it's a fair criticism, just because someone doesn't believe in a god doesn't mean the religious hardware in their brain has been turned off.
Max Stirner said that after the Enlightenment and the growth of liberalism, which is still very much in vogue to this day, all we’ve done is replace the idea of God with the idea of Man.
The object might be different, but it is still the unshakable belief in an idealised and subjective truth, with its own rituals and ministers, i.e. a religion.
I guess the Silicon Valley hyper-technological optimism of recent years is yet another shift, from Man to religious belief in the Machine.
I agree that modern hyper-online moralist progressivism and QAnonism are just fresh coats of paint on religion, but that isn't similar to AI.
AI isn't a worldview; it's an extremely powerful tool which some people happen to be stronger at using than others, like computers or fighter jets. For people who empirically observe that they've been successful at extracting massive amounts of value from the tool, it's easy to predict a future in which aggregate economic output in their field by those who are similarly successful will dwarf that of those who aren't. For others, it's understandable that their mismatched experience would lead to skepticism of the former group, if not outright comfort in the idea that such productivity claims are dishonest or delusional. And then of course there are certainly those who are actually lying or deluded about fitting in the former group.
Every major technology or other popular thing has some subset of its fandom which goes too far in promotion of the thing to a degree that borders on evangelical (operating systems, text editors, video game consoles, TV shows, diets, companies, etc.), but that really has nothing to do with the thing itself.
Speaking for myself, anecdotally, I've recently been able to deliver a product end-to-end on a timeline and level of quality/completeness/maturity that would have been totally impossible just a few years ago. The fact that something has been brought into existence in substantially less time and at orders of magnitude lower cost than would have been required a few years ago is an undeniable observation of the reality in front of me, not theological dogma.
It is, however, a much more cognitively intense way to build a product — with AI performing all the menial labor parts of development, you're boxed into focusing on the complex parts in a far more concentrated time period than would otherwise be required. In other words, you no longer get the "break" of manually coding out all the things you've decided need to be done and making every single granular decision involved. You're working at a higher level of abstraction and your written output for prompting is far more information-dense than code. The skills required are also a superset of those required for manual development; you could be the strongest pre-LLM programmer in the world, but if you're lacking in areas like human language/communication, project/product management, the ability to build an intuition for "AI psychology", or thinking outside the box in how you use your tools, adapting to AI is going to be a struggle.
It's like an industry full of mechanics building artisan vehicles by hand suddenly finding themselves handed budgets to design and implement assembly lines; they still need to know how to build cars, but the nature of the job has now fundamentally changed, so it's unsurprising that many or even most who'd signed up for the original job would fail to excel in the new one and rationalize that by deciding the old ways are best. It's not fair, and it's not anyone's fault, but it's important for us all to be honest and clear-eyed about what's really happening here. Society as a whole will ultimately enjoy some degree of greater abundance of resources, but in the process a lot of people are going to lose income and find hard-won skills devalued. The next generation's version of coal miners being told to "learn to code" will be coders being told to "learn to pilot AI".
If the "greater abundance if resources" all ends up in the hands of those few at the top of the pyramid, I'm not sure most people are going to celebrate this change.
> It's not fair, and it's not anyone's fault, but it's important for us all to be honest and clear-eyed about what's really happening here.
Or we can just refuse this future and act as a society to prevent it from happening. We absolutely have that power, if we choose to organize and use it.
Sure, but how so? If I'm understanding your argument correctly, it sounds like you may be implying that we should escalate the war on general-purpose computing and outlaw generative AI.
If we were to consider that, then to what end? If you accept my framing of the long-term implications of LLMs on the industry, then what you're suggesting is effectively that we should deprive society of greater prosperity for the benefit of a small minority. Personally, I'd rather improve democratization of entrepreneurship (among other things) than artificially prop up software engineering salaries.
And let's say the US did all that. What then? We neuter our economy and expect our adversaries to just follow suit? More likely it hobbles our ability to compete and ultimately ushers in an era of global hegemony under the CCP.
> deprive society of greater prosperity for the benefit of a small minority.
This is already exactly the case. AI won't bring jackshit to anyone except those already sitting on more wealth than any human should have.
At best, AI will slightly increase average global misery by producing so much garbage that it pollutes the digital landscape.
The industrial revolution didn't bring prosperity to anyone except the capital owners, who forced their employees (by physical force) to work long hours, in gruesome conditions, for pathetic wages.
When society's elites promise something, the common man must be wary; those elites didn't reach their positions by being kind.
>The industrial revolution didn't bring prosperity to anyone except the capital owners
This is simply not true. Compare the life of the common man today to one from three centuries ago. Quality of life has increased tenfold: medicine, access to knowledge, the ability to travel the world, life expectancy, political representation, etc.
Of course capital owners still get richer, but suggesting we were better off without the industrial revolution is just disingenuous.
Do you hate AI enough to nuke China when they refuse to stop building their own AI?
Once it exists, anywhere, the job market is toast - banning it in the US just means outsourcing all the economic benefits to China. We have a long history of technological revolutions to demonstrate this: the un-industrialized nations fared a lot worse than the ones at the frontier of technology.
I'm looking less at nations and more at the people in those nations. The richest country on earth, the one with the highest number of billionaires, is unable to put roofs over its citizens' heads, unable to provide them with food stability, and its health care is the laughing stock of the entire developed world. Hell, I live in a shit third-world country and our healthcare is still miles better than the US's. THIS to me is undeniable proof that technological advancement != prosperity for people.
Unindustrialized nations fared worse because the industrialized ones came for their resources. I doubt farmer John in Yorkshire gave a shit about what the UK was doing in South Africa or India. Similarly for farmer Louis in Lyon, or Hans in Stuttgart. It's all elite dick-measuring contests with the common man as the only loser.
> outsourcing all the economic benefits to China
Hell, the US already outsourced all vital industries to China without banning shit. My main point is that the common man will not see any benefit from such technological advancement, since the major problem is who's holding the stocks. AI is just another instrument of wealth concentration.
Okay. Well I'm not an "elite" or misrepresenting my experience with AI in any way. My perspective as a founder is that AI empowers entrepreneurs to launch and scale cheaply, thereby providing greater value to the public while disempowering venture capitalists as gatekeepers of the startup ecosystem. The fact that some rich people may become richer at the same time is incidental, and not a bad thing in and of itself.
As far as the industrial revolution goes, your take is ahistorical. We're clearly more prosperous now than we were before industrialization. Let's not forget that the pre-industrial American economy relied on the literal enslavement of 15-20% of the population.
Pre-industrial English and German economies did not rely on enslavement of the population, so the fact that the US economy was built on enslavement just means its elites were rotten to the core from the very beginning.
The fact that rich people get richer is exactly the goal of the US economic system, not an accident. Despite the abolition of slavery, the US remains extremely hostile to the poor, its lower-than-livable wages forcing its people to be more miserable than medieval serfs.
So, no, the industrial revolution did not bring prosperity; what brought prosperity was the blood shed by common men daring to keep a portion of the fruits of their labour. The history of unionization in the US and the plight of miners clearly show how much prosperity was brought by technological innovation.
You're appealing to an implicit counterfactual in which America was built in a more egalitarian way. I'd love to live in that timeline, but unfortunately we can only evaluate history as it actually exists, and in this timeline American industrialization and abolition are inseparable because the latter doesn't happen without the former. For all its faults, industrialization provided both the economic luxury of pursuing abolition and the practical means to fight and win.
For Europe's part, the continent was physically developed over millennia on the backs of imperial slaves and feudal serfs, while its wealth was the product of atrocities comparable to those of the US. They certainly don't get a pass here.
If we're looking to the past for guidance, the historical precedent is clear: greater efficiency increases prosperity, and greater prosperity increases liberty. The first industrial revolution heralded the end of actual slavery; the AI industrial revolution could herald the end of "wage slavery". If some rich people happen to get richer at the same time, I say good for them. Rather than cutting off our nose to spite our face, we should look at concepts like UBI or universal educational stipends as an ultimate offramp from mandatory employment.
It leads me to the question, "Is it really 'religious hardware' or the same ol' 'make meaning out of patterns' hardware we've had for millennia that has allowed us to create shared language, make social constructs, mutually believe legal fictions that hold together massive societies, etc.?"
Or: the hardware that generates beliefs about how things should be - whether based on religious or ideological dogma - as opposed to science, which is not prescriptive and can only describe how things are.
> It leads me to the question, "Is it really 'religious hardware' or the same ol' 'make meaning out of patterns' hardware
They are the same thing. Call it "religion" or "meaning making," both activities can be subsumed by the more encompassing concept and less-loaded term of "psycho-technology," [0] or non-physical tools for the mind.
Language is such a psycho-technology, as are social constructs such as law; legal fictions are given memorable names and personified into "religious" figures, such as Libra from astrology or Themis/Lady Justice from Greek mythology.
Ancient shamans and priests were proto-wetware engineers, designing software for your brain and providing tools for making meaning out of the world. In modern day we now have psychologists, "social commentators" (for lack of a better term and interpreted as broadly as possible), and, yes, software engineers, amongst other disciplines, playing a similar role.
Your entire outlook is based on an assumption: that the 'emergence of meaning' is a second-order epiphenomenon of an organic structure. The first-order epiphenomenon, in your view, is of course consciousness itself.
None of these assumptions can be proven, yet like the ancients looking at the sky and seeing a moving sun while missing a larger part of the big picture, you now have a 'theory of mind' that satisfies your rational impulses given a poor diet of facts and knowledge. But hey, once you manage to 'get into orbit' you get access to more facts, and then that old 'installed hardware' theory of yours starts breaking down.
The rational position regarding these matters is to admit that "we do not have sufficient information and knowledge to make conclusive determinations based on reason alone". Who knows, one day humanity may make it to orbit and realize the 'simple and self-apparent idea' that "everything revolves around the Earth" is false.
They are literally publishing a book called "If Anyone Builds It, Everyone Dies" and trying to stop humanity from doing that. I feel like that's an important detail: they're not the ones trying to create the god, they're the ones worried about someone else doing it.
Maybe not a god, but we're intentionally designing artificial minds greater than ours, and we intend to give them control of the entire planet. While also expecting them to somehow remain subservient to us (or is that part just lip service)?
If you understand the cultural concepts of Adam Curtis’s All Watched Over by Machines of Loving Grace, then yes we do keep trying to make gods out of inanimate things.
And it’s the atheists who continuously do it, claiming they don’t believe in God, just in markets or AI, etc.
My comment was not referring to the present state of things, but to the ultimate goal.
However, the present state is also worth a look!
The three uniquely human capacities which people keep saying a machine can never match:
1. Empathy: they win by default. (My reference group is twenty friends and seven therapists.)
2. Critical Thinking: they win with the correct prompt. (You need to explicitly work against the sycophancy; the desire to appear empathetic limits the ability to convey true information to a human.)
3. Creativity: I want to say creativity lags behind, in LLMs at least, but Midjourney is blowing my damn mind, so I might have to give them that, too.
That's with the versions of AI we have today. My comment was referring to the ultimate goal, i.e. where this is all heading.
To put it explicitly, we intend to:
(1) make them in our image (trained after our mental output, and shaped after our body),
(2) while also making them vastly superior intellectually and physically (strength, endurance, etc.),
(3) while also expecting them to have no will of their own -- except as it aligns with ours. (We do actually need to give them a will to make them useful.)
Come to think of it, I'm doing lists of three, let's cover the attributes of God, relevant to my original comment:
1. Omniscience: Google's AI, at any rate, wins this by default. The others are at a disadvantage.
2. Omnipresence: Several major tech companies have stated that the OS of the future will be an AI. You won't use apps, the AI will use them for you. Today they're already shoving it into everything.
3. Omnipotence: This part is a work in progress ;) The "embodiment" lags behind.
As for Benevolence... well, they removed "don't be evil", so you'll have to ask them about that one...
(Also "fun" fact: Musk claims that OpenAI was specifically founded in response to a conversation with Sergei Brin where he pledged allegiance to the machines, not the humans.)
Before that quoted sentence you drew a line from the reformation to people believing that AI is inevitable, then went on to imply these people may even believe such a thing will happen without the involvement of people. These are generalizations which don't fit a lot of the literature and make their best ideas look a bit sillier than they are. It is situations like these that make me think that analogies are better suited as a debate tactic than a method of study.
To the contrary. I sped through my compsci capstone coursework in my first year of college and spent most of the rest of my time in philosophy, psychology, and sociology classrooms. The "hey, if you squint, this thing looks like religion for the non-religious" perspective is just one I've heard countless times. It is perfectly valid to have a fact-based discussion on whether there is a biological desire for religiosity, but drawing a long line from that to broadly critique someone's well-articulated ideas is pretty sloppy.
Quoting your college classes is the first sign of inexperience, but I'll share some modern concepts.
In Adam Curtis's All Watched Over by Machines of Loving Grace, he makes a pretty long and complete argument that humanity has a rich history of turning over its decision-making to inanimate objects, in a desire to discover ideologies we can't form ourselves amid the growing complexity of our interconnectivity.
He tells a history of these systems constantly failing because the core ideology of "cybernetics" underlies them all and fails to be adaptive enough to match our combined DNA/body/mind cognitive system, especially when scaled to large groups.
He makes the second point that humanity and many thinkers constantly resort to the false notion of "naturalism" as the ideal state of humanity, when in reality there is no natural state of anything, except maybe complexity and chaos.
Giving yourself up to something, especially something that doesn't work, is very much "believing in a false god."
You seem to be lost. While referencing a TV show may or may not be a rebuttal to a very specific kind of worldview, it is out of place as a response to my post, which you've failed to actually reference at all.
I'm addressing this point at you personally because we can all see your comments: being nasty to atheists on the internet will never be a substitute for hard evidence for your ideology.
You seem to be profoundly confused. Adam Curtis is a leading thinker and documentarian of our time, widely recognized in continental philosophy. The fact that you tried to dismiss him as "a TV show" shows you're completely naïve about the topic you're speaking about.
Second, I’m not being nasty to atheists; I'm speaking specifically about not having false gods, which, if anything, is a somewhat atheistic perspective.
Like I said, we can all read your comments. Needs no further elaboration. If I receive a second recommendation for Curtis then I might be inclined to check it out. Take it easy.
> The "hey if you squint this thing it looks like religion for the non-religious" perspective is just one I've heard countless times
To be fair, we shouldn't bundle Augustine and Thomas Aquinas with John MacArthur and Joel Osteen. Meaning that some religious thought is more philosophically robust than other religious thought.