
I think you make a lot of assumptions that you should perhaps reexamine.

> Human cognition was basically brute-forced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?

Here are some of your assumptions:

1. Human intelligence is entirely explicable in evolutionary terms. (It is certainly not the case that it has been explained in this manner, even if it could be.) [0]

2. Human intelligence assumed as an entirely biological phenomenon is realizable in something that is not biological.

And perhaps this one:

3. Silicon is somehow intrinsically bound up with computation.

In the case of (2), you're taking a superficial black box view of intelligence and completely ignoring its causes and essential features. This prevents you from distinguishing between simulation of appearance and substantial reality.

Now, that LLMs and so on can simulate syntactic operations or whatever is no surprise. Computers are abstract mathematical formal models that define computations exactly as syntactic operations. What computers lack is semantic content. A computer never contains the concept of the number 2 or the concept of the addition operation, even though we can simulate the addition of 2 + 2. This intrinsic absence of a semantic dimension means that computers already lack the most essential feature of intelligence, which is intentionality. There is no alchemical magic that will turn syntax into semantics.
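To make the point concrete, here is a toy sketch (Python, purely my own illustration): "addition" done as blind rewriting of uninterpreted Peano-style tokens. The program never contains the concept of a number; it only matches and replaces strings.

    # "Addition" as pure symbol rewriting. The tokens Z and S(...) are
    # uninterpreted; only we read them as zero and successor.
    def add(a: str, b: str) -> str:
        if a == "Z":                       # rule: add(Z, y) -> y
            return b
        inner = a[2:-1]                    # strip one "S(" ... ")" layer
        return "S(" + add(inner, b) + ")"  # rule: add(S(x), y) -> S(add(x, y))

    two = "S(S(Z))"
    print(add(two, two))                   # S(S(S(S(Z)))) -- "4", but only to us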

In the case of (3), I emphasize that computation is not a physical phenomenon, but something described by a number of formally equivalent models (Turing machine, lambda calculus, and so on) that aim to formalize the notion of effective method. The use of silicon-based electronics is irrelevant to the model. We can physically simulate the model using all sorts of things, like wooden gears or jars of water or whatever.
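For instance, here is a toy one-tape Turing machine (Python, a minimal sketch of the standard formulation): the rule table is the computation, and nothing about it depends on transistors.

    # A toy one-tape Turing machine: computation as an abstract rule table.
    # The same table could be executed with gears, marbles, or pen and paper.
    def run(table, tape, state="q0", head=0):
        cells = dict(enumerate(tape))
        while state != "halt":
            sym = cells.get(head, "_")               # "_" is the blank symbol
            write, move, state = table[(state, sym)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example table: flip every bit, halt at the first blank.
    flip = {("q0", "0"): ("1", "R", "q0"),
            ("q0", "1"): ("0", "R", "q0"),
            ("q0", "_"): ("_", "R", "halt")}
    print(run(flip, "0110"))                         # -> 1001_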

> I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc). [...] As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.

How on earth did you conclude there is any agency here, or that it's just a "matter of time"? This is textbook magical thinking. You are projecting a good deal here that is completely unwarranted.

Computation is not some kind of mystery, and we know at least enough about human intelligence to note features that are not included in the concept of computation.

[0] (Assumption (1), of course, has the problem that if intelligence is entirely explicable in terms of evolutionary processes, then we have no reason to believe that the intelligence produced aims at truth. Survival affordances don't imply fidelity to reality. This leads us to the classic retorsion arguments that threaten the very viability of the science you are trying to draw on.)



I understand all the words you've used but I truly do not understand how they're supposed to be an argument against the GP post.

Before this unfolds into a much larger essay, should we not acknowledge one simple fact: that our best models of the universe indicate that our intelligence evolved in meat, and that meat is just a type of matter. This is an assumption I'll stand on, and if you do disagree, we need to back up.

Far too often, online debates such as this take the position that the most likely answer to a question should be discarded because it isn't fully proven. This is backwards. The most likely answer should be assumed to be probably true, a la Occam. Acknowledging other options is also correct, but assuming the most likely answer is wrong, without evidence, is simply contrarian for its own sake, not wisdom or science.


I don't know what else I can write without repeating myself.

I already wrote that even under the assumption that intelligence is a purely biological phenomenon, it does not follow that computation can produce intelligence.

This isn't a matter of probabilities. We know what computation is, because we defined it as such and such. We know at least some essential features of intelligence (chiefly, intentionality). It is not rocket science to see that computation, thus defined, does not include the concepts of semantics and intentionality. By definition, it excludes them. Attempts to locate the latter in the former remind me of Feynman's anecdote about the obtuse painter who claimed he could produce yellow from red and white paint alone (later adding a bit of yellow paint to "sharpen it up a bit").


You don't know how to elaborate on your claims?

> I already wrote that even under the assumption that intelligence is a purely biological phenomenon, it does not follow that computation can produce intelligence.

See this demands definitions because I do not understand how you can say this. If intelligence is an emergent property of a physical process then I'd feel insane to assume such processes can only happen in certain meats but never in artificially created computers. I fail to see any way they can be fundamentally separated without a non-testable, immaterial "spirit" bestowing intellect on the meat computer. They're just different substrates for computation.

> We know what computation is, because we defined it as such and such.

As what? This is the cornerstone of your entire argument and you've left it as an assumption. I can infer you define computation as "those processes which happen in non-biological systems and which do not lead to intelligence." But that would be worse than useless.


What.

Are you saying that "intentionality", whatever you mean by it, can't be implemented by a computational process? Never-ever? Never-ever-ever?


If you don't know what intentionality is, a commonplace term in the philosophy of mind, then I suggest you take some time to better acquaint yourself with the subject matter. This is probably the major blow against any kind of AI fantasy, and you don't do yourself any favors by wading in the shallows and confidently holding to misinformed opinions.

So, one last time. A computer program is a set of formal rules that takes one sequence of uninterpreted symbols and produces another sequence of uninterpreted symbols. Adding more rules and more uninterpreted symbols doesn't magically cause those symbols to be interpreted, and it cannot, by definition.


I don't think that "philosophy of mind" contains anything useful for AI development. Most of what I've come across in the field was worthless wordcel drivel, far divorced from reality and any practical applications.


I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).

With "agency" I just mean the ability to affect the physical world (not some abstract internal property).

Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.

> you're taking a superficial black box view of intelligence

Yes. Human cognition is to me simply an emergent property of our physical brains, and nothing more.


This is all very hand-wavy. You don't address what I've written in the least. My criticisms stand.

Otherwise...

> I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).

What do you mean by "materialism"? Materialism has a precise meaning in metaphysics (briefly, it is the res extensa part of Cartesian dualism with the res cogitans lopped off). This brand of materialism is notorious for being a nonstarter. The problem of qualia is a big one here. Indeed, all of what Cartesian dualism attributes to res cogitans must now be accounted for by res extensa, which is impossible by definition. Materialism, as a metaphysical theory, is stillborn. It can't even explain color (or, as a Cartesian dualist would say, the experience of color).

Others use "materialism" to mean "that which physics studies". But this is circular. What is matter? Where does it begin and end? And if there is matter, what is not matter? Are you simply defining everything to be matter? So if you don't know what matter is, it's a bit odd to put a stake in "matter", as it could very well be made to mean anything, including something that includes the very phenomenon you seek to explain. This is a semantic game, not science.

Assuming something is not interesting. What's interesting is explaining how those assumptions can account for some phenomenon, and we have very good reasons for thinking they cannot account for this one.

> With "agency" I just mean the ability to affect the physical world (not some abstract internal property).

Then you've rendered it meaningless. According to that definition, nearly anything physical can be said to have agency. This is silly equivocation.

> Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.

This is total gibberish. We're not talking about how we might represent or model aspects of a concept in some vector space for some specific purpose or other. That isn't semantic content. You can't sweep the thing you have to explain under the rug and then claim to have accounted for it by presenting a counterfeit.


By "materialism" I mean that human cognition is simply an emergent property of purely physical processes in (mostly) our brains.

All the individual assumptions basically come down to that same point in my view.

1) Human intelligence is entirely explicable in evolutionary terms

What would even be the alternative here? Evolution plots out a clear progression from something multi-cellular (obviously non-intelligent) to us.

So either you need some magical mechanism that inserted "intelligence" at some point in our species' recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").

2) Intelligence strictly biological

Again, this is simply not an option if you stick to materialism in my view. You would need to assume some kind of bio-exclusive magic for this to work.

3) Silicon is somehow intrinsically bound up with computation

I don't understand what you mean by this.

> It can't even explain color

Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?

I simply see no indicator against this flavor of materialism, and everything we've learned about our brains so far points in favor.

Thinking, for us, results in and requires brain activity, and physically messing with our brain's operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, and speech to consciousness itself.

If there were a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or some plausible mechanism at the very least) before entertaining that notion, and I see none.


> By "materialism" I mean that human cognition is simply an emergent property of purely physical processes in (mostly) our brains.

Again, this doesn't say what a "physical process" is, or what isn't a physical process. If "physical process" means "process", then the qualification is vacuous.

> All the individual assumptions basically come down to that same point in my view.

You're committing the fallacy of the undistributed middle. Just because both the brain and computing devices are physical, it doesn't follow that computers are capable of what the brain does. Substitute "computing devices" with "rocks".

> So either you need some magical mechanism that inserted "intelligence" at some point in our species' recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").

How intelligence came about is a separate subject, and I regrettably got sidetracked. It is irrelevant to the subject at hand. (I will say, at the risk of derailing the main discussion again, that we don't have an evolutionary explanation or any physical explanation of human intelligence. But this is a separate topic, as your main error is to assume that the physicality of intelligence entails that computation is the correct paradigm for explaining it.)

> Again, this is simply not an option if you stick to materialism in my view. You would need to assume some kind of bio-exclusive magic for this to work.

This is very difficult to address if you do not define your terms. I still don't know what matter is in your view and how intentionality fits into the picture. You can't just claim things without explanation, and "matter" is notoriously fuzzy. Try to get a physicist to define it and you'll see.

> Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?

I already explained that materialism suffers from issues like the problem of qualia. I took the time to give you the keywords to search for if you are not familiar with the philosophy of mind. In short, if mind is matter, and color doesn't exist in matter, then how can it exist in mind? (Again, this is tangential to the main problem with your argument.)

> Thinking, for us, results in and requires brain activity, and physically messing with our brain's operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, and speech to consciousness itself.

I never said it doesn't involve physical activity. In fact, I even granted you, for the sake of argument, that it is entirely physical to show you the basic error you are making.

> If there were a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or some plausible mechanism at the very least) before entertaining that notion, and I see none.

I don't think you know what metaphysics is. Metaphysics is not some kind of woo. It is the science of being and of what must be true about reality for the observed world to be what and how it is, in the most general sense. So, materialism is a metaphysical theory that claims that all that exists is matter, understood as extension in space (this is what "res extensa" refers to). But materialistic metaphysics is notoriously problematic, and I've given you one of the major problems it suffers from already (indeed, eliminativism was confected by some philosophers as a desperate attempt to save materialism from these paradoxes by making a practice of denying observation in Procrustean fashion).


> Just because both the brain and computing devices are physical, it doesn't follow that computers are capable of what the brain does.

My position is: Physical laws are computable/simulatable. The operation of our brains is explained by physical laws (only-- I assume). Thus, object classification, language processing, reasoning, human-like decision-making/conscious thought, or any other "feature" that our brains are capable of must be achievable via computation as well (and this seems validated by all the partial success we've seen already-- why would human-level object classification be possible on a machine, but not human-level decision-making?).
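To make "simulatable" concrete, a toy sketch (Python, with made-up parameters): discretize a physical law and step it forward, here a leaky integrate-and-fire neuron.

    # Simulating a physical law numerically: forward Euler on the leaky
    # integrate-and-fire equation dv/dt = (-v + I) / tau. All parameters
    # here are illustrative, not fitted to any real neuron.
    def simulate(i_in=1.5, tau=10.0, v_thresh=1.0, dt=0.1, steps=300):
        v, spikes = 0.0, []
        for step in range(steps):
            v += ((-v + i_in) / tau) * dt  # one Euler step of the dynamics
            if v >= v_thresh:              # threshold crossing -> "spike"
                spikes.append(round(step * dt, 1))
                v = 0.0                    # reset after the spike
        return spikes                      # a finer dt only improves accuracy

    print(simulate())                      # spike times (arbitrary units)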

Again: If you want human cognition to be non-replicable on paper, by algorithm, or in silicon, you need some kernel of "magic" somewhere in our brains that influences/directs our thoughts and that cannot be simulated itself. Or our whole "thinking" has to happen completely outside of our brain, and be magically linked with it. There is zero evidence in favor of either of those hypotheses, and plenty of indicators against them. Where would you expect that kernel to hide, and why would you assume that such a thing exists?

From another angle: I expect the whole operation of our mind/brain to be reducible to physics in the exact same way that chemistry (or in turn biology) can be reduced to physics (which admittedly does not mean that that is a good approach to describe or understand it, but that's irrelevant).

I'm not a philosopher, but Eliminativism/Daniel Dennett seem to describe my view well enough.

If I say "qualia" (or "subjective experience") is how your brain reacts to some stimulus, then where exactly is your problem with that view?

> if mind is matter, and color doesn't exist in matter, then how can it exist in mind

"color" perception is just your brains response to a visual stimulus, and it makes a bunch of sense to me that this response seems similar/comparable between similarly trained/wired individuals. It is still unclear to me what your objection to that view is.



