
> you asked a question into a black box, received a symbolic-seeming response, evaluated its truth post hoc, and interpreted its relevance

So any and all human communication is divination in your book?

I think your point is pretty silly. You're falling into a common trap of starting with the premise "I don't like AI", and then working backwards from that to pontification.



Hacker News deserves a stronger counterargument than “this is silly.”

My original comment is making a structural point, not a mystical one. It's not saying that using AI feels like praying to a god; it's saying the interaction pattern mirrors forms of ritualized inquiry: question → symbolic output → interpretive response.

You can disagree with the framing, but dismissing it as "I don’t like AI so I’m going to pontificate" sidesteps the actual claim. There's a meaningful difference between saying "this tool gives me answers" and recognizing that the process by which we derive meaning from the output involves human projection and interpretation, just like divination historically did.

This kind of analogy isn't an attack on AI. It’s an attempt to understand the human-AI relationship in cultural terms. That's worth engaging with, even if you think the metaphor fails.


> Hacker News deserves a stronger counterargument than “this is silly.”

Their counterargument is that said structural definition is overly broad, to the point of including any and all forms of symbolic communication (which is to say, all of them). Because of that, your argument based on it doesn't really say anything at all about AI or divination, yet still seems 'deep' and mystical and wise. But this is a seeming only, and for that reason it is silly.

By painting all things with the same brush, you lose the ability to distinguish between anything. Calling all communication divination (through your structural metaphor), and then using cached intuitions about 'the thing which used to be called divination, when it was a limited subset of the whole', is silly. You're not talking about that which used to be called divination, because you redefined divination to include all symbolic communication.

Thus your argument leaks, through a side channel (the redefined word), intuitions about how that-which-was-divination generally behaves that do not necessarily apply. This is silly.

That is to say, if you want to talk about the interpretative nature of interaction with AI, that is fairly straightforward to show, and I don't think anyone would fight you on it. But divination brings baggage with it that you haven't shown to apply to AI. In point of fact, there are many ways in which AI is not at all like divination. The structural approach broadens too far, too fast, with not enough re-examination of priors, becoming so broad that it encompasses any kind of communication at all.

With all of that said, there seems to be a strong bent in your rhetoric towards calling it divination anyway, which suggests reasoning from that conclusion, and that the structural approach is but a blunt instrument to force AI into a divination-shaped hole in order to make 'poignant and wise' commentary on it.

> "I don’t like AI so I’m going to pontificate" sidesteps the actual claim

What claim? As per the above, a maximally broad definition says nothing about AI that is not also true of everything, and only seems to be a claim because it inherits intuitions from a redefined term.

> difference between saying "this tool gives me answers" and recognizing that the process by which we derive meaning from the output involves human projection and interpretation, just like divination historically did

Sure, and all communication requires interpretation. That doesn't make all communication divination. Divination implies interpreting something that is seen to be causally disentangled from the subject. The layout of these bones reveals your destiny. The level of mercury in this thermometer reveals the temperature. The fair die is cast, and I will win big. The loaded die is cast, and I will win big. Spot the difference. It's not structural.

That implication of essential incoherence is what you're saying-without-saying about AI; it is the 'cultural wisdom and poignancy' feedstock of your arguments, smuggled in via the vehicle of structural metaphor along oblique angles that should by rights not permit said implication. Yet people will of course be generally uncareful and wave those intuitions through - presuming they are wrapped in appropriately philosophical guise - which is why this line of reasoning inspires such confusion.

In summary, I see a few ways to resolve your arguments coherently:

1. Keep the structural metaphor, but discard cached intuitions about what it means for something to be divination (w.r.t. divination being generally wrong/bad, and the specifics of how and why). This results in an argument that makes no claims or particular distinctions about anything, really. It is what you get if you just follow the logic without cache-invalidation errors.

2. Discard the structural metaphor, and thus disregard the cached intuitions as well. There is little engagement along the human-AI cultural axis that isn't also human-human. AI use is interpretative, but so is all communication. Functionally the same as 1.

3. Keep the structural metaphor and also demonstrate how AI are not reliably causally entwined with reality along boundaries obvious to humans (hard, because they plainly and obviously are, as demonstrable empirically in myriad ways), at which point you could go on about how using AI is divination, because at that point you could actually say so with confidence.


You're misunderstanding the point of structural analysis. Comparing AI to divination isn't about making everything equivalent, but about highlighting specific shared structures that reveal how humans interact with these systems. The fact that this comparison can be extended to other domains doesn't make it meaningless.

The issue isn't "cached intuitions" about divination, but rather that you're reading the comparison too literally. It's not about importing every historical association, but about identifying specific parallels that shed light on user behavior and expectations.

Your proposed "resolutions" are based on a false dichotomy between total equivalence and total abandonment of comparison. Structural analysis can be useful even if it's not a perfect fit. The comparison isn't about labeling AI as "divination" in the classical sense, but about understanding the interpretive practices involved in human-AI interaction.

You're sidestepping the actual insight here, which is that humans tend to project meaning onto ambiguous outputs from systems they perceive as having special insight or authority. That's a meaningful observation, regardless of whether AI is "causally disentangled from reality" or not.


> humans tend to project meaning onto ambiguous outputs from systems they perceive as having special insight or authority

This applies just as well to other humans as it does to AI. It's overly broad to the point of meaninglessness.

The insight doesn't illuminate.


> It's not about importing every historical association, but about identifying specific parallels that shed light on user behavior and expectations.

Indeed, I hold that driving readers to intuit one specific parallel to divination and apply it to AI is the goal of the comparison, and why it is so jealously guarded, as without it any substance evaporates.

The thermometer has well-founded authority to relay the temperature; the bones have no well-founded authority to relay my fate. The insight, such as you call it, is only illuminative if AI is more like the latter than the former.

This mode of analysis (the structural) takes no valid step in either direction, only seeding the ground with a trap for readers to stumble into (the aforementioned propensity to not clear caches).

> That's a meaningful observation, regardless of whether AI is "causally disentangled from reality" or not.

If the authority is well-founded (i.e., is causally entangled in the way I described), the observation is meaningless, as all communication is interpretative in this sense.

The structural approach only serves as rhetorical sleight of hand to smuggle in a sense of not-well-founded authority from divination in general, and apply it to AI. But the same path opens to all communication, so what can it reveal in truth? In a word, nothing.


> That's a meaningful observation, regardless of whether AI is "causally disentangled from reality" or not.

And regardless of how many words someone uses in their failed attempt at a "gotcha" that nobody else is playing. There are certainly some folks acting silly here, and it's not the vast majority of us who have no problem interpreting and engaging with the structural analysis.


> So any and all human communication is divination in your book?

Words from an AI are just words.

Words in a human brain have more or less "stuff" attached to them (depending on the individual's experiences): from direct sensory inputs to complex networks of experiences and thoughts. Human thought is mainly not based on words. Language is an add-on. (People without language - never learned, temporarily disabled by drugs, or lost to injury, as in transient or permanent aphasia - are still consciously thinking people.)

Words in a human brain are an expression of deeper structure in the brain.

Words from an AI have nothing behind them but word statistics, devoid of any real-world grounding: just words based on words.

Random example sentence: "The company needs to expand into a new country's market."

When an AI writes this, there is no real world meaning behind it whatsoever.

When a fresh-out-of-college person writes this, it's based on some shallow real-world experience and lots of hearsay.

When an experienced person who has actually done such an expansion in the past says it, a huge network of their experience with people and impressions is behind it: a feeling for where the difficulties lie and what to expect IRL, with a lot of detail grounded in real-world experience. When such a person expands on the original statement, chances are highest that any follow-up statements will also represent real life quite well, because they are drawn not from text analysis but from those deeper structures created by and during the process of actually performing and experiencing the task.

But the words can be exactly the same. Words from a human can be of the same (low) quality as those of an AI if they just parrot something they read or heard somewhere, although even then the words will have more depth than the "zero" of AI words, because even the stupidest person has some degree of actual real life forming their neural network, and not solely analysis of others' texts.
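
To make "just words based on words" concrete, here is a deliberately crude sketch of my own (a bigram chain, nothing like a modern LLM's internals, but the same in spirit: its only input is text predicting text; the corpus is made up for illustration):

    # Toy "words based on words" generator: a bigram chain trained on
    # nothing but text, with zero grounding in the world it describes.
    import random
    from collections import defaultdict

    corpus = ("the company needs to expand into a new market "
              "the company needs capital to expand").split()

    # Record which word follows which - the entirety of its "knowledge".
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, n=8):
        word, out = start, [start]
        for _ in range(n):
            if word not in follows:
                break
            word = random.choice(follows[word])  # sample from word statistics
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the company needs to expand into a new market"

It produces plausible-sounding sentences about market expansion while "knowing" nothing about markets, which is exactly the gap I mean.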


I can only agree with you. And I find it disturbing that every time someone points out what you just said, the counterargument is to reduce human experience and human consciousness to the shallowest possible interpretation so they can then say, "look, it's the same as what the machine does".


I think it's because the brain is simply a set of chemical and electrical interactions. I think some believe that when we understand how the brain works, it won't be some "soulful", otherworldly explanation. It will be a science-based explanation that will seem very unsatisfying to those who think of us as more than complex machines. The human brain is different from LLMs, but I think we will eventually say, "hey, we can make a machine very similar".


It looks like you did exactly what I described in my parent comment, so it doesn't add anything of substance. Let's agree to disagree.


The logic is that you preemptively shut down dissenting opinions, so any comment that dissents is necessarily not adding anything of substance. They made good points and you simply don't want to discuss them; that does not mean the other commenter did not add substance and nuance to the discussion.


Nope. I understood the counterargument the first 513 times; there's no need to repeat it.


Why bring up the argument then?


The deconstruction trick is a bit like whataboutism. It sort of works on a shallow level, but it's a cheap shot. You can say "this is just a collection of bytes and matrix multiplications". If it's humans: "it's just simple neurons firing and hormones". Even if it's some object: "what's the big deal, it's just a bunch of molecules and atoms".


> People without language - never learned, or sometimes temporarily disabled due to drugs, or permanently due to injury, transient or permanent aphasia - are still consciously thinking people.

There are 40 definitions of the word "consciousness".

For the definitions pertaining to an inner world, nobody can tell if anyone besides themselves (regardless of whether they speak or move) is conscious, and none of us can prove to anyone else the validity of our own claims to possess it.

When I dream, am I conscious in that moment, or do I create a memory that my consciousness replays when I wake?

> Words from an AI have nothing behind them but word statistics, devoid of any real world, just words based on words.

> […]

> When a fresh out of college person writes this it's based on some shallow real world experience, and lots of hearsay.

My required reading at school included "Dulce Et Decorum Est" by Wilfred Owen.

The horrors of being gassed during trench warfare were alien to us on the peaceful south coast of the UK in 1999/2000.

AI are limited, but what you're describing here is the "book learning" vs. "street smarts" dichotomy rather than their actual weaknesses.


> Human thought is mainly not based on words. Language is an add-on.

What does 'mainly' mean here?

Language is so very human-specific that human newborns already have the structures for it, while non-human newborns do not.



