
>AI is basically text-predict combined with data mining. That’s it. It’s a super-Google that goes into the body of texts and rearranges the words into a very pleasing facsimile of a cogent argument. There’s no “intelligence” behind it, in the sense of a computer actually thinking.

>AI is more or less the same thing — it uses our wonder to convince us of a simulacrum of intelligence when what we are really witnessing is, in a sense, our own childish excitement at a trick of anthropomorphisation

People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.

What's important is if it is useful.

E.g., create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the TV. People would embrace AI like that instead of complaining about it.

If a pundit tried to advise the consumer who wants to avoid ads with "you know, that ad-skipping technology is _just_ fancy linear algebra and there's no _real_ intelligence behind it! You're dumbing down your brain by letting the AI mute the ads automatically instead of doing it yourself"... that's not a compelling argument. The usefulness of blocking ads outweighs any theoretical threshold for real intelligence.

A lot of generative AI is not useful, so people will complain about it by falling back on the "it's not real intelligence" argument.




> People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.

It matters because you have to keep it in mind when judging its output EVERY TIME. You can't trust it to not tell you to poison yourself, or invent people/things that don't exist, or make up stories about people that _do_ exist.


Right. Because people never make statements that are mistaken or even just patently untrue.


Except that telling someone a lie, especially in a professional context, can be completely discrediting, particularly whenever trust is crucial. I work with people who are able to say: "I don't know", "I'm not sure", "Hard to say", "I need to do some extra work to verify this". Working with such people is very efficient because you don't waste time fact-checking their every utterance.

Can they be wrong? Absolutely, but it's relatively rare. Will they decide to just lie to me? Extremely unlikely, given the stakes.


Furthermore, it also matters if we’re replacing existing systems with new ones with lower reliability, e.g. replacing technical documentation written by humans with ones generated by AI.


People can be held accountable for making wrong statements.


And now we have a machine to channel all those mistakes/lies!


But that is not relevant to ad detection. The worst an ad detector can do is produce false positives, e.g. flag an ad inside a scene where a character is watching said ad on TV as part of the story. So your basic point still stands: you have to check whether what it detects is really an ad, but beyond that the use of AI for ad detection is benign and harmless.


If you're having to constantly check the output, it seems like it's not much of a stretch to just do the detecting yourself.


> You can't trust it to not tell you to poison yourself, or invent people/things that don't exist, or make up stories about people that _do_ exist.

My mum was a big believer in homeopathy and Bach flower remedies. She kept sneakily dosing me and my dad with one "for memory". She ended up with Alzheimer's just shy of 20 years younger than her mum did.

I could name a lot of elected officials over the years who made up stories about people; the hard part is picking one sufficiently uncontroversial that nobody will object to the example.

What matters is the rate at which these things happen, given that AI is now sufficiently competent at presenting as human to be a problem for anyone who needs to know they're conversing with a human (job interviews, grading essays, whether that video call is really with a relative who urgently needs an emergency payment or is entirely fake, political propaganda).


You've completely missed the point of the parent comment. It doesn't matter if it occasionally hallucinates, because there are many use cases where that's okay and you can generate enormous value anyway.


I got their point and disagree with it, because it's a highly cherrypicked example used to broadly dismiss a real and valid concern.


I disagree with that. The point is that there are good and harmless uses of AI. The possibility of dangerous uses does not necessarily warrant the complete dismissal of AI, unless complete dismissal is the only way to protect us from harm caused by its use.


Despite it being cherrypicked, it is a valid case. Hence it would seem you're knowingly disagreeing with a fact because it doesn't fit your purpose, which is disingenuous.


Hallucinate === Lie


Lying implies intent to deceive. An LLM has no intent.


A lie is a lie is a lie. We don't need to make excuses for computers.


> People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.

> What's important is if it is useful.

Except the usefulness expectations almost completely rely on the intelligence (it's right there in the name)! Without it you wouldn't have so much hype & money in this topic. So the complaint is rightfully directed at the core selling point.


Why not just... use the product and see if it's useful? What does it matter what it's called, what does it matter if it's hyped or not, what does it matter if investors are or aren't throwing too much money into it?

You, individually, can try out the product and see if it's useful for you. Many people have done so and found it extremely useful for themselves (myself included).


Strange question: why would you "just" ignore all the information out there? But then why would you pick this specific product among a million others? You wouldn't even know about it without the hype!


> Strange question: why would you "just" ignore all the information out there?

Some of the information out there is many, many people saying they do use GenAI products and find them valuable. You shouldn't ignore that info either.

> But then why would you pick this specific product among a million others? You wouldn't even know about it without the hype!

That's true of everything, to different extents. You wouldn't have heard of any product if someone hadn't told you about it, whether by word of mouth, advertising, or you seeking it out in some way. I don't see how that possibly matters.

And yes, the fact that there's a lot of hype - a lot of people very excited and saying this is something that is super useful for them - is absolutely the reason most people have heard about it. That's good! A lot of people finding value in a product and telling others is pretty good evidence.


> You shouldn't ignore that info either

I'm not the one suggesting we ignore the core hyped value proposition and "just try it yourself", so where am I ignoring anything?!

> to different extents

> I don't see how that possibly matters.

Sure, if you ignore the difference, you will not see what's different. The "extent" is key; few things command so much attention!


Your analysis assumes that we can readily know if it’s useful. But we often can’t.

Consider these absurdities: “I don’t need to test this new drug before releasing it to market, as long as it is safe and effective!”

Or “I don’t need to worry about phishing scams. I can click on any link as long as it is safe.”

Or more to the point: “This food tastes good, so it must be good for me. There’s no need for hygiene and food safety laws.”

I don’t trust LLMs, because I have tested them, and continue to test them, and they are patently unreliable. They can still be useful in a momentary, self-contained way. But my studies have led me to use them only for answers that are easily verified.


> People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.

It matters because they called it artificial intelligence. The point I see people bring up, though, boils down to whether it's intelligent at all, not fake vs. real. Debating whether "artificial" and "fake" are synonymous would be an odd stance to take; it's the "intelligence" part that is the sticking point.

Beyond just the name, it matters because the risk of it going wrong is different. If it is a useful tool but not intelligent, the risk is mainly in how people will use it. That's no different than any technology.

If it is intelligent, the risk is in how it will use itself. It depends on how you want to define intelligence but, at least in my opinion, the ability to determine your own goals and desires is a prerequisite for intelligence. From that view, we can only hope that an AI is aligned with us...and given that we have all but abandoned both the alignment problem and the interpretability problem that doesn't seem likely.


I don't think we use it differently based on the label "intelligent" or "not intelligent". I would leave this discussion to (armchair) philosophers and focus on the utility and risks, which are exactly the same regardless of the label we put on the box, because the products in the box exist and act the same way. An automaton with access to the red button can kill humanity just the same as an AI with access to the red button; the problem is giving it access to the red button, which AI fans often seem all too happy to do.


Words matter, though. If one wants to call it intelligence, we first have to define what that means and then use that definition. Plenty of people have tried to define it and there isn't one answer, but I've never seen anyone argue that intelligence boils down to computation alone. To call these tools intelligent means there is more to them than computation, and that's an important difference.

The risks aren't the same here. Computation, if we want to say these "AI" tools are nothing more than complex math, is only as risky as the person using it.

Intelligence is potentially more dangerous, regardless of what traits you may think is required in addition to computation to make it intelligent.

We don't actually know how these LLMs work at inference time, and we can't analyze the trained dataset to understand why one would give a particular answer. That's fairly benign under the "it's just computation" view, but that black box is full of unknown risk under any definition of intelligence, because we don't actually understand how the thing works, why it does what it does, or what it will do next.


An unhinged man with a gun is significantly riskier than an unhinged man without a gun, so I'm not sure I get your argument. But if you mean it in the philosophical/categorization sense, I'll just move on, because I don't have the knowledge or the interest to label big things while they're flying toward my head; I'd rather duck first and see afterwards what they were.


Right, an unhinged man with a gun is worse than one without. A gun that can itself become unhinged is likely worse still, or at least more of an unknown.

When intelligence comes into the picture, the question is what the intelligent thing will do, not how a person will use it. If it's just a tool that a person can use, it almost certainly isn't itself intelligent at all.


> E.g., create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the TV.

We had that in 1990s VCRs with no "AI" required. It also fast-forwarded through the commercials during recorded programs.

It worked pretty good on my Panasonic.


>We had that in 1990s VCRs with no "AI" required. It also fast-forwarded through the commercials during recorded programs.

Yes, I owned several of those Panasonic VCRs with the "Commercial Advance™" feature, and also had a Hitachi VCR with the same feature licensed from ADLE. The heuristics used a combination of detecting fade-to-black screen transitions and the higher audio levels of loud commercials. The criteria were simplistic but "good enough".
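
Those heuristics are simple enough to sketch. Here's a toy Python version of the idea; the thresholds, data shapes, and function names are my own guesses for illustration, not ADLE's actual parameters:

    import numpy as np

    def is_fade_to_black(frame, luma_threshold=16):
        # A frame whose average luma is near zero reads as a black transition.
        return frame.mean() < luma_threshold

    def is_loud(audio_chunk, rms_threshold=0.3):
        # Commercials were often mastered louder; flag chunks with a high RMS level.
        rms = np.sqrt(np.mean(audio_chunk.astype(float) ** 2))
        return rms > rms_threshold

    def mark_ad_boundaries(frames, audio_chunks):
        # Guess a segment boundary wherever a black frame coincides with loud
        # audio; presumably the real systems also leaned on standard ad-block
        # lengths (15/30/60 seconds) to confirm.
        return [i for i, (f, a) in enumerate(zip(frames, audio_chunks))
                if is_fade_to_black(f) and is_loud(a)]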

The problem is that the VCR required two separate passes over the VHS tape: the first pass recorded the video, then the VCR automatically rewound the tape, and the second pass played it back to itself to analyze and mark the ad segments.

That approach does not work for live feeds of sports broadcasts. To use the Panasonic VCR approach, one would have to "record" the game first, which defeats the purpose of watching it live. To instantly block live ads without any waiting, you need technology with more "intelligence" or "smarts" or whatever people want to call it.

My point is that if you hype up a "live-tv-ads-skipping device" with "Unicorn Fairy Dust Technology", people won't complain that the company called it "unicorn whatever" as long as it actually works and improves their lives. It's when such a device does not work (e.g. the Apple Intelligence fiasco) that the meta-analysis and lectures begin, such as "you know, that device doesn't actually have any horses inside of it with a horn coming out of its head".


> That approach does not work for live feeds of sports broadcasts

You are correct. However, in your rush to defend your position, you failed to read what I wrote.

The VCR had both the ability to skip recorded commercials and to mute live ones.

Read the comment again.


> What's important is if it is useful.

Precisely why I don't use AI: it is not. I can't trust it, except for the occasional "research this for me" followed by a thorough review of the source material, which is something I am already quite adept at doing myself, so there's no value here for me.


Is search, even legacy search like a table of contents or the Dewey decimal system not useful? It may not point you to what you want either.

LLMs are an OK iteration on search, with upsides and downsides.

One of the upsides is better contextual hinting from the user's input. One of the downsides is that it also makes it trivial to spew out so much bullshit content that I doubt it will be able to train on most of the public internet for much longer.


> create a ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the tv.

OK, so there are a couple of approaches to this, e.g. based on audio signal level, constant audio patterns, or constant video patterns, and you don't really need to involve machine learning here. You can call it automation and it is perfectly fine.
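
To illustrate the audio-level variant, here's a toy Python sketch; the window size and ratio are arbitrary placeholder values, and a real device would still need actual capture and mute plumbing around it:

    import numpy as np

    def rms(chunk):
        return float(np.sqrt(np.mean(chunk.astype(float) ** 2)))

    def mute_decisions(chunks, window=30, ratio=1.5):
        # Mute whenever the current chunk is much louder than the rolling
        # average of recent chunks -- plain thresholding, no machine learning.
        history, decisions = [], []
        for chunk in chunks:
            level = rms(chunk)
            baseline = sum(history) / len(history) if history else level
            decisions.append(level > baseline * ratio)
            history.append(level)
            if len(history) > window:
                history.pop(0)
        return decisions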

People have been using the term "AI" in so many contexts for so many things that it's almost meaningless because it's so vague.


This reminds me of a very narrow set of people who consume meat. I consume meat as well, but I attempt to balance it despite knowing no animal can consent. I will use a meat substitute and I think there can be a world where synthetic meats and other changes long term displace that suffering.

But, there is a very small set of meat consumers that seem to "need" that element of the product. They want something to have suffered or died and removing that component of it is not desirable for them.

Why would anyone want these kinds of "real" things? If suffering can be reduced, who cares?!


The problem with synthetic meat is the same problem as with everything synthetic: we don't know if it is safe. It took decades to find out that some artificial sweeteners cause cancer, for example.

For me the conclusion is that everything synthetic is more likely to be dangerous in some form than it is to be helpful. I'd rather give up meat than eat synthetic meat. I also avoid synthetic materials in clothing, etc.

I am all for reducing the suffering of animals, but health and the environment come first.





