
> they won't happen because you personally haven't seen any evidence of it yet.

Well, when talking about extraordinary claims, yes I require extraordinary evidence.

> what do those things add up to?

Apparently nothing, because we aren't seeing significant harm from any of this stuff yet, even for the non-magic scenarios.

> we do in fact have scams that are already going on.

Alright, and how much damage are those scams causing? Apparently it's not that significant. Like I said, if the money lost to these scams doubles, then yes, that is something to look at.

> that's just going to get more and more accessible and cheap and powerful

Sure. They will get incrementally more powerful over time. In a way that we can measure. And then we can take action once we measure there is a small problem before it becomes a big problem.

But if we can't measure these scams getting more significant and causing more actual damage right now, then it's not a problem.

> you want to touch the sun to prove it exists

No, actually. What I want is for the much, much easier-to-prove problems to become real. Long before nuke hacking happens, we will see scams. But we aren't seeing significant problems from those yet.

To use the sun analogy, it would be like worrying about someone building a rocket to fly into the sun before we had even entered the industrial revolution or could sail across the ocean.

Maybe there is some far-off future where magic AI is real. But before worrying about situations that are a century away, yes, I require evidence of the easy situations happening in real life, like scammers causing significant economic damage.

If the easy stuff isn't causing issues yet, then there isn't a need to even think about the magic stuff.



Your repeated use of the word "magic" doesn't really hold water. What GPT-3+ does would have seemed like magic even 10 years ago, never mind Sora.

I asked you what would convince you. You said:

>I have been quite clear about what evidence I require. Show existing capabilities and show what harm could be caused if it incrementally gets better in that category

So I very clearly described a multitude of things that fit this description: existing capabilities and how they could feasibly be used to cause massive damage, even without AGI.

Then, without finding a single hole or counter, you simply raised your bar by saying you need to see evidence of it actually happening.

Then I gave you evidence of it actually happening: highly convincing, complex WhatsApp group scams now exist that didn't before.

And then you raised the bar again and said that they need to double or increase in frequency.

Besides the fact that that kind of evidence is not exactly easy to measure or accurately report, you've set things up so that almost nothing will convince you. I pinned you down to a standard, and you just raise the bar whenever it's hit.

I think subconsciously you just don't want to worry about it. That's fine, and I'm sure it's better for your mental health, but it's not worth debating any more.


> So I very clearly described a multitude of things that fit this description

No, we aren't seeing this damage, though.

That's what would convince me.

Existing harm. The amount of money that people are losing to scams doubling.

That's a measurable metric. I am not talking about vague descriptions of what you think AI does.

Instead, I am referencing actual evidence of real world harm, that current authorities are saying is happening.

> said that they need to double or increase in frequency

By increase in frequency, I mean that it has to be measurable that AI is causing an increase in existing harm.

I.e., if scams have happened for a decade, 10 billion dollars (a random number) is lost every year, and in 2023 the money lost only barely increased, then that is not proof that AI is causing harm.

I am asking for measurable evidence that AI is causing significant damage, beyond a problem that already existed. If the amount of money lost stays the same, then AI isn't causing measurable damage.

> I pinned you down to a standard

No, you misinterpreted the standard such that you are now claiming the harm caused by AI can't even be measured.

Yes, I demand actual measurable harm.

As determined by, say, government statistics.

Yes, the government measures how much money is generally lost to scams.

> you just don't want to worry about it

A much more likely situation is that you have zero measurable examples of harm, so you look for excuses for why you can't show it.

Problems that exist can be measured.

This isn't some new thing here.

We don't have to invent excuses to avoid gathering evidence.

If the government does a report and shows how AI is causing all this harm, then I'll listen to them.

But it hasn't happened yet. There is no government report saying that, I don't know, 50 billion dollars in harm is being caused by AI and therefore we should do something about it.

Yes, people can measure harm.



