
Intentionally or not, you are presenting a false equivalency.

I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.



One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.


It is unethical to me to provide an accessibility tool that lies.


LLMs do not lie. That implies agency and intentionality that they do not have.

LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe could not possibly exist. So take it or leave it.


There's no way to know in advance when "approximately right" is going to be good enough, and no way to know how accurate the thing is before engaging with it, so you have to babysit it... "Can do things" is carrying a lot of load in your statement. It builds you a car with no brakes, and when you tell it not to do that, it builds you one with no accelerator either.


>That implies agency and intentionality that they do not have.

No, but the companies behind them have agency. LLMs lie, and they only get fixed when the companies are sued. Close enough.


So provide one that "makes a mistake" instead.


Sure https://www.nbcnews.com/tech/tech-news/man-asked-chatgpt-cut...

Not going to go back and forth on this as you inevitably try to nitpick "oh but the chatbot didn't say to do that".


If it were actually being given away as an accessibility tool, then I would agree with you.

It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.


1. Intellectual property is a fiction that should not exist.

2. Open source models exist.


Well yes on both counts.

The only thing worse than intellectual property is a special exception for people rich enough to use it.

I have hope for open source models; I use them.


Based.


How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around for quite some time, will continue to be, and that much of it will even be with LLMs.


You have reasonably available context here. "This year" seems more than enough on its own.

I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.


>Consider my comment a reminder that ethical use of AI has been around for quite some

You can be standing in a swamp and say "but my corner is clean". This is the rotten barrel metaphor in reverse: you're claiming your one apple is somehow not rotten despite the fermenting barrel it came from.


Putting aside the "useful" comment, since many find LLMs useful: let me guess, you're the one deciding whether it's ethical or not?



