There's a difference between working on something until it's a viable, usable product vs. throwing out trash and trying to sell it as gold. It's the difference between Apple developing self-driving cars in secret because they want to get it right vs. Tesla doing it with the public, on public roads, and killing people.
In its current state, Bing ChatGPT should not be near any end users. Imagine it going on an unhinged depressive rant when a kid asks where their favorite movie is playing...
Maybe one day it will be usable tech, but like self-driving cars, I'm skeptical. There are way too many people wrapped up in the hype of this tech. It feels like self-driving tech circa 2016 all over again.
Imagine it going on a rant when someone's kid is asking roundabout questions about depression or SA, and the AI tells them, in so many words, to kill themselves.
I have to say, I'm really enjoying this future where we shit on the AIs for being too human, and having depressive episodes.
This is a timeline I wouldn't have envisioned, and am finding it delightful how humans want to have it both ways. "AIs can't feel, ML is junk", and "AIs feel too much, ML is junk". Amazing.
I think you're mixing up concerns from different contexts. AI as a generalized goal, where there are entities that we recognize as "like us" in quality of experience, yes, we would expect them to have something like our emotions. AI as a tool, like this Bing search, we want it to just do its job.
Really, though, this is the same standard that we apply to fellow humans. An acquaintance who expresses no emotion is "robotic" and maybe even "inhuman". But the person at the ticket counter going on about their feelings instead of answering your queries would also (rightly) be criticized.
It's all the same thing: choosing appropriate behavior for the circumstance is the expectation for a mature intelligent being.
Well, that's exactly the point: we went from "AIs aren't even intelligent beings" to "AIs aren't even mature" without recognizing the monumental shift in capability. We just keep yelling that they aren't "good enough" while the goalposts for "enough" keep moving.
I'm glad to see this comment. I'm reading through all the naysaying in this post, mystified. Six months ago these complaints would have read like science fiction, because what chatbots could do at the time was absolutely nothing like what we see today.
No, the goalposts are different according to the task. For example, Microsoft themselves set the goalposts for Bing at "helpfully responds to web search queries".
Who is "we"? I suspect that you're looking at different groups of people with different concerns and thinking that they're all one group of people who can't decide what their concerns are.