Market research says "Parents want control."

In the journey from the CEO mandate "build a product that gives parents control" to developer implementation, "parents want control" somehow turns into "what parents want is extremely fine-grained controls," which isn't the same thing.

So a bunch of product managers brainstorm a huge list of ways that parents might want "control," hand that off to some developers, and voilà: everything becomes far too complicated for everybody, and the company gets to say it offers "control" while abdicating its stated obligation to give parents the "safe" product they actually expect.
No, I think the CEO mandate goes "Build parental controls," and the PMs all nod and go "No problem." It hits the developers, who say it would take too long to do properly, and the PM goes "Nah, MVP only so we can say we have it and move on." It's also never really touched again, so as features get added, Parental Controls stays a poorly-thought-out, last-minute implementation.
To fix this, it's going to take legislation, so that the financial incentives are actually there.
You’ve explained this in plain and simple language far more directly than the linked study. Score yet another point for the theory that academic papers are deliberately written to be obtuse to laypeople rather than striving for accessibility.
Vote for the party that promises academic grants to people who write 1k-character forum posts for laypeople instead of for other experts in the field.
I don't think the parent post is complaining that academics are the ones writing proposals (as opposed to, say, people with common sense).
Instead, it seems to me that he is complaining that academics write proposals and papers to impress funding committees and journal editors, and to some extent to increase their own clout among their peers, instead of writing to communicate clearly and honestly to their peers, or occasionally to laymen.
And this critique is likely not aimed at academics so much as at the systems and incentives of academia. It's partially on the parties managing grants, who care much more about impact and visibility than about actually moving science forward, which means everyone is scrounging for, or lying about, low-hanging fruit. It's partially on those who set (or rather maintain) the culture at academic institutions of gathering clout through "impactful" publications. And those who run the journals share the blame, defending their moat, playing up "high impact," and aggressively rent-seeking.
Yes, thank you, exactly. It’s a culture and systems issue. Thank you for clarifying a post I wrote in the early morning while waiting for my baby to fall back to sleep!
They have. I don't remember the specifics, but I believe there was a hosting provider that had basically everything in production deleted and had to shut down.
But that just proves the point: if no one in this thread can remember even one example, then (however unfortunate it might be for the users) the easy answer is "no, a security breach is very unlikely to break a company."
At some point we have to be willing to call out, at a societal level, that LLMs have been fundamentally oversold. Responding to "it made up defamatory facts" with "you're using it wrong" is only going to fly for so long.
Yes, I understand that this was not the intended use. But at some point if a consumer product can be abused so badly and is so easy to use outside of its intended purposes, it's a problem for the business to solve and not for the consumer.
Maybe someone else actually made the defamatory fact up, and it was just parroted.
But fundamentally, the reason ChatGPT became so popular compared to incumbents like Google or Wikipedia is that it dispensed with the idea of attributing quotes to sources. Even if 90% of what it says can be attributed, it's by design that it can say novel stuff.
The other side of the coin is that for things that are not novel, it attributes the quote to itself rather than sharing the credit with its sources, which is what made the thing so popular in the first place, as if it were some kind of magic trick.
These are obviously not fixable; they're part of the design. My theory is that the liabilities will end up equal to, if not greater than, the revenue OpenAI recoups, but the liabilities will take much longer to materialize, given not only the length of trials but also the time it takes for case law and even new legislation to develop.
In 10 years, Sama will be fighting to make the thing an NFP again and have the government bail it out of all the lawsuits that it will accrue.
Businesses can't just wave a magic wand and make the models perfect. It's early days with many open questions. As these models are a net positive I think we should focus on mitigating the harms rather than some zero tolerance stance. We shouldn't allow the businesses to be neglectful, but I don't see evidence of that.
> We shouldn't allow the businesses to be neglectful, but I don't see evidence of that.
Calling it "AI", shoving it into many existing workflows as if it's competently answering questions, and generally treating it like an oracle IS being neglectful.
Here on HN we talk about models, and rightfully so. Elsewhere though people talk about AI, which has a different set of assumptions.
It's worth noting too that how we talk about and use AI models is very different from how we talk about other types of models. So maybe it's not surprising people don't understand them as models.
Even if they had a magic wand, they still couldn't make them perfect, because they are, by nature, imperfect statistical machines. That imperfection IS their main feature.
Businesses should be able to not lie. In fact, they should be punished for lying and exaggerating far more often, both socially (being criticised, losing contracts) and legally.
You seem to be missing the obvious point: popularity of a product doesn't ensure the benefit of said product. There are tons of wildly popular products which have extremely negative outcomes for the user and society at large.
Let's take a weaker example: some sugary soda. Tons of people drink sugary sodas. Are they truly a net benefit to society, or a net negative social cost? Pointing out that a product has a high number of users doesn't mean it inherently produces a high amount of positive social outcomes. For a lot of those drinkers the outcomes are incredibly negative, and for a large chunk of society the general outcome is slightly worse. I'm not trying to argue sugary sodas deserve to be completely banned, but it's not a given they're beneficial just because a lot of people bothered to buy them. We can't say Coca-Cola is obviously good for people because it's being bought in massive quantities.
Do the same analysis for smoking cigarettes. A product that had tons of users. Many many hundreds of millions (billions?) of users using it all day every day. Couldn't be bad for them, right? People wouldn't buy something that obviously harms them, right?
AI might not be like cigarettes and sodas, sure. I don't think it is. But just saying "X has Y weekly active users, therefore it must be a net positive" draws a correlation that may or may not exist. If you want to show it's positive for those users, show the positive outcomes, not just a user count.