Watch this week. Amazon's cloud growth has been terrible (Google and Microsoft remain >30%). Amazon has basically no good offerings for AI, which is where GCP is coming to eat their lunch. Anthropic moving to TPUs for inference is a big, big signal.
What the fuck does this even mean? How do you test or ensure it? Because based on actual outcomes, ChatGPT is 0-1 for preventing suicides (going as far as to outright encourage one).
If you're going to make the sample size one and use the most egregious example, you can make pretty much anything that has ever been born or built look terrible. Given there are millions of people using ChatGPT and others for therapy every week, maybe even every day, citing a record of 0-1 is pretty ridiculous.
To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.
Since, as you say, this utilitarian view is rather common, perhaps it would be good to show _why_ it is problematic by presenting a counterargument.
The basic premise underlying GP's statements is that, although the technology is not perfect, we should use it in such a way that it maximizes the well-being of the largest number of people, even if that comes at the expense of a few.
But therein lies a problem: we cannot really measure well-being (or utility). This becomes obvious if you look at individuals instead of the aggregate: imagine LLM therapy becomes widespread, and a famous high-profile person and your (not famous) daughter both end up in "the few" for whom LLM therapy goes terribly wrong, and both commit suicide. The loss of the famous person will make thousands (perhaps millions) of people a bit sad, and the loss of your daughter will cause you unimaginable pain. Which loss is greater? Can they even be compared? And how many successful LLM therapies are enough to compensate for either one?
Unmeasurable well-being then makes these moral calculations at best inexact and at worst completely meaningless. And if they are truly meaningless, how can they inform your LLM therapy policy decisions?
Suppose for the sake of argument we accept the above, and there is a way to measure well-being. Then would it be just? Justice is a fuzzy concept, but imagine we reverse the example above: many people lose their lives because of bad LLM therapy, but one very famous person in the entertainment industry is saved by LLM therapy. Let's suppose then that this famous person's well-being, plus the millions of spectators' improved well-being (through their entertainment), is worth enough to compensate for the people who died.
This means saving a famous funny person justifies the death of many. This does not feel just, does it?
There is a vast amount of literature on this topic (criticisms of utilitarianism).
We have no problem doing this in other areas. Airline safety, for example, is analyzed quantitatively by assigning a monetary value to an individual human life and then running the numbers. If some new safety equipment costs more money than the value of the lives it would save, it's not used. If a rule would save lives in one way but cost more lives in another way, it's not enacted. A famous example of this is the rule for lap infants. Requiring proper child seats for infants on airliners would improve safety and save lives. It also increases cost and hassle for families with infants, which would cause some of those families to choose driving over flying for their travel. Driving is much more dangerous and this would cost lives. The FAA studied this and determined that requiring child seats would be a net negative because of this, and that's why it's not mandated.
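To make that kind of calculation concrete, here is a toy sketch of the cost-benefit comparison being described. All of the dollar figures and casualty counts below are invented for illustration; they are not the FAA's actual numbers, just the shape of the reasoning.

```python
# Toy sketch of a value-of-statistical-life cost-benefit comparison.
# All numbers are hypothetical, chosen only to illustrate the structure.

VALUE_OF_STATISTICAL_LIFE = 10_000_000  # hypothetical dollars per life


def net_benefit(lives_saved_in_air: float,
                lives_lost_from_diversion: float,
                compliance_cost: float) -> float:
    """Net benefit of a proposed rule: value of lives saved in the air,
    minus the value of lives lost when families divert to riskier driving,
    minus the direct cost of complying with the rule."""
    benefit = lives_saved_in_air * VALUE_OF_STATISTICAL_LIFE
    cost = (lives_lost_from_diversion * VALUE_OF_STATISTICAL_LIFE
            + compliance_cost)
    return benefit - cost


# Hypothetical child-seat mandate: a few lives saved aloft, more lost
# on the highways, plus extra ticket costs for families.
print(net_benefit(lives_saved_in_air=5,
                  lives_lost_from_diversion=12,
                  compliance_cost=200_000_000))  # negative => rule not enacted
```

A negative result is the quantitative version of "the rule would cost more lives and money than it saves," which is the logic attributed to the FAA above.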
There's no need to overcomplicate it. Assume each life has equal value and proceed from there.
> Yeah this isn’t how any of this works and you’re deluding yourself.
I am not offended (at all). But you're dismissing my (continued) positive experience with "you're deluding yourself." How do you know? It would be even more unfair to people who benefit more than I do, and I can easily imagine that's not a small set of people.
> Also incredible how you framed improving your mental health as a consequence of a (pseudo) technical skill set.
It's not incredible at all. If you're lost in a jungle with predators, a marksman might reach for their gun. A runner might just rely on running. I am just using skills I'm good at.
It sounds like you’re feeling down. Why don’t you pop a couple Xanax(tm) and shop on Amazon for a while, that always makes you feel better. Would you like me to add some Xanax(tm) to your shopping cart to help you get started?
Set an alarm on your phone for when you should take your meds. Snooze if you must, but don't turn off/accept the alarm until you take them.
Put daily meds in a cheap plastic pillbox labelled Sunday-Saturday (which you refill weekly). The box will help you notice if you skipped a day or can't remember whether you took them today. Seeing untaken pills from past days also alerts you if/that your "remember-to-take-them" system is broken and you need to make conscious adjustments to it.
You know, usually it’s positive claims which are supposed to be substantiated, such as the claim that “LLMs can be good at therapy”. Holy shit, this thread is insane.
You don't seem to understand how burden of proof works.
My claim that LLMs can do effective therapeutic things is a positive claim. My report of my wife's experience is evidence. My example of something it has done for her is something that other people, who have experienced LLMs, can sanity check and decide whether they think this is possible.
You responded by saying that it is categorically impossible for this to be true. Statements of impossibility are *ALSO* positive claims. You have provided no evidence for your claim. You have failed to meet the burden of proof for your position. (You have also failed to clarify exactly what you consider impossible - I suspect that you are responding to something other than what I actually said.)
This is doubly true given the documented effectiveness of tools like https://www.rosebud.app/. Does it have very significant limitations? Yes. But does it deliver an experience that helps a lot of people's mental health? Also, yes. In fact that app is recommended by many therapists as a complement to therapy.
But is it a replacement for therapy? Absolutely not! As they themselves point out in https://www.rosebud.app/care, LLMs consistently miss important things that a human therapist should be expected to catch. With the right prompts, LLMs are good at helping people learn and internalize positive mental health skills. But that kind of use case only covers some of the things that therapists do for you.
So LLMs can and do do effective therapeutic things when prompted correctly. But they are not a replacement for therapy. And, of course, an unprompted LLM is unlikely, on its own, to do the potentially helpful things that it could.
No, it is evidence. It is evidence that can be questioned and debated, but it is still evidence.
Second, you misrepresent. The therapists I have heard recommend Rosebud were not paid to do so. They recommend it because they had seen it be helpful.
Furthermore you have still not clarified what it is you think is impossible, or provided evidence that it is impossible. Claims of impossibility are positive assertions, and require evidence.
It’s amusing how “ads” is seen as an obvious way to make profit for OAI as if Google’s (especially) and Meta’s ads businesses aren’t some of the most sophisticated machines on the planet.
Three generations of Twitter leadership couldn’t make ads on that platform profitable and that exposes far more useful user specific information than ChatGPT.
There's an absolutely massive disconnect between the technology Sam Altman is presenting in interviews and what is actually available. Like, they're going to create an AI that will design fusion power plants, but right now they can't turn a profit on a technology that millions of people actually use in their day-to-day work? Can you sell enough ads to carry you through to the fusion-capable AI?
More and more OpenAI is drawing parallels to the Danish scandal of IT Factory. Self-proclaimed world leading innovation and technology in the front, financial sorcery in the back.
If they really believe their AI is going to be so great, I guess they can just ask it for a business model when it gets there. So their lack of business model is at least self-consistent.
That is more or less their actual plan. They ignore or want us to ignore that the technology is commoditising so fast that even if it is great, they won't have enough of an advantage for this to provide an edge for more than a matter of months. Just as Microsoft and anyone betting on AI data centre rollouts want us to ignore that the equipment they are rolling out will be functionally inadequate to support new models in far less time than they can make money to offset the cost; the only part of this capital expenditure that will provide lasting value is the building/power/cooling infrastructure, and probably not all of that.
It's a giant money pit, funding a bunch of people who are not long off the crypto grift train if they are at all.
The LLM space is so weird. On the one hand they are spectacularly amazing tools I use daily to help write code, proofread various documents, understand my Home Assistant configuration, and occasionally reflect on parenting advice. On the other hand, they are the product of massive tech oligarchs, require $$$$ hardware, are dumber than a box of rocks at times, and all the stuff you said. Oh yeah, and it definitely has a whiff of crypto grift all over it, yet unlike crypto it actually is useful and produces things of value.
Like, where is this tech headed? Is it always going to be something that can only be run economically off shared hardware in a data center, or is the day I can run a "near frontier model" on consumer-grade hardware just around the corner? Is it always going to be trained and refined by massive centralized powers, or will we someday soon be able to join a peer-to-peer training clan run by denizens of 4chan?
This stuff is so overhyped and yet so underhyped at the same time. I can't really wrap my head around it.
> the day I can run a “near frontier model” on consumer grade hardware just around the corner?
I suspect it is, in fact. But you can also see why a bunch of very very large, overinvested companies would have incentives to try to make sure it isn't. So it's going to be interesting.
No I just think it's the same people (because it is the same people). They jump from hype technology to hype technology, and many of them had an enormous incentive to jump from one GPU-investment-heavy technology with a bad reputation for grift to the new shiny-clean-hope-for-the-future thing that might help them make use of their capital investments.
But at least one of these people specifically, Sam Altman, is not, IMO, off the crypto grift train, because he's still chairman of Worldcoin, which strikes me (and more importantly strikes regulators around the world [0]) as a pretty shoddy operation (not to mention creepy and weird).
> It’s amusing how “ads” is seen as an obvious way to make profit for OAI as if Google’s (especially) and Meta’s ads businesses aren’t some of the most sophisticated machines on the planet.
There is much more potential for manipulation with LLMs than with typical ads. I am worried. It is getting more and more difficult to distinguish ads from neutral information.
Twitter executed incredibly, incredibly badly in the ads space. It came out that a majority of their business was brand advertising which just blows my mind.
They should've made so much money on direct response and yet somehow they messed it all up.
Just like they should have been a few times as large in terms of users, but they executed really, really badly.
So I'm not sure Twitter's failures imply anything about OpenAI's prospects.
Eventually, yes. But they should've been huge, making a substantial fraction (50%) of Meta's or Google's revenue. I could never understand what went wrong, tbh.
> There’s a famous Sam Altman interview from 2019 in which he explained OpenAI’s revenue model [1] :
>> The honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue. We have made a soft promise to investors that once we’ve built this sort of generally intelligent system, basically, we will ask it to figure out a way to generate an investment return for you. [audience laughter] It sounds like an episode of Silicon Valley, it really does, I get it. You can laugh, it’s all right. But it is what I actually believe is going to happen.
> It really is the greatest business plan in the history of capitalism: “We will create God and then ask it for money.” Perfect in its simplicity. As a connoisseur of financial shenanigans, I of course have my own hopes for what the artificial superintelligence will come up with. “I know what every stock price will be tomorrow, so let’s get to day-trading,” would be a good one. “I can tell people what stocks to buy, so let’s get to pump-and-dumping.” “I can destroy any company, so let’s get to short selling.” “I know what every corporate executive is thinking about, so let’s get to insider trading.” That sort of thing. As a matter of science fiction it seems pretty trivial for an omniscient superintelligence to find cool ways to make money. “Charge retail customers $20 per month to access the superintelligence,” what, no, obviously that’s not the answer.
> On a pure science-fiction suspension-of-disbelief basis, this business plan is perfect and should not need any updating until they finish building the superintelligent AI. Paying one billion dollars for a 0.2% stake in whatever God comes up with is a good trade. But in the six years since announcing this perfect business plan, Sam Altman has learned [2] that it will cost at least a few trillion dollars to build the super-AI, and it turns out that the supply of science-fiction-suspension-of-disbelief capital is really quite large but not trillions of dollars.
> [1] At about 31:49 in the video. A bit later he approvingly cites the South Park “underpants gnome” meme.
> [2] Perhaps a better word is “decided.” I wrote the other day about Altman’s above-consensus capital spending plans: “'The deals have surprised some competitors who have far more modest projections of their computing costs,’ because he is better at this than they are. If you go around saying ‘I am going to build transformative AI efficiently,’ how transformative can it be? If you go around saying ‘I am going to need 1,000 new nuclear plants to build my product,’ everyone knows that it will be a big deal.”