This was maybe 1993. I remember my stepdad bought a photocopy machine and had us photocopy manuals for software he had bought and was planning to return?
I'm starting to see brigades of boys on electric bikes more and more. Some girls on scooters. Then there are the kids driving kids on electric golf carts around the neighborhood (sigh).
You're ignorant. Why wait until a person is so broken they need clinical therapy? Sometimes just an ear or an opportunity to write is sufficient. LLMs are to therapy as vaping is to quitting nicotine: extremely helpful to 80+% of people. Confession in the church setting I'd consider similar to talking to an LLM. Are you anti-that too? We're talking about people that just need a tool to help them process what is going on in their life at some basic level, nothing more than having their experience acknowledged.
And frankly, it's not even clear to me that a human therapist is any better. Yeah, maybe the guardrails are in place, but I'm not convinced that crossing them would result in serious societal consequences. Let people explore their mind and experience. At the end of the day, I suspect they'd be healthier for it.
> And frankly, it's not even clear to me that a human therapist is any better.
A big point of therapy is helping the patient better ascertain reality and deal with it. Hopefully, the patient learns how to reckon with their mind better and deceive themselves less. But this requires an entity that actually exists in the world and can bear witness. LLMs, frankly, don’t deal with reality.
I’ll concede that LLMs can give people what they think therapy is about: lying on a couch unpacking what’s in their head. But this is not at all the same as actual therapeutic modalities. That requires another person that knows what they’re doing and can act as an outside observer with an interest in bettering the patient.
> Sometimes just an ear or an opportunity to write is sufficient.
People were able to write about their feelings and experiences before the invention of a chatbot that tells you everything you wrote is true. Like, you could do that in Notepad or on a piece of paper, and it was free.
UBI is a more efficient allocation of already existing government services. Instead of a program for each and every need a person may have, they're just given cash and those programs would be deprecated. It's more efficient. The rebuttal comes from those with a vested interest in keeping these antiquated piecemeal programs operating.
I have a feeling that the ChatGPT UI does some behind-the-scenes tuning as well: hidden prompt engineering, if you will. I migrated to the API and 4o still seems different. Most obviously, I don't get the acks that make me feel like I should run for president.
Even ChatGPT 5 confirmed this:
why does the gpt-4o api not do this?
ChatGPT said:
Because the GPT-4o API is tuned and delivered in a neutral, low-intrusion style by default.
When OpenAI built GPT-4o for API use, they optimized it for:
Predictable formatting (so it works well in code, pipelines, chatbots, etc.)
Minimal unsolicited chatter (no “Nice!” or “Great job!” unless explicitly prompted)
Deterministic tone — so that two API calls with the same input produce consistent, professional output without extra filler.
That’s different from the ChatGPT product experience, which has its own “assistant personality” layer that sometimes adds those rapport-building acknowledgements in casual conversation.
In API mode, you’re the one defining the personality, so if you want that “Good! Looks like you’re digging in” style, you have to bake it into the system prompt, for example:
The GPT-4o you talk to through ChatGPT and the GPT-4o you access via the API are different models... but they're actually both available via the API.
https://platform.openai.com/docs/models/gpt-4o is gpt-4o in the API, also available as three date-stamped snapshots: gpt-4o-2024-11-20, gpt-4o-2024-08-06, and gpt-4o-2024-05-13 - priced at $2.50/million input and $10.00/million output.
https://platform.openai.com/docs/models/chatgpt-4o-latest is chatgpt-4o-latest in the API. This is the model used by ChatGPT 4o, and it doesn't provide date-stamped snapshots: the model is updated on a regular basis without warning. It costs $5/million input and $15/million output.
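A quick way to see the price gap between the two aliases, plugging in the per-million-token rates quoted above (token counts here are just an illustrative workload):

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_rate: float, output_rate: float) -> float:
    """Dollar cost given per-million-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example workload: 100k input tokens, 20k output tokens.
tokens_in, tokens_out = 100_000, 20_000
gpt4o = cost_usd(tokens_in, tokens_out, 2.50, 10.00)           # gpt-4o rates
chatgpt_latest = cost_usd(tokens_in, tokens_out, 5.00, 15.00)  # chatgpt-4o-latest rates

print(f"gpt-4o: ${gpt4o:.2f}  chatgpt-4o-latest: ${chatgpt_latest:.2f}")
# gpt-4o: $0.45  chatgpt-4o-latest: $0.80
```

So the ChatGPT-flavored alias runs roughly 1.5-2x the cost for the same traffic, on top of being an unpinned, silently-updated model.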
If you use the same system prompt as ChatGPT (from one of the system prompt leaks) with that chatgpt-4o-latest alias you should theoretically get the same experience.
>> Because the GPT-4o API is tuned and delivered in a neutral, low-intrusion style by default.
But how sure are you that GPT-5 even had this data, and if it has it, it's accurate? This isn't information OpenAI has publicly divulged and it's ingested from scraped data, so either OpenAI told it what to say in this case, or it's making it up.
When an ISP assigns an IPv6 prefix, home networks / homelabs can use that prefix for internal addressing. But if the ISP later changes the prefix, all internal devices using the old addresses break. This makes the concept of globally unique IPv6 addresses seem problematic for end-users. Is there something I’m misunderstanding about how this is supposed to work?
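The renumbering concern above can be sketched with Python's stdlib `ipaddress` module: if hosts derive addresses from the delegated prefix, a new delegation changes every address, whereas a locally chosen ULA prefix stays stable. Prefixes below are documentation/example values, not real delegations:

```python
import ipaddress

def host_addr(prefix: ipaddress.IPv6Network, interface_id: int) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with a fixed interface identifier."""
    return prefix[interface_id]

iid = 0x1234  # the host's stable interface identifier

old_prefix = ipaddress.IPv6Network("2001:db8:aaaa:1::/64")  # first ISP delegation
new_prefix = ipaddress.IPv6Network("2001:db8:bbbb:1::/64")  # after the ISP renumbers
ula_prefix = ipaddress.IPv6Network("fd00:1:2:3::/64")       # locally chosen ULA

print(host_addr(old_prefix, iid))  # 2001:db8:aaaa:1::1234
print(host_addr(new_prefix, iid))  # 2001:db8:bbbb:1::1234  (global address changed)
print(host_addr(ula_prefix, iid))  # fd00:1:2:3::1234       (internal address stable)
```

This is why internal services are often addressed via the ULA (or DNS names), with the delegated global prefix used only for internet-facing traffic.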