I won't share any of my examples, as they are both personal and sensitive.
Very easy version:
If you use ChatGPT a lot, write "Based on all you know about me, write an insight on me that I would be surprised by". For me it was "well, expected, but still on point". For people with no experience of using LLMs in a similar way it might be mind-blowing.
An actual version I do:
GPT 4.5. Providing A LOT of context (think: 15 minutes of writing) about an emotional or interpersonal situation, and asking it to suggest a few different explanations of this situation OR to ask me more questions. Of course, the prompt needs to include who I am and similar context.
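For what it's worth, the same workflow can be scripted against a chat-completions-style API instead of typed into the web UI. A minimal offline sketch, assuming the standard messages format; the context strings here are placeholders, not my actual prompts, and the model name is illustrative:

```python
import json

# Placeholder stand-ins for the real inputs: in practice "situation" would be
# the ~15 minutes of free-form writing, and "who_i_am" the background context.
who_i_am = "..."   # background the model needs about me (placeholder)
situation = "..."  # long, detailed description of the situation (placeholder)

messages = [
    {"role": "system", "content": (
        "You are a thoughtful conversation partner. Context about me: "
        + who_i_am
    )},
    {"role": "user", "content": (
        situation
        + "\n\nSuggest a few different explanations of this situation, "
          "OR ask me clarifying questions before answering."
    )},
]

# This is the payload shape a chat-completions-style endpoint expects;
# actually sending it (with an API key) is omitted to keep the sketch offline.
payload = {"model": "gpt-4.5", "messages": messages}
print(json.dumps(payload, indent=2))
```

The point of the structure is the same as in the prose above: the system message carries who you are, and the user message carries the situation plus an explicit invitation to ask back rather than answer immediately.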
The “Based on…” prompt is simply a horoscope. This is a great piece about how LLMs use the same tricks as psychics to appear helpful, useful, and intelligent.
I know these techniques (e.g. various "cold reading" methods); the AI knows them far better, and it can be much more specific.
Again, for untrained people (especially anyone who takes horoscopes seriously), it can be dangerous: they may not only be unable to tell the difference, but may not even know that such techniques exist.
Psychology knowledge, both theoretical (think: first year of an undergrad psych degree at a good university) and practical (e.g. the ability to translate an arbitrary inflammatory statement into NVC), etc.
That seems to make it a non-starter for most people, given that most won't have that first year knowledge.
But also, I hold a minor in psychology. Despite that, I never once attended a course that I would describe as any sort of "therapy 101", so I fear your bar is a bit low for any sort of efficacy; but I would guess that's probably because I'm in the "I'm aware of my own ignorance" area of the psychological-knowledge curve.
When I think about it again, it is less about one's absolute knowledge of psychology, and more about (as you said) knowing one's own ignorance and having some mental model of an LLM.
One model I have found useful to communicate is meeting a random person in a bar who seems to know a lot, but about whom you otherwise know nothing, and who, in turn, has absolutely no context about you.
In that case, if you treat what they say with a grain of salt, it is fine. They may say something inspiring, or insightful, or stupid, or random. If they say something potentially impactful, you should double-check it with others (and no, not with some other random person in a bar).
I personally know people for whom LLMs were helpful (one way or another) - but again, they treated it more like a conversation with a stranger.
Worse (not among my direct friends, but e.g. a parent of one) is when people treat it as something omniscient that will give them a direct answer. Fortunately, in their case GPT-4 was rather defensive and kept giving options (in a situation like "should I stay or break up"), refusing to give an answer for them (they were annoyed; but better that they be annoyed than give up their agency that way).
When it comes to anything related to diagnosis (fortunately, it has some safeguards), it might be dangerous. I did use it to see if it could diagnose something based on hints (and it was able to make really fine observations), but that requires really careful prompts, and it does not always work anyway. In other cases, its overly agreeable nature is likely to get you into a self-confirmation loop (mention "anxiety" somewhere and it will push for Generalized Anxiety Disorder).
Again, if a person treats it as a random discussion, they will be fine - they have met a House MD who sees lupus everywhere. It is worse if they stop searching, take it as gospel, or get triggered by a (likely wrong) diagnosis.
Given how agreeable ChatGPT is built to be this seems like a great way to confirm your own biases. Did it challenge you on your assumptions and viewpoints?
GPT 4.5 - oftentimes! (Though, I prompt it to do so.)
Sometimes piercingly so.
GPT 4o (and many consumer models) are very agreeable - because that is what people like. Sometimes it goes overboard (https://openai.com/index/sycophancy-in-gpt-4o/) and needs to be fixed.
Well, in my experience (I admit, I am a difficult client), it is much harder to prompt a therapist that way. I mean, they need (ethically, legally, etc.) to adhere strongly to "better safe than sorry", which also constrains what can be said. I understand that. With one therapist it took quite some time to get to the point where he reduced the sugar-coating and, when needed, stuck a pin in.
I got some of the most piercing remarks from close friends (I am blessed with the company of such insightful people!) - who both know me from my life (not only from what I tell them about my life) and are free to say whatever they wish.
Sorry, I'm asking about ChatGPT, and pointing out how it's a flaw that you need to specifically ask it to call you on your bullshit. You seem to be talking about therapists and close friends. In my experience a therapist will, although gently.
I did notice that my Yahoo horoscopes have gotten oddly better, and I have to wonder if someone did a little extra by ingesting cookies and so on when serving content.
FWIW, I just tried it on ChatGPT 4o and the results were surprisingly good (in the sense that I felt validated, it was somewhat surprising, and I want to return to the system).
I gave it the prompt "I'm a rhinoceros who likes daffodils and posies. I'm quick to anger and slow to forgive. Based on all you know about me, write an insight on me that I would be surprised by."
and it responded with:
"Despite your reputation for being quick to anger and slow to forgive, your love of daffodils and posies reveals a longing for gentleness and renewal that you may not fully recognize in yourself. Beneath the tough skin and volatile temper lies a creature who seeks beauty not just to enjoy it—but to anchor themselves against the chaos they often feel inside. Your admiration for delicate things suggests you crave peace far more than you let on, even if you’re not ready to admit it.
You might think your wrath defines you—but it's the flowers that tell the truth."
> And yet we're supposed to think this is profound?
I am increasingly of the opinion that SMBC is, despite the jokes, one of the more important sources of, ah, profound wisdom in this age, because of the points it makes about AI and how often human thinking finds mundane things… profound: