I've been doing the same kind of drill with ChatGPT. The disadvantage is the interface, but I can direct it to focus on what I'm learning or want to reinforce, etc.
> If your work isn't ready for users to try out, please don't do a Show HN. Once it's ready, come back and do it then. Don't post landing pages or fundraisers.
You're right, my apologies. I should have waited until the code was ready to try. I'll come back with a working demo soon. Thanks for pointing this out.
A characteristic of the field since the beginning. Reading What Computers Can't Do in college (early 2000s) was an important contrast for me.
> A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.
> Dreyfus' last paper detailed the ongoing history of the "first step fallacy", where AI researchers tend to wildly extrapolate initial success as promising, perhaps even guaranteeing, wild future successes.
When asked to provide real-life examples based on the paper's conclusion, it gave:
Workplace Example
Scenario: An employee has a colleague who tends to send aggressive emails if they don’t receive updates on time.
Healthy Active Avoidance: The employee learns, “If I send a quick status update every morning, I avoid the stress of hostile emails.” They adopt this as a habit.
Depressive Active Avoidance Deficit: A person with depressive symptoms may take longer to make this connection or fail to act even after realizing it. They know sending updates might help, but initiating the behavior feels too effortful or pointless. As a result, they keep receiving stressful emails, reinforcing the feeling of helplessness.