Perhaps you have good intentions, but some people don't. Overselling material is typical of charlatans ("you should read more, and go through me to interpret it").
I'm just providing a general skeptical counterpoint to the idea that reading a lot is always good. Many have done so before me (the Buddha, Schopenhauer, etc.).
It is kind of ironic that I'm name-dropping old thinkers here and providing my own interpretation of how to read them. There's no way out of this paradox.
“General” skepticism can be a good attitude to have. However, take a look again at what you wrote to me. “Why should I guide my attention using a random substack post?” That was the point we both were making. We both know you shouldn’t take internet posts as advice for life right out of the box, there’s no need to be a cynic about it. I was agreeing with you all along. I even corrected myself about the book thing and tried to make a joke about it, but you doubled down. You didn’t act like a skeptic — you acted like a bully.
Looks like you got offended then. Calling skeptics bullies is very common, I am used to it.
I stated my intentions from the very beginning: I'm not challenging the wisdom, I'm challenging potentially charlatan ways of applying it. If you're not doing that, there's no reason to get offended.
Also, there was no reason to erase your posts. Now people will never know if you were being playful and agreeable or not.
Yes, since you were needlessly acting out your skeptic points against someone who was agreeing with you, I did get offended. I believe it's a common thing to feel offended when I engage people in good faith and get attacked. I feel especially offended when I try to make amends and get attacked again.
Anyways, this could have been a great conversation. I hope you're happy knowing you were right all along. Or were you? Oh, no! But you're a skeptic! How can we know now? Tun-dun-duuuuun
Bye-bye
So am I. Mine is designed to discourage people from trusting charlatans. I said it from the very beginning, quite honestly.
I don't need to be right, and people don't need to follow my example. They just need to think "wait, why am I reading this thing? why does it feel compelling? am I being tricked?".
Maybe you're not used to skepticism in your life, and you usually get the things you want by putting up a show. That's actually not bad, but I'm not going to apologize for attempting to increase awareness of how charlatans work.
Hey, friend, let's see if my account helps ease your worries a little.
Three years ago I decided to learn how to code, even though I was already 35 years old. My line of work is a bit far away from tech… I'm still trying to learn every day, but going real slow because of work and some mental health stuff. Even though LLMs have already been a thing for a couple of years, I only managed to first try my hand at them a couple of months ago. Understand that I have very low self-esteem and that social anxiety prevents me from asking for help, even online — I think I might have done so less than 5 times in my whole life, and I've been online since I was a kid in 1996…
Asking a complex text generator for help feels a lot more comfortable for me, even though there are ups and downs. I'm not sure if what I'm doing is the same as what everyone is calling "vibe coding", but it has been a real game changer in my self-study routine. I know LLMs sometimes (often?) write "unorthodox" code, but I like studying their output and comparing it to other stuff I find online. I'm sure there are better ways to learn, and I still wish to become like experienced programmers who learned their trade before these tools were around.
Anyway, yeah, the machine helps. But I believe you only feel like the machine can do most of your work (or maybe replace you?) because your experience enables you to use it effectively. See it from my perspective as a beginner: I managed to do more than I could before, sure, but I quickly noticed I'll never get better at it if I don't learn how to "speak" it. At best, it would feel like knowing a foreign language "instrumentally" — enough to read a text with a dictionary beside you, but nowhere near enough to strike up a conversation with a native speaker. If everything goes well, most beginners will soon realize they need to know much more, even if just to write better prompts when asking for code. But, if it all goes bad… I don't know yet…
I worry about that too — like I worry about my young nephew and niece, who have barely touched a physical keyboard in their entire lives and couldn't touch-type to save their lives. Whatever happens, we have to make the best of it. I would still be striving to be like more experienced engineers anyway, with or without LLMs around. But I can only speak for myself.
I hope you feel at least a bit less sad knowing there are still people out here who appreciate the effort people like you put in.
Not in and of itself, but when you have people in positions of power taking the marketing hype (and not the cold hard facts) of "A.I." as "Gospel Truth"™ and putting machines that are clearly not capable of actual decision-making in largely unsupervised control of actual real-world processes, therein lies the danger. (This same danger applies when hiring live humans for important roles that they're completely untrained and unqualified for.)
The real danger is generating hysteria to justify a regulatory crackdown that restricts the public's access to AI and, conveniently, limits their competition.
According to Mr. Yudkowsky, what it takes to get us out of danger's way is "a plan", which he surely would back... A plan -- not direct action against the system that made this whole situation possible in the first place. I understand his concern, but he makes it seem like salvation for humanity is just a matter of the industry going easy on this. It reminds me of Chinese factories and coal generators going offline for a couple of days so there can be beautiful blue skies in Beijing by the time Victory Day comes. Never mind that we, the powerless, are already shaken by the menace of nuclear war, the rise of neo-fascism, and unprecedented natural disasters. So we're supposed to believe the whole danger lies with AI now? Why should we care about being presented with another danger if we never ceased to be surrounded by danger in the first place? Respectfully, I believe Mr. Yudkowsky's opinions are not radical enough. C'mon, Eliezer, what would Sarah Connor do? Me, if someone came up with a "Terminator 2: Judgment Day" kind of plan, I would surely back it.