> I don't really think LLMs have done that much to change the actual situation around ability/outcomes
From my own experiences, and from many others I have seen on this site and elsewhere, I'm not sure how anyone could conclude this.
> it doesn't seem to me like it's actually leveling up mediocre programmers to "very good" ones
Well, if that's your metric, then maybe your take is correct, but it isn't relevant. From the top-level comment, I thought we were talking about the bar being lowered for building something thanks to AI, and you don't need to become any better at being a programmer for that.
I once had a vexing problem with my old Intel MacBook — macOS failed to boot, but Windows seemed totally normal. Can't possibly be a hardware failure, right? The symptoms disappeared after replacing the SATA cable!
This reminds me of the infamous GPU issues of the unibody models (the last non-retina ones). I have one such 2012 15" MBP with a dedicated GPU that, as I understand it, has developed soldering issues.
Non-Mac OSs don't know how to turn this GPU on out of the box, so it just sits there without bothering anybody. But, for some reason, macOS turns it on and it craps the bed, rendering the machine unusable.
I had the 2010 version of this model, with the same symptoms starting in mid 2011. I would get 5-8 crashes a day from the GPU being on the fritz.
Apple ended up replacing the mainboard in a free out-of-AppleCare repair. I never had the problem again and I used the machine regularly until about 2018.
In my case, it lasted one or two more years, and I only learned about the repair after they stopped offering it. By that time, the machine had already been replaced for other, unrelated reasons.
No! Maybe I wasn't entirely clear in what I wanted to say.
The point is that ChatGPT gathers various bits of info about you and won't disclose to you that it has them.
There's the memory feature, but various reports (and my own experience) indicate that even if you disable it, some stuff you've said before (or that the LLM inferred) still gets fed into its system prompt.
We also know that AI can sometimes make stuff up. I think it might have "guessed" that the user has ADHD; this got added to the system prompt, and given how this works, it won't be revealed to the user. It wasn't done on purpose and wasn't malicious.
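To make the mechanism concrete, here's a minimal, purely hypothetical Python sketch (not OpenAI's actual implementation; the names, memory entries, and prompt text are all made up) of how stored or inferred "memories" could get appended to a system prompt the user never sees:

    # Hypothetical illustration only: how inferred "memories" might be
    # folded into a system prompt without the user ever seeing them.
    stored_memories = [
        "User prefers concise answers",
        "User may have ADHD",  # inferred by the model, never confirmed by the user
    ]

    def build_system_prompt(base_prompt, memories):
        # The memory block is appended to the system prompt, which the chat UI
        # never displays, so the user can't audit what the model "knows" about them.
        memory_block = "\n".join("- " + m for m in memories)
        return base_prompt + "\n\nKnown facts about the user:\n" + memory_block

    print(build_system_prompt("You are a helpful assistant.", stored_memories))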
Better job prospects for more people, lower inflation, cheaper healthcare, cheaper housing (actually affordable housing), less obesity, better school programs, a more distributed, less controlled, and less corporate internet and web, higher fertility rates, fewer clouds of war and civil tension, more political legitimacy and a better political climate, better nutrition (official stats show obesity at an all-time high and rising, despite all the health influencer slop), less impacted rural areas and small towns, and many other stats besides. And a more stable global order too.
Most of those apply both here in Europe and the US.
And that's without even getting into more subjective QoL stuff, from cultural production to widespread depression and the loneliness epidemic.
I know "try this other tool" is probably an eye-roll-worthy response, but as someone who's not a programmer but is in IT and has to write some scripts every once in a while and has a lot of AI-heavy dev friends - all I've ever heard about Copilot is that it's one of the worst.
I recently used Claude for a personal project and it was a fairly smooth process. Everyone I know who does a lot of programming with AI uses Claude mostly.
Nah. Everything I've read about hallucinogens says dose doesn't change effect duration, only intensity. Drugs do affect people differently, so I wouldn't jump straight to saying they're exaggerating.
Also, 12 hours is definitely not unusual for LSD. I'd say it's the standard duration, with the peak lasting 7 hours. Longer trips can happen, at least for some people, but the default assumption should be about half a day.
Definitely not hyperbole. Last time I took LSD I dropped at 5pm before going to dinner and ended up at a bread factory 15 miles away at 5am, watching them run the big mixers. No idea how I got there; I just remember walking all night.
Salt was always advised to be limited, especially for those with high blood pressure. This hasn't changed; there are just vocal diet ideologues (mostly carnivore/keto) who are trying to post-hoc rationalize otherwise.
Everybody is sodium sensitive; it's a basic fact that your body retains additional fluid if you increase your sodium intake (just talk to some bodybuilders). Chronic long-term exposure to a high-sodium diet is a risk factor for all sorts of issues because of this basic fact of biology, way more so than MSG or even artificial sweeteners. But people focus on the wrong thing.
My understanding is that most people's blood pressure does not increase in response to dietary sodium, which is the sensitivity described in this context.