I'm normally suspicious but honestly they've been so massively supply-constrained that I don't think it really benefits them much. They're not worried about getting enough demand for the new models; they're worrying about keeping up with it.
Granted, there's a small counterargument for mythos which is that it's probably going to be API-only not subscription
Undercover mode seems like a way to make contributions to OSS when they detect issues, without accidentally leaking that it was claude-mythos-gigabrain-100000B that figured out the issue
somewhat surprisingly, it's actually sycophantic in both directions. i've been running homegrown evals of claude, gpt, gemini, and grok, and grok is the most likely to agree with the prompter's premise, and to hallucinate facts in support of an agenda. so it's actually deeper than just pattern-matching to elon's opinions (which it also tends to do).
BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.
It’s the same thing. Obviously withdrawals and such are different but the core mechanism of disregulated reward processing leading to compulsive behavior engagement is exactly the same.
One obvious risk would be blunting of longer term GLP-1 receptor activation. Imagine type 2 diabetes but for ghrelin.
To use an analogy amphetamines have a honeymoon period, and it feels like a lot of people on these weight loss drugs haven’t been on them long enough to get past the honeymoon period and see what the effects are after 10, 20, etc years
It's possible. But, we've had another GLP-1 medication in use for about a decade and a half now - liraglutide. So far, we haven't seen evidence of that occurring.
I don't think anyone who is both informed and sane would suggest that it is impossible that there are negative long term impacts from taking the medication. Just that we have no current indication of them, and that being afraid about a "what if" without any concrete concerns when the alternative is the "continue being in one of the riskiest states possible for human health" is silly.
People don't realize that Ozempic is already a third generation GLP drug, Mounjaro is a 4th, and the try generation drugs are already in wide scale clinical trials.
We do in fact know a lot about how these drugs affect people by now, and as you point out, we have well over a decade of data on them.
It's less about the NSA having AI capabilities and more the inverse - the NSA having access to people's chatGPT queries. Especially if we fast-forward a few years I suspect people are going to be "confiding" a ton in LLMs so the NSA is going to have a lot of useful data to harvest. (This is in general regardless of them hiring an ex-spook BTW; I imagine it's going to be just like what they do with email, phone calls and general web traffic, namely slurping up all the data permanently in their giant datacenters and running all kinds of analysis on it)
I think the use case here are LLMs trained on billions of terabytes of bulk surveillance data. Imagine an LLM that has been fed every banking transaction, text message or geolocation ping within a target country. An intelligence analyst can now get the answer to any question very, very quickly.
> I suspect people are going to be "confiding" a ton in LLMs
They won't even need to rely on people using ChatGPT for that if things like Microsoft's "Recall" is rolled out and enabled by default. People who aren't privacy conscious will not disable it or care.
Because he was a coder working on his startup with no concern for the massive amounts of harm he was causing and many of the denizens of HN can relate to that.
I mostly agree, but I would like to see the full context of the hitman stuff. My understanding is that it was a law enforcement plant that encouraged him to hire a hitman and then referred him to one, which was of course also a plant. Still bad, but also seems a bit like entrapment.
> I think one solution could be in licenses that force companies/business of certain sizes to pay maintenance fees. One idea from the top of my head.
This just doesn't work. Fully open source software (as opposed to source available) is so much more useful than the alternative that there's always going to be an OSS fork for any sufficiently useful project. AFAICT Elasticsearch and Redis have not really "won" by their respective license changes but rather have just fragmented their own market and sown the eventual seeds of their destruction.
Granted, there's a small counterargument for mythos which is that it's probably going to be API-only not subscription
reply