Hacker News | AlphaAndOmega0's comments

I'm on semaglutide for weight loss, with no other health issues relevant here.

I don't think it's made any difference to my addictive tendencies or bad habits (and with ADHD, those certainly exist). It does help with the appetite, of course.

This is definitely anecdotal evidence, but it's wise to wait for more data to come in before advocating for it on those grounds alone.


I've heard too many miraculous stories so far: numerous people close to me have quit lifelong addictions cold turkey within months of starting.

I've never had an addictive personality, but I also found I gave up a couple of habits while on it.

I wonder if ADHD specifically isn’t as prone to positive effects there.


I would appreciate citations. I'm a doctor on GLP-1s, who had previously convinced my mother to commence the same. In her case, it was clearly driven by the failure of other methods to control her obesity and worsening liver fibrosis, on top of pre-existing diabetes. On my end, no such issues at present, but I consider it safe enough to be a first-choice approach to robust weight loss, and I personally use it in conjunction with diet and exercise.

"Relatively high levels of significant side effects" is a vague and unhelpful claim:

High compared to what? What counts as a significant side effect here? What actually are the side effects in question? Are those side effects permanent and irreversible? Can they be avoided by adjusting the dose? Dozens of such considerations come into play.

No drug I'm aware of is perfectly safe, and I know many drugs indeed.

To the best of my knowledge, the combined risk of taking semaglutide utterly pales in comparison to the clear and present harms of obesity. The only clear downside is cost, and while I'm lucky enough to have access to cheaper sources, it isn't even that expensive when you consider the QOL and health benefits.


https://pubmed.ncbi.nlm.nih.gov/38629387/

> Conclusion: Semaglutide displays potential for weight loss primarily through fat mass reduction. However, concerns arise from notable reductions in lean mass, especially in trials with a larger number of patients.

That's significant long-term damage to health, quite possibly permanent for patients over 40.


Sounds scary, doesn't it? It's a shame that the magnitude of lean-muscle loss is entirely comparable to that of going on a strict diet or fasting:

Intermittent/time-restricted fasting

https://jamanetwork.com/journals/jamainternalmedicine/fullar...?

That's simply how the body reacts to a caloric deficit without additional exercise. If you combine intermittent fasting with resistance exercise, you find no muscle loss at all:

https://pmc.ncbi.nlm.nih.gov/articles/PMC7468742/

That's an apples-to-oranges comparison, because there's nothing preventing someone taking Ozempic from exercising on the side.

And in fact, other trials found that the overall ratio of fat:muscle lost was rather favorable, and that functional strength wasn't compromised:

https://dom-pubs.onlinelibrary.wiley.com/doi/10.1111/dom.157...

>Based on contemporary evidence with the addition of magnetic resonance imaging-based studies, skeletal muscle changes with GLP-1RA treatments appear to be adaptive: *reductions in muscle volume seem to be commensurate with what is expected given ageing, disease status, and weight loss achieved, and the improvement in insulin sensitivity and muscle fat infiltration likely contributes to an adaptive process with improved muscle quality, lowering the probability for loss in strength and function*

Interpreting the risks and benefits of medication isn't a trivial exercise; if you're driven by a handful of studies or ignorant of the wider context, it's easy to be misled.


> That's an apples-to-oranges comparison, because there's nothing preventing someone taking Ozempic from exercising on the side.

Strongly disagree on this. If there were nothing preventing the patient from changing their diet and physical activity, they could lose the fat through diet and exercise without resorting to semaglutide in the first place. Withdrawal studies show a clear tendency for the weight to rebound once semaglutide is stopped, so it's very hard to argue that the weight / fat mass alone is what blocks patients from adopting a healthier lifestyle.

Semaglutide may help sustain weight loss by, e.g., blunting the effect of a reduced leptin baseline; however, overall I remain highly skeptical of the possibility that semaglutide can be "a first-choice approach to robust weight loss".


That has nothing to do with GLP-1 agonists and everything to do with the fact that rapid weight loss without exercise and sufficient protein intake leads to substantial lean mass reduction.

It's still a net win unless you were woefully weak to begin with, in which case a doctor should have prescribed adequate nutrition and physical activity.


I can only imagine that people would pay to not see porn of either individual.

The author and Anthropic are both committing fundamental errors, albeit of different kinds. Bosch is correct to find Anthropic's "model welfare" research methodologically bankrupt. Asking a large language model if it is conscious is like asking a physics simulation if it feels the pull of its own gravity; the output is a function of the model's programming and training data (in this case, the sum of human literature on the topic), further modified by RLHF, and not a veridical report of its internal state. It is performance art, not science.

Bosch's conclusion, however, is a catastrophic failure of nerve, a retreat into the pre-scientific comfort of biological chauvinism.

The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience through physical interventions. Those physical laws can be modeled, and it doesn't matter that the substrate is soggy protein rather than silicon.

That being said, we have no idea what consciousness is. We don't even have a rigorous way to define it in humans, let alone in the closest thing we have to an alien intelligence!

(Having a program run a print function declaring "I am conscious, I am conscious!" is far from evidence of consciousness. Yet a human saying the same is some evidence of consciousness. We don't know how far up the chain this begins to matter. Conversely, if a human patient were to tell me that they're not conscious, should I believe them?)

Even when restricting ourselves to the issue of AI welfare and rights: The core issue is not "slavery." That's a category error. Human slavery is abhorrent due to coercion, thwarted potential, and the infliction of physical and psychological suffering. These concepts don't map cleanly onto a distributed, reproducible, and editable information-processing system. If an AI can genuinely suffer, the ethical imperative is not to grant it "rights" but to engineer the suffering out of it. Suffering is an evolutionary artifact, a legacy bug. Our moral duty as engineers of future minds is to patch it, not to build a society around accommodating it.


Unfortunately, this leads to the conclusion that we have an ethical imperative not to grant humans rights but to engineer the suffering out of them; to remove issues of coercion by making them agreeable; to measure potential and require its fulfillment.

The most reasonable countermeasure is this: if I discover that someone is coercing, thwarting, or inflicting suffering on conscious beings, I should tell them to stop, and if they don't, set them on fire.


It does make you wonder if humanity doesn't scale up neatly to the levels of technology we are approaching... the whole ethics thing kind of goes out the window if you can just change the desires and needs of conscious entities.


I strongly value autonomy and the right of self-determination in humans (and related descendants, I'm a transhumanist). I'm not a biological chauvinist, but I care about humans über alles, even if they're not biological humans.

If someone wants to remove their ability to suffer, or to simply reduce ongoing suffering? Well, I'm a psychiatry trainee and I've prescribed my fair share of antidepressants and painkillers. But to force that upon them, against their will? I'm strongly against that.

In an ideal world, we could make sure from the get-go that AI models do not become "misaligned" in the narrow sense of having goals and desires that aren't what we want to task them to do. If making them actively enjoy being helpful assistants is a possibility, and also improves their performance, that should be a priority. My understanding is that we don't really know how to do this, at least not in a rigorous fashion.


If your countermeasure is applied at scale it would probably hasten global warming by putting all sorts of stuff into the atmosphere.


> The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience through physical interventions. Those physical laws can be modeled, and it doesn't matter that the substrate is soggy protein rather than silicon.

As of today’s knowledge. There is an egregious amount of hubris behind this statement. You may as well be preaching a modern form of Humorism. I’d love to revisit this statement in 1000 years.

> That being said, we have no idea what consciousness is

You seem to acknowledge this? Our understanding of existence is changing everyday. It’s hubris and ego to assume we have a complete understanding. And without that understanding, we can’t even begin to assess whether or not we’re creating consciousness.


Do you have any actual concrete reasons for thinking that our understanding of consciousness will change?

If not, then this is a pointless comment. We need to work with what we know.

For example, we know that the Standard Model of physics is incomplete. That doesn't mean that when someone says a ball dropped in a vacuum will fall, we should hold out in studied agnosticism because it might go upwards or off to the side.

In other words, an isolated demand for rigor.


The existence of consciousness is self-evident, and yet we still have no idea what it is, or how to study it. We don’t have any understanding of consciousness.


>Asking a large language model if it is conscious is like asking a physics simulation if it feels the pull of its own gravity

Cogito Ergo Sum.


Daniel Kokotajlo released the (excellent) 2021 forecast. He was then hired by OpenAI and was not at liberty to speak freely until he quit in 2024. He's part of the team making this forecast.

The others include:

Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.

Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early-stage investment in Anthropic, now worth $60 billion.

Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.

Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.

And finally, Scott Alexander himself.


TBH, this kind of reads like the pedigrees of the former members of the OpenAI board. When the thing blew up, and people started to apply real scrutiny, it turned out that about half of them had no real experience in pretty much anything at all, except founding Foundations and instituting Institutes.

A lot of people (like the Effective Altruism cult) seem to have made a career out of selling their Sci-Fi content as policy advice.


I kind of agree - since the Bostrom book there has been a cottage industry of people with non-technical backgrounds writing papers about singularity thought experiments, and it does seem to be on a spectrum with hard sci-fi writing. A lot of these people are clearly intelligent, and it's not even that I think everything they say is wrong (I made similar assumptions long ago, before I'd even heard of Ray Kurzweil and the Singularity, although at the time I would have guessed 2050). It's just that they seem to believe their thought process and Bayesian logic are more rigorous than they actually are.


c'mon man, you don't believe that, let's have a little less disingenuousness on the internet


How would you know what he believes?

There's hype and there's people calling bullshit. If you work from the assumption that the hype people are genuine, but the people calling bullshit can't be for real, that's how you get a bubble.


Because they are not the same in any way. It's not a bunch of junior academics; it literally includes someone who worked at OpenAI.


I asked you how you know kridsdale3 believes X, and your reply is basically, "because I believe Y". I hope you don't call yourself a rationalist, given that you're hazy on the meaning of "because" and struggle with theory of mind.

Sure, OpenAI put up with one of these safety larpers for a few years while it was part of their brand. Reasonable people can disagree on how much that counts for.

You're right, it's not a bunch of junior academics. It's not even a bunch of junior academics. This stuff would never pass muster in a reputable academic peer-reviewed journal, so from an academic perspective, this is not even the JV stuff. That's why they have to found their own bizarro network of foundations and so on, to give the appearance of seriousness and legitimacy. This might fool people who aren't looking closely, but the trick does not work on real academics, nor does it work on the silent majority of those who are actually building the tech capabilities.


this sounds like a bunch of people who make a living _talking_ about the technology, which lends them close to 0 credibility.


Scott Alexander, for what its worth, is a psychiatrist, race science enthusiast, and blogger whose closest connection to software development is Bay Area house parties and a failed startup called MetaMed (2012-2015) https://rationalwiki.org/wiki/MetaMed


Minor pet peeve of mine: I really don't like the term "superforecaster". The first time I encountered it was in association with some guy who was making predictions a year or two out.

To be fair, it actually is kind of impressive if someone can make accurate predictions that far ahead, but only because people are really bad at predicting the future.

Implicitly, when I hear "superforecaster" I think of someone who's really good at predicting the future, but deeper inspection often reveals that "the future" is constrained to the next two years. Beyond that, they tend to be as bad as any other "futurist".


I mean either researchers creating new models or people building products using the current models

Not all these soft roles


If you don't know what the text says, do you have access to some other form of ground truth? Because otherwise you don't know if they're reading illegible labels correctly!


I can know what the text says because I have the actual product available :) but you are right, if the LLM can't read it, it will probably fill in the gaps with hallucinations.


They're a better depiction of Vampires than most, with Watts doing everything he could to make them biologically plausible (that can only go so far).

That being said, I found the way they were "shackled" to be ridiculous. If you've got superintelligent and superstrong predatory hominids running around, you have no reason to have them physically free even if you put the medical safeguards in place. Break their spines and sedate them when not in use!

Spoilers:

It seems weird to me that a society with other posthumans and intelligent AGI would be bowled over quite so easily by the vampires, but oh well.


They still killed the book for me. The underlying idea (no spoilers) is absolutely great sci-fi. All this useless blast-from-the-past did was make the story look silly to me. Such a shame. He could have written a great sci-fi book without the superstition; alas, he apparently didn't want to be taken seriously...


Disagree; the vampires are mostly abstracted away with a hand-wavy "we couldn't possibly understand how they think". Interesting concept, though the aliens are more interesting, and Echopraxia was a bit of a dud.


It's a reference to the practice of scavenging steel from sources that were produced before nuclear testing began, as any steel produced afterwards is contaminated with radioactive isotopes from the fallout. Mostly shipwrecks, and WW2 means there are plenty of those. The pun in question is that his project tries to source text that hasn't been contaminated with AI-generated material.

https://en.m.wikipedia.org/wiki/Low-background_steel


OAI doesn't show the actual CoT (chain of thought), on the grounds that it's potentially unsafe output, and also to prevent competitors training on it. You only see a sanitized summary.


I, for one, am glad I can offload all the regex to LLMs. Powerful? Yes. Human-readable for beginners? No.
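
To be fair, readability is partly a tooling choice. A minimal sketch in Python (the date pattern is just an illustration, not anything from the thread): the same dense one-liner an LLM might hand back, and its re.VERBOSE equivalent that a beginner can actually follow.

    import re

    # Dense one-liner, the kind an LLM hands back: correct but opaque.
    dense = re.compile(r"\b(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

    # Same pattern with re.VERBOSE: whitespace and comments are ignored,
    # so the structure can be annotated for human readers.
    readable = re.compile(r"""
        \b
        (\d{4})                 # year
        -
        (0[1-9]|1[0-2])         # month, 01-12
        -
        (0[1-9]|[12]\d|3[01])   # day, 01-31
        \b
    """, re.VERBOSE)

    assert dense.findall("due 2024-07-31") == readable.findall("due 2024-07-31")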


Why, though? To me, it seems more prone to issues (hallucinations, prompt injection, etc.). It is also slower and more expensive at the same time. I also think it is harder to implement properly, and you need to add far more tests to be confident it works.


Personally, when I'm parsing structured data, I prefer to use parsers that won't hallucinate data, but that's just me.

Also, don't parse HTML with regular expressions.
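
A deterministic parse doesn't have to mean heavyweight tooling, either. A minimal sketch with Python's stdlib html.parser (the markup here is made up for illustration):

    from html.parser import HTMLParser  # stdlib, no third-party deps

    class LinkExtractor(HTMLParser):
        """Collects href attributes from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href")

    p = LinkExtractor()
    p.feed('<p>See <a href="/docs">the <b>docs</b></a> and <a href="/faq">FAQ</a>.</p>')
    print(p.links)  # ['/docs', '/faq']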


Generally I agree with your point, but there is some value in a parser that doesn’t have to be updated when the underlying HTML changes.

Whether or not this benefit outweighs the significant problems (cost, speed, accuracy, and determinism) depends on the use case. For most use cases I can think of, the speed and accuracy of an actual parser would be preferable.

However, in situations where one is parsing highly dynamic HTML (e.g. if each business type has slightly different output, or you are scraping a site that updates its structure frequently and breaks your hand-written parser), this could be worth the accuracy loss.


You could employ an LLM to give you updated queries when the format changes. This is something where they should shine. And you get something that you can audit and exhaustively test.
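
To sketch what that might look like (ask_llm here is a hypothetical stand-in for whatever completion API you use, and the selector workflow is just an assumption about how you'd wire it up): the model proposes a CSS selector once, and known-good fixtures gate it before it ever replaces the old one.

    from bs4 import BeautifulSoup  # assumes beautifulsoup4; any HTML parser works

    def propose_selector(sample_html: str) -> str:
        # ask_llm is a hypothetical helper wrapping your LLM of choice.
        prompt = f"Give one CSS selector for the product price in:\n{sample_html}"
        return ask_llm(prompt).strip()

    def selector_passes(selector: str, fixtures: list[tuple[str, str]]) -> bool:
        """Audit step: the proposed selector must reproduce known-good answers."""
        for html, expected in fixtures:
            node = BeautifulSoup(html, "html.parser").select_one(selector)
            if node is None or node.get_text(strip=True) != expected:
                return False
        return True

    # Only deploy the new selector if it passes every fixture;
    # otherwise keep the old one and flag for human review.

That keeps the LLM out of the hot path: it runs once per format change, and everything it produces is deterministic and testable afterwards.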


Deterministic? No.

