While I enjoyed the article, it's just the latest in a long line of the same article, with different flavors and authors, all sharing the same fundamental error.
The prevailing counterargument against AI consistently hinges on the present state of AI rather than the trajectory of its development: but what matters is not the capabilities of AI as they exist today, but the astonishing velocity at which those capabilities are evolving.
We've been through this song and dance before. AI researchers make legitimately impressive breakthroughs on specific tasks, people extrapolate linear growth, and the air comes out of the balloon after a couple of years when it turns out we couldn't just throw progressively larger models at the problem to emulate human cognition.
I'm surprised that tech workers who should be the most skeptical about this kind of stuff end up being the most breathlessly hyperbolic. Everyone is so eager to get rich off the trend they discard any skepticism.
This is confusing. We've never had a ChatGPT-like innovation before to compare to. Yes, there have been AI hype cycles for decades, but the difference is that we now have permanent, invaluable, society-changing tools out of the current AI cycle, combined with hundreds of billions of dollars being thrown at it, a level of investment we've never seen before. Unless you're on the bleeding edge of AI research yourself, or one of the people investing billions of dollars, it is really unclear to me how anyone can have confidence about where AI is not going.
Because the hype will always outdistance the utility, on average.
Yes, you'll get peaks where innovation takes everyone by surprise.
Then the salesbots will pivot, catch up, and ingest the innovation into the pitch machine as per usual.
So yes, there is genuine innovation and surprise. That's not what is being discussed. It's the hype that inevitably overwhelms the innovation, and also inevitably pollutes the pool with increasing noise. That's just human nature, trying to make a quick buck from the new-hotness.
There's a big difference between something that benefits productivity versus something that benefits humanity.
I think a good test of whether it has genuinely changed society is what would happen if all gen AI were to disappear overnight. I would argue that nothing would really fundamentally change.
Contrast that with the sudden disappearance of the internet, or the combustion engine.
Work doesn't benefit humanity, work is the chains that keep us living the same day over and over til we die.
Your idea of benefit to humanity clearly doesn't involve the end of work, mine does.
AI can end work for most of us, but that has to be what we want. We can't keep limiting it because of stupid reasons and then expect it to have all the answers as if it weren't limited; that's silly.
If AI disappeared tonight so too would the future where nobody works in a call center or doing data entry or making button graphics to client exact specifications for a website nobody will ever see.
This is the Old World we live in right now, and I don't want it to stay.
> I would argue that nothing would really fundamentally change.
I argue that there would be a huge collective sigh of relief from a large number of people. Not everybody, maybe not even a majority, but a large number nonetheless.
So I think it has changed society -- but perhaps not for the better overall.
Wow. Just the fact that the Internet existed at the library was enough for me to know, as a child, that I could learn anything. Once we got the Internet in '95 and a Win 95 PC, everything changed for me; I was a native of the online world by Win 98.
My entire worldview and daily life habits would have changed.
Two things can both be true. I keep arguing both sides because:
1. Unless you're aware of near-term limits, you think AI is going to the stars next year.
2. Architectures change. The only thing that doesn't change is that we generally push on; temporary limits are usually overcome, and there's a lot riding on this. It's not a smart move to bet against progress over the medium term. This is also where the real benefits and risks lie.
Is AI in general more like going to space, or like string theory? One is hard but doable. The other is a tar pit for money and talent. We are all currently placing our bets.
point 2 is the thing that i think is most important to point out:
"architectures change"
sure, that's a fact. let me apply this to other fields:
"there could be a battery breakthrough that gives electric cars a 2,000 mile range."
"researchers could discover a new way to build nanorobots that attacks cancer directly and effectively cures all versions of it."
"we could invent a new sort of aviation engine that is 1,000x more fuel efficient than the current generation."
i mean, yeah, sure. i guess.
the current hype is built on LLMs, or being charitable, "LLMs built with current architecture." there are other things in the works, but most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try and corral them into behaving as we hope. it is much more realistic to expect that we are in a period of diminishing returns on these approaches than to believe we'll continue to see earth-shattering growth. nothing has appeared with the initial "wow" factor of the early versions of suno, or gpt, or dall-e, or sora, or whatever else.
this is clearly and plainly a tech bubble. it's so transparently one, it's hard to understand how folks aren't seeing it. all these tools have been in the mainstream for a pretty substantial period of time (relatively speaking), and the honest truth is they're just not moving the needle in many industries. their most frequent practical application has been summarization, editing, and rewriting, which is a neat little parlor trick - but all the same, it's indicative of the fact that they largely model language, so that's primarily what they're good at.
you can bet on something entirely new being discovered... but what? there just isn't anything inching closer to that general AI hype we're all hearing about that exists in the real world. i'm sure folks are cooking on things, but that doesn't mean they're near production-ready. saying "this isn't a bubble because one day someone might invent something that's actually good" is kind of giving away the game - the current generation isn't that good, and we can't point to the thing that's going to overtake it.
> but most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try and corral them into behaving as we hope. it is much more realistic to expect that we are in a period of diminishing returns on these approaches than to believe we'll continue to see earth-shattering growth.
100% agree, but I think those who disagree with that are failing on point 1. I absolutely think we'll need something different, but I'm also sure that there's a solid chance we get there, with a lot of bracketing around "eventually".
When something has been done once before, we have a directional map and we can often copy fairly quickly. See OpenAI to Claude.
We know animals are smarter than LLMs in the important, learning day-to-day ways, so we have a directional compass. We know the fundamentals are relatively simple, because randomness found them before we did. We know it's possible; we're just figuring out whether it's possible with anything like the hardware we have now.
We don’t know if a battery like that is possible - there are no comparisons to make, no steer that says “it’s there, keep looking”.
This is also the time in history with the most compute capacity coming online and the most people trying to solve it. Superpowers, hyperscalers, all the universities; many people across areas as diverse as neuroscience and psychology who wouldn't have looked at the domain 5 years ago are now very motivated to be relevant, to study or build in related areas. We've tasted success. So my opinion is based on us having made progress, the emerging understanding of what it means for individuals and countries in terms of the competitive landscape, and the desire to be a part of shaping that future rather than having it happen to us. ~Everyone is very motivated.
Betting against that just seems like a risky choice. Honestly, what would you bet, over what timeframe? How strongly would you say you’re certain of your position? I’m not challenging you, I just think it’s a good frame for grounding opinions. In the end, we really are making those bets.
My bands are pretty wide. I can make a case for 5 years to AGI, or 100 years. Off the top of my head, I'd put a small amount on 5 years, all my money on within 100, and 50% or more within 20-30 years.
The bet itself would make Earth less hospitable on a long shot. It's like shredding a winning lottery ticket in the hopes the shreds will win an even bigger lottery someday in the future.
There is another level to AI and how we fundamentally structure these systems that nobody is pursuing yet, to my knowledge. That next round of innovation is fundamentally different from the innovation that is the focus now; nobody is looking to the next stage because this one hasn't achieved what we expected - because it won't.
I suspect that future iterations of AI will do much better, though.
Another reply, different thought. I'd be keen to see what, e.g., Carmack is up to. Someone outside of the usual suspects. There is a fashion to everything, and right now LLMs are a distraction on an S-curve. The map is not the territory, and language is a map.
One problem is that people assume the end goal is to create a human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen. But there is no need for that at all to still cause a huge disruption; let's say most current workers in roles that benefit from AI (copilot, writing, throwaway clipart, repetitive tasks, summarizing, looking up stuff, etc.) see not even job loss but fewer future jobs created - what does that mean for the incoming juniors? What does that mean for the people looking for that kind of work? It's not obvious at all how big of a problem that will create.
> human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen
It's obvious to some people but that's not what many investors and company operators are saying. I think the prevailing message in the valley is "AGI is going to happen" for different values of when, not if. So I think you'd be forgiven for taking them at face value.
I think the mistake is that the media extrapolate linear growth, but in practice it is a wobbly path. And this wobbly path allows anyone to create whatever narrative they want.
It reminds me of seeing headlines last week that NVDA was down because investors were losing faith after the latest earnings. Then you look at the graph, and NVDA is only about 10% off its all-time high and still bouncing in and out of being the most valuable company in the world.
Advancement is never linear. But I believe AI trends will continue up and to the right, and even in 20 years, when AI can do remarkably advanced things that we can barely comprehend, there will be internet commentary about how it's all just hype.
There's a reason why so many of the people on the crypto grift in 2020-2022 have jumped to the AI grift. Same logic of "revolution is just around the corner", with the added mix of AGI millenarianism which hits a lot of nerds' soft spots.
> The prevailing counterargument against AI consistently hinges on the present state of AI rather than the trajectory of its development: but what matters is not the capabilities of AI as they exist today, but the astonishing velocity at which those capabilities are evolving.
No, the prevailing counterargument is that the prevailing argument in favor of AI taking over everything assumes the acceleration will remain approximately constant, when in fact we don't know that it will, and we have every reason to believe that it won't.
No technology in history has ever maintained an exponential growth curve for very long. Every innovation has followed the same pattern:
* There's a major scientific breakthrough which redefines what is possible.
* That breakthrough leads to a rapid increase in technology along a certain axis.
* We eventually see a plateau where we reach the limits of this new paradigm and begin to adapt to the new normal.
AI hypists always talk as though we should extrapolate the last 2 years' growth curve out to 10 years and come to the conclusion that General Intelligence is inevitable, but to do so would be to assume that this particular technological curve will behave very differently than all previous curves.
Instead, what I and many others argue is that we are already starting to see the plateau. We are now in the phase where we've hit the limits of what these models are capable of and we're moving on to adapting them to a variety of use cases. This will bring more change, but it will be slower and not as seismic as the hype would lead you to believe, because we've already gotten off the exponential train.
AI hypists come to the conclusion that general intelligence is inevitable because they know the brain exists and they are materialists. Anyone who checks those two boxes will conclude that an artificial brain is possible, and therefore AGI is as well. With the amount of money being spent, it's only a matter of when.
> With the amount of money being spent, it's only a matter of when
Yes, but there's no strong reason to believe that "when" is "within fewer than 1000 years". What's frustrating about the hype is not that people think the brain is replicable, it's that they think that this single breakthrough will be the thing that replicates it.
Moore's law is still going as far as I'm aware - there may have been a clarification of sorts recently, but it has kept up exponentially rather well despite everyone knowing that it can't keep doing that.
Moore's law would improve the speed of LLMs and improve their size, but in recent weeks [0] it's become apparent that we're hitting the limit of "just make them even bigger" being a viable strategy for improving the intelligence of LLMs.
I'm excited for these things to get even cheaper, and that will enable more use cases, but we're not going to see the world-changing prophesies of some of AI's evangelists in this thread come true by dint of cheaper current-gen models.
But we don't know if AI development is following an exponential or sigmoid curve (actually we do kind of, now, but that's beside the point for this post.)
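A minimal Python sketch of why it's hard to tell, with made-up parameters rather than any real AI metric: an exponential and a logistic (sigmoid) curve are nearly indistinguishable until the sigmoid approaches its ceiling.

    import math

    # Illustrative numbers only: growth rate r and ceiling K are invented.
    r, K = 0.5, 1000.0

    for t in range(0, 16, 3):
        exponential = math.exp(r * t)
        # Logistic curve starting at 1 with carrying capacity K.
        sigmoid = K / (1 + (K - 1) * math.exp(-r * t))
        print(f"t={t:2d}  exponential={exponential:8.1f}  sigmoid={sigmoid:8.1f}")

Early samples (small t) look the same either way; only later data reveals which curve you were on.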
A wise institution will make decisions based on current capabilities, not a prognostication.
If investors didn't invest based on expected future performance, the share market would look completely different than it actually does today. So, I can't understand how anyone can claim that.
It was unclear if the current wave of AI would be an exponential, or for how long, or if it would end up just being another S-curve. The potential upside hooked a lot of people into action on the VC-maths of "it doesn't matter if it's unlikely, because the upside is just too good".
It is now becoming clear, however, that we aren't getting AGI. What we have now is fundamentally what we're likely to have in 5-10 years' time. I do think we'll find better uses, figure our shit out, and have much more effective products in that time. I think we're entering the "LLM era" in much the same way as the 2010s were the smartphone era that redefined a lot of things, but in the same way: a phone of ~2010 isn't materially different from a phone of ~2020; they're still just internet-connected, location-aware interfaces to content and services.
But you could also say: the prevailing argument for AI consistently hinges on the (imagined, projected based on naive assumptions) trajectory of AI rather than the present state.
> the astonishing velocity at which those capabilities are evolving.
This is what is repeated ad nauseam by AI companies desperate for investment and hype. Those who’ve been in the game since before this millennium tend not to be so impressed — recent gains have mostly been due to increased volume of computation and data with only a few real theoretical breakthroughs.
Laymen (myself included) were indeed astonished by ChatGPT, but it’s quite clear that those in the know saw it coming. Remember that those who say otherwise might have reasons (other than an earnest interest in the truth) for doing so.
I honestly believe this specific case is a Pareto situation where the first 80% came at breakneck speeds, and the final 20% just won't come in a satisfactory way. And the uncanny valley effect demands a percentage that's extremely close to 100% before it has any use. Neural networks are great at approximations, but an approximate person is just a nightmare.
What is your time horizon? We're already at a date where people were saying these jobs would be gone. The people most optimistic about the trajectory of this technology were clearly wrong.
If you tell me AI newscasters will be fully functional in 10 or 15 years, I'll believe it. But that far in the future, I'd also believe news will be totally transformed due to some other technology we aren't thinking about yet.
AI lets us see everything we track data on right now - and see it in a useful way, in real time. It also means the tedious and repetitive tasks done by everyone no longer need to be done by anyone - creating a static webpage, graphics for a mobile app, a mobile app itself, game development - all of those are the easiest to do they have ever been.
AI isn't for millennials or even Gen Z - it's for Alpha; they will be the first to truly understand what AI is and to use it as it will be used forever after. Till they adopt it, none of this really matters.
the prevailing argument in favor of investing in AI is its potential.
the prevailing argument against using AI is its lack of current utility.
Those things are inherently in tension. Think of it as hiring a new employee straight out of undergrad: you are hiring them based largely on the employee they will become, with increasing expectations over time balanced against increasing variability in outcomes. However, if one year in that employee continues to suck at their current job, their long-term potential doesn't really matter. More to the point, the long-term potential is no longer credibly evidenced, given the inability to progress at the current job.
This is an investment gone bad in the current state of things. It doesn't matter what might happen; it matters what did. The investment was made based on the perception of astonishing velocity, and it seems that we may need to recalibrate our speedometers.
Isn't this essentially the same argument as "there are only 10 covid cases in this area, nothing to worry about"?
It's really missing the point; the point is whether or not exponential growth is happening. It doesn't with husbands, it does with covid, and time will tell with AI.
Transformers have been around for 7 years, ChatGPT for 2. This isn't the first few samples of what could be exponential growth. These are several quarters of overpromise and underdelivery. The chatbot is cool, and it outperforms what awful product search has become. But is that enough to support a $3.5 trillion parts supplier?
It amazes me how excited people are to see their livelihoods destroyed. I'm retired, but people designing AI in their 20's will be unemployed in a decade. Good luck dudes and dude-ettes, you're fucking yourselves.
Bluesky feels like a cocktail party where half the guests are knitting sweaters for their cats, the other half are debating how to save democracy, and I'm just standing there wondering how I got invited.
It’s like scrolling through a group chat where everyone forgot the topic but kept texting anyway.
Honestly, it’s impressive how they’ve managed to create a platform that feels simultaneously too niche and too random.
Idk, I spent a few hours over a few days trying to find something cool about it, and couldn’t.
The next wave won’t be monolithic but network-driven. Orchestration has the potential to integrate diverse AI systems and complementary technologies, such as advanced fact-checking and rule-based output frameworks.
This methodological growth could make LLMs more reliable, consistent, and aligned with specific use cases.
The skepticism surrounding this vision closely mirrors the early doubts about the internet.
Initially, the internet was seen as a fragmented collection of isolated systems without a clear structure or purpose. It really was. You would gopher somewhere and get a file, and eventually we had apps like pine for email, but as cool as it all was, it had limited utility.
People doubted it could ever become the seamless, interconnected web we know today.
Yet, through protocols, shared standards, and robust frameworks, the internet evolved into a powerful network capable of handling diverse applications, data flows, and user needs.
In the same way, LLM orchestration will mature by standardizing interfaces, improving interoperability, and fostering cooperation among varied AI models and support systems.
Just as the internet needed HTTP, TCP/IP, and other protocols to unify disparate networks, orchestrated AI systems will require foundational frameworks and “rules of the road” that bring cohesion to diverse technologies.
We are at the veeeeery infancy of this era and have a LONG way to go. Some of the progress looks clear and linear, but a lot of it, like the Internet, will just take a while to mature, and we shouldn't forget what we learned the last time we faced a sea-change technological revolution.
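To make "orchestration" concrete, here's a minimal sketch; every name in it (generate, fact_check, enforce_rules) is a hypothetical stand-in, not any real API:

    from typing import Callable

    def generate(prompt: str) -> str:
        # Stand-in for a call to some LLM.
        return f"draft answer to: {prompt}"

    def fact_check(text: str) -> str:
        # Stand-in for a retrieval-backed verification pass.
        return text

    def enforce_rules(text: str) -> str:
        # Stand-in for a rule-based output framework (formatting, policy).
        return text.strip()

    def orchestrate(prompt: str, stages: list[Callable[[str], str]]) -> str:
        out = generate(prompt)
        for stage in stages:
            out = stage(out)  # each stage may rewrite or veto the draft
        return out

    print(orchestrate("why is the sky blue?", [fact_check, enforce_rules]))

The shared stage interface is the toy version of the "rules of the road": any checker that speaks the same contract can be composed in.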
You are definitely on to something here, but the difference is that the fundamental process was proven. It "just" needed to scale. That's hard and complex, but on a different level.
I don't think anyone doubted the nature of the technology. The bits were being sent. It's not like we were unsure of the fundamental possibility of transmitting information. The potential was shown very, very early on (Mother of all demos was in 1968). What we were and to some extent still are unsure of is the practical impact on society.
AI and LLMs in particular are not even at the mother of all demos level yet notwithstanding the grandiose claims and demos. There is no consensus on what these models are even doing. There is (IMO) justified skepticism surrounding the claims of reasoning and ability to abstract. We are in my opinion not yet at the "bits are being sent" stage.
I see this as entirely surmountable. We’re still making geometric progress in small model accuracy, and breakthroughs like test-time training and synthetic data are poised to deliver immediate gains in self-training performance.
Your point about skepticism being warranted when viewing this linearly is well taken. But this isn’t a linear path. The Internet, at its core, was about connecting computers to unlock the value of those connections—a transformative but relatively straightforward concept.
What we’re dealing with now is the training of cognitive digital intelligence. This is an inherently dynamic and breakthrough-oriented process, one that evolves in ways far less predictable or constrained than simple network effects. While the metaphor of connectivity is useful, it doesn’t fully capture the parallel, multi-dimensional approaches at play here.
Pessimism, in my view, is deeply unwarranted, especially given the history of technological progress. Time and again, advancements have proven to be far more impactful and beneficial than even the most optimistic predictions. Consider the projections for AI in 2017—most futurists undershot its actual progress by an order of magnitude.
Does possessing specialized knowledge or skills alter the ethical landscape of medical decisions?
In other words, should someone with the capacity to administer an experimental treatment be held to a different standard than those without such expertise?
Let’s say an individual opts for a less effective or accessible treatment due to personal limitations or lack of knowledge: does this alter the ethical weight of their decision?
This question becomes more complex when treatments have varying degrees of risk and benefit.
If a treatment is simply off-label, it often goes totally unnoticed unless it carries an unacceptable risk.
Under that condition, what happens when the treatment holds significant promise, potentially offering a curative outcome, but doesn't yet have clinical trials behind it? Does the prospect of a cure override the ethical concerns about its risks? Where is that line?
When faced with a life-threatening disease like recurring cancer, what is the individual's responsibility to society in considering the ethics of self-treatment? Should she have accepted the less effective therapy that had already led to a failure condition, for the good of the rest of us, even though we are barely affected?
How do the potential benefits to the individual—such as survival—contrast with the potential societal harm, if any, in terms of bypassing established protocols or ignoring sanctioned research?
At what point does the line blur between personal medical autonomy and the ethical implications of self-administered treatments?
I really do wonder about these questions. I mean, the only difference between the ‘ivermectin cures cancer’ et al crowd and this lady is her degree of knowledge, so how do we deal with this ethically?
Am I tired or dumb? I know all the words but I have no idea what you’re saying. I can’t pick any two consecutive rows of text in your message that I understand.
The long-term side effects of GLP-1 drugs are not well-studied, but the press seems to talk about them as if they are—in glowing terms as if their broad adoption is a given and a lifetime attached to the tit of a friendly drug company (with your credit card inserted) is merely a temporary problem.
I think we need to greet this whole class of drugs with skepticism, if for no other reason than their being a pharma company's wet dream of a product. The incentives to interrupt their rollout don't exist.
I’m aware of the wide range of benefits, and understand they may end up saving lives. But skepticism is warranted here.
GLP-1s are relatively well studied, though. At least compared to the vast majority of "new" drugs.
They have a history going back to the 1970s (50 years) and have multiple FDA approved brands going on 15 years now (Liraglutide - 2009 for the EU, 2010 for the US).
So sure - we're going to get the chance to observe effects that were very hard to tease out before, simply because we have a large number of folks taking them now.
But the press talks about them as if they are well studied because... well... comparatively they are.
Basically - even if I agree that caution is warranted here (and I do), your argument can be equally applied to drugs like ACE inhibitors/ARBs and insulin, both of which are pretty compelling drugs.
But diet and exercise are drastically more effective at eliminating these health issues and then some. You can't make a pill for that, though.
It does not "solve" addiction, obesity, or diabetes, it helps alleviate some of their effects. Solving would mean addressing the reason you're addicted, dealing with the underlying behaviors that cause most obesity (unhealthy food, too much food, and too little exercise) and type II diabetes.
There are corner cases where some genetic predisposition, or other factor like cancer treatments or injury lead to obesity or diabetes, and for those cases you probably do need an intervention. But this should not be the case for the masses. It is insane to consider "just giving this drug to everyone" especially when some of the side-effects include:
- Hypoglycemia
- Gallstones
- Pancreatitis
- Kidney failure
- Thyroid cancer (maybe)
and often include
- Diarrhea
- Constipation
- Fatigue
- Abdominal pain
- Bloating
- Burping
- Heartburn
- Blurred vision
From an investor's perspective, I'm sure this is magic. From an interventionist medical system like the U.S., I'm sure this also seems magic. But for nearly all people dealing with these problems it doesn't address the underlying behavioral issues.
> The long-term side effects of GLP-1 drugs are not well-studied
GLP-1 is a hormone that is released every time you eat, and the drugs are virtually identical hormones at natural levels that take ~100x longer (half life of a couple hours) to be excreted. It's a tiny amount of one more peptide among millions- that part is certainly safe.
If there's any issue with the drug it would be from the constant activation of a natural hormone receptor. Like how anabolic steroids can hurt you, and TRT can affect your natural testosterone production. Maybe after 10 years it breaks your natural satiety system and makes you always/never hungry. That kind of thing would probably show up in mouse models.
Either way - obesity is the largest cause of lost years in the first world, and it isn't close. If you are obese your whole life, you die 10-15 years earlier and are sicker along the way. It's not going to kill you as fast as untreated type 2 diabetes, but in both cases it would be crazy not to take a drug that can make those problems just go away.
>The long-term side effects of GLP-1 drugs are not well-studied
The long-term effects of diabetes are.
For me, this was the _ONLY_ thing that brought my blood sugar under control.
Even with severe dietary restriction, my blood sugar would be dangerously high first thing in the morning after fasting 12-16hrs.
The 'potential side effects' of the drugs I was taking were terrifying. And the list of drugs I was on was so long that even if there was only a 1% chance that I'd catch a side effect from 1% of the drugs, my prospects went down to nil.
I was scheduled for gastric-bypass surgery.
I can modestly say that ozempic probably saved my life.
I've lost ~100lbs (~45kgs) and I can now wear the same size clothes that I wore in high-school which is a nice benefit too.
> The long-term side effects of GLP-1 drugs are not well-studied
I think it's been around for long enough (2017-2024 for Semaglutide), and there have been enough people taking it (tens of millions), that it's possible to start drawing conclusions about this. Maybe a Bayesian approach might say something?
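As a toy version of that back-of-envelope (the user and event counts below are invented, not real pharmacovigilance data):

    # Rule-of-succession estimate for a rare adverse-event rate.
    # Hypothetical numbers: n people observed, k events attributed.
    n = 10_000_000
    k = 0

    # With a flat Beta(1, 1) prior, the posterior over the rate is
    # Beta(1 + k, 1 + n - k), whose mean is (k + 1) / (n + 2).
    posterior_mean = (k + 1) / (n + 2)
    print(f"posterior mean event rate ~ {posterior_mean:.1e} per person")

Even zero observed events only bounds the rate at roughly one in ten million per person under these assumptions; it can't rule out slower-burning effects.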
Lifetime attachments are fantastic. You cannot become an off-grid partisan prepper if you are medically dependent on a functioning society. Insulin, hearing aids, the other aids, neural implants, software as a service. To be a stability hostage of society is fantastic. Like Dis, or else.
How is that different from covid vaccines? The long term risk wasn't known, but covid was super dangerous for a subset of the population, so for those it was absolutely a risk worth taking, and so is obesity.
Memetically speaking, it evidently has an R0 above one, though! And I guess there are even such people who deny that there is a "condition" of obesity.
It's not very different from those or any other drugs. There's always a cost-benefit analysis.
Now we have to wonder when people are going to try mandating these. I can imagine the argument will be "of course not, covid was killing everyone and contagious. Well, obesity is killing everyone and driving up healthcare costs. But my body my choice! But not when it can harm other people! Etc"
Do you know that for sure? What portion of hospital visits are due to issues downstream of obesity? Heart disease, diabetes, and stroke hospitalize more people than covid ever could have dreamed of.
A vaccine only takes a few doses, and they're pretty cheap. Worst-case scenario, you get a booster every year. These drugs stop working the second you stop taking them.
As a former user, “the second you stop taking them” isn’t totally accurate. It does take a few weeks for the effects to wear off and appetite to return.
It's down to how much of a diet you do (the drug only manages your appetite). I went hardcore and lost 40kg in 3 months, almost "effortlessly". One meal a day, no sugar, 45min cardio daily. Now trying to figure out where the floor is and planning a progressive reduction in dose/frequency over the next few months. I am lucky I had no side effect and I increased the dosage only sparingly, but people react differently.
Wow, that is crazy - something like a 3,500 kcal deficit a day. That's heavy, and I'm glad it worked for you! I struggle with cardio, simply don't like it; I'm more into lifting heavy stuff... :-D
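The rough arithmetic behind that figure, using the common ~7,700 kcal-per-kilogram-of-fat rule of thumb:

    # 40 kg lost over ~90 days at roughly 7,700 kcal per kg of body fat.
    kg_lost, days = 40, 90
    deficit_per_day = kg_lost * 7700 / days
    print(f"~{deficit_per_day:.0f} kcal/day deficit")  # ~3422 kcal/day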
What weight did you start out from? 40-50 kg is what I'd like to lose after all, I think.
133kg to 93kg. But I am a big guy, 188cm, so I think my absolute min weight would be 87kg, perhaps a bit more now that I am older.
To be honest, I do the cardio mostly to stay healthy; I am not convinced it contributes much to the weight loss. I had to pause the cardio twice for a week while on holiday and lost the same amount of weight those weeks. I do take vitamin supplements, though.
In terms of meals, I have a supermarket ready meal, typically pasta. So it's not even an unpleasant diet. But just that for the day, no dessert. I think cutting sugar is key. I do allow myself a glass of red wine in the evening.
Now, I am no doctor and I don't necessarily recommend it; it is just that it worked for me, and it allows you to make an intense but concentrated effort. Another downside of fast weight loss like that is loose skin; it will probably take a few more months and some weight lifting to remediate that.
Very successful. I used it as an opportunity to change my lifestyle and eating habits.
I stopped eating processed foods and cut nearly all my sugar intake. It was a total lifestyle change and I lost 40 pounds in the process. I’ve been off of it for nearly a month, kept off the weight so far, and never felt better.
Glad it worked for you! I found that with less appetite there's less cravings, that seems to help me to transition to better food. I hope the effect stays that way.
Nothing else I've taken so far has changed my life in such an immediate and drastic manner. It's why I'm all over these threads in a desire to help dispel misinformed social-media fueled FUD. There are legitimate concerns to be had, but what most people repeat even on HN are downright Facebook meme quality level.
That said...
For me, I went from 276lbs to 162lbs at my lowest in about 9mo on Tirzepatide (Mounjaro/Zepbound). 85% of the loss was in the first 100 days. I was putting in all the effort I possibly could aside from taking the drug, but I attribute the drug for most of my actual long-term success. It made things I had tried to do in the past (eating healthy, eating properly sized portions, regulating my snacking/late night binges, drinking) much easier. I call it a PED for dieting. Losing the weight also made exercising at first tolerable, and these days downright enjoyable and something I look forward to on training days.
Since I hit my lowest I have put on about 25lbs of lean muscle mass by hitting the gym for resistance training on a regular, consistent schedule. When you see the results I did so rapidly in one direction, it's highly motivating to know you can "put in the work" and see results in the other. I'm now about 187lbs at 5'11" with a body fat percentage of just under 12% from my latest DEXA scan. I plan to stabilize at around 11% or so, since the studies show 12% is where the major long-term health benefits start to accrue. After that I will begin to focus on increasing my VO2max (i.e., cardio fitness) as much as possible. I'll be at 2 years from starting Tirzepatide this coming March.
The drug in combination with lifestyle changes can work wonders. I am but one example, and not much of an outlier at this point.
I used to worry about "outing" myself when I first started taking it, but after seeing the results I did and having friends ask me what the hell I was doing to see such success I realized I could no longer pretend it was "eating less and moving more" - I didn't want to be part of the problem.
* They're expensive right now because there's a shortage.
* Once the actuaries see the long term data I imagine insurance companies will foot the bill entirely for them as a cost-saving measure. The only thing strong enough to override a doctor's "I bet it's your period" is "I bet it's your weight."
* It is genuinely far, far easier to maintain weight than lose it. Your body establishes a new set point and you drift +/- 10 lbs around it naturally.
Can't speak for these drugs specifically, but after losing 100 lbs my appetite adjusted, I got higher energy, and everything took far less effort - working out especially.
Oh I was considering the $200/mo to be the expensive price. Yeah name brand you can pay $1k/mo but I don't imagine most folks going that route over the generic given the choice.
But yeah, that's a good point I suppose. Fewer side effects than meth that's for sure.
The long-term effects of COVID are pretty darn obvious, though. I know two people with Myalgic Encephalomyelitis from the 2009 flu epidemic, and it's nasty. The vaccine was widespread enough that we've got a reasonable upper bound on the COVID vaccine's worst case, making the cost/benefit analysis an easy call.
Yeah, but you are not meant to be on a diet your whole life either; they help you get through a diet. If people revert to unhealthy habits after that, it is on them. But there is a difference between fasting to lose weight (where you are hungry, which is very hard to sustain) and a stable diet (where you do not have to fight hunger).
You kinda do though. Ideally dieting isn't some activity you do for a little while and then go back to inhaling oreos. Instead you find a long term sustainable lifestyle that doesn't cause you to steadily gain weight.
I think you are confusing "dieting" and a "sustainable lifestyle". Dieting is starving yourself to lose weight. That's not sustainable, and those drugs help a lot. But once you are at your target weight, you can switch to a sustainable lifestyle where hunger is much less of a problem; it's just resisting the temptation of the oreos. But that's not hunger, that's gourmandise.
They (you know: those people) need to find a vaccine that does the same thing as these GLP-1s. It seems termination shock is becoming a bigger issue with these new products as time goes on.
It’s probably generally irrelevant what they can do today, or what you’ve seen so far.
This is conceptually Moore's law, but with a doubling roughly every 5.5 months. That's the only thing that matters at this stage.
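Taking that figure at face value (the 5.5-month doubling is the claim here, not an established constant), the compounding is easy to check:

    # Implied multiplier if capability doubles every 5.5 months.
    months_per_doubling = 5.5
    for years in (1, 2, 5):
        doublings = years * 12 / months_per_doubling
        print(f"{years} year(s): x{2 ** doublings:,.0f}")
    # Roughly x4.5 per year: about x21 in two years, ~x1,900 in five.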
I watched everyone make the same arguments about the general Internet, and then the Web, then mobile, then Bitcoin. It’s just a toy. It’s not that useful. Is this supposed to be the revolution? It uses too much power. It won’t scale. The technology is a dead end.
The general pattern of improvement in technology has been radically to the upside, at a parabolically increasing pace, for decades, and there's nothing indicating that this is a break in the pattern. In fact, it's setting up to have an order of magnitude greater impact than the Internet did. At a minimum, I don't expect it to be smaller.
Looking at early telegraphs doesn’t predict the iPhone, etc.
>> Looking at early telegraphs doesn’t predict the iPhone, etc.
The problem with this line of argument is that LLMs are not a new technology; rather, they are the latest evolution of statistical language modelling, a technology that we've had at least since Shannon's time [1]. We are way, way past the telegraph era, and well into the age of large telephony switches handling millions of calls a second.
Does that mean we've reached the end of the curve? Personally, I have no idea, but if you're going to argue we're at the beginning of things, that's just not right.
________________
[1] In "A Mathematical Theory of Communication", where he introduces what we today know as information theory, Shannon gives as an example application a process that generates a string of words in natural English according to the probability of the next letter in a word, or the next word in a sentence. See Section 3, "The Series of Approximations to English".
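Shannon's word-level "series of approximations" is easy to reproduce. A toy second-order (bigram) sampler in Python, seeded with the words from the sample in his paper:

    import random
    from collections import defaultdict

    # Words from Shannon's second-order word approximation example.
    corpus = ("the head and in frontal attack on an english writer that "
              "the character of this point is therefore another method "
              "for the letters that the time of who ever told the problem "
              "for an unexpected").split()

    # Record which words follow which in the corpus.
    bigrams = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev].append(nxt)

    # Generate by repeatedly sampling a plausible next word.
    word = "the"
    out = [word]
    for _ in range(15):
        followers = bigrams.get(word)
        word = random.choice(followers) if followers else random.choice(corpus)
        out.append(word)
    print(" ".join(out))

An LLM is, at enormous scale and with far richer conditioning, a descendant of exactly this move: predict the next token from context.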
I think we can pretty safely say bitcoin was a dead end other than for buying drugs, enabling ransomware payments, or financial speculation.
Show me an average person who has bought something real with bitcoin (who couldn't have bought it with less complexity/transaction cost using a bank) and I'll change my mind.
Bitcoin failed because of bad monetary policy turning it into something like a ponzi scheme where only early adopters win. The monetary policy isn't as hard to fix as people make it out to be.
Yes but they’re also anonymous. You don’t have your name attached to the account and there’s no paperwork/bank that’s keeping track of any large/irregular financial transactions
I heard this as one of the early sales pitches for Bitcoin. “Digital cash.”
That all seemed to go out the window when companies developed wallets to simplify the process for the average user, and when the prices surged, some started requiring account verification to tie it to a real identity. At that point, it’s just a bank with a currency that isn’t broadly accepted. The idea of digital cash was effectively dead, at least for the masses who aren’t going to take the time to figure out how to use Bitcoin without a 3rd party involved. Cash is simple.
No, not exactly. If you know someone used cash at one place can you track every cash transaction they've ever made? If you know one bitcoin transaction from a wallet you can track everything that key pair has done from genesis to present. So, if anything, it's worse.
Speaking of the iPhone, I just upgraded to the 16 Pro because I want to try out the new Apple Intelligence features.
As soon as I saw integrated voice+text LLM demos, my first thought was that this was precisely the technology needed to make assistants like Siri not total garbage.
Sure, Apple's version 1.0 will have a lot of rough edges, but they'll be smoothed out.
In a few versions it'll be like something out of Star Trek.
"Computer, schedule an appointment with my Doctor. No, not that one, the other one... yeah... for the foot thing. Any time tomorrow. Oh thanks, I forgot about that, make that for 2pm."
Try that with Siri now.
In a few years, this will be how you talk to your phone.
The issue with appointments is the provider needs to be integrated into the system. Apple can’t do that on their own. It would have to be more like the roll out of CarPlay. A couple partners at launch, a lot of nothing for several years, and eventually is a lot of places, but still not universal.
I could see something like Uber or Uber Eats trying to be early on something like this, since they already standardized the ordering for all the restaurants in their app. Scheduling systems are all over the place.
I meant appointment in the "calendar entry category" sense, where creating an appointment is entirely on-device and doesn't involve a third party.
Granted, any third-party integrations would be a significant step up from my simple scenario of "voice and text comprehension" and local device state manipulation.
In many situations, I prefer text to voice. Text means easier record keeping, manipulation, search, editing, and so on.
With some irony, the Hacker News user interface is essentially all just simple text.
A theme in current computer design seems to be: Assume the user doesn't use a text editor and, instead, needs an 'app' for every computer interaction. Like cars for people who can't drive, and a car app for each use of the car -- find a new BBQ restaurant, need a new car app.
Sorry, Silicon Valley, with text anyone who used a typewriter or pocket calculator can do more and have fewer apps and more flexibility, versatility, generality.
I am generally on your side of this debate, but Bitcoin is a reference that is in favor of the opposite position. Crypto is/was all hype. It's a speculative investment, that's all atm.
Bitcoin is the only really useful crypto, and it fundamentally has no reason to die, because of basic economics. It is fundamentally the only hard currency we have ever created, and that's why it is revolutionary.
I find it hard to accept the statement that "[bitcoin] is fundamentally the only hard currency we have ever created". Is that saying that gold-backed currencies were not created by us, or that gold isn't hard enough?
Additionally, there's a good reason we moved off deflationary hard currencies and onto inflationary fiat currencies. Bitcoin acts more like a commodity than a medium of exchange. People tend to buy it, hold it, and then eventually cash out. If I am given a bunch of bitcoin, the incentive is for me not to spend it, but rather keep it close and wait for it to appreciate — what good is a currency that people don't spend?
Also I find it weird when I read that due to its mathematically proven finite supply it is basic economics that gives it value. Value in modern economics is defined as what people are willing to give up to obtain that thing. Right now, people are willing to give up a lot for bitcoin, but mainly because other people are also willing to give up a lot for bitcoin, which gives it value.
It's a remarkable piece of engineering that has enabled this (solving the double-spending problem especially), but it doesn't have inherent value in and of itself. There are many finite things in the world that are not valued as highly as bitcoin. There's a finite number of beanie babies, a finite number of cassette tapes, a finite number of blockbuster coupons...
Gold is similar — should we all agree tomorrow that gold sucks and should never be regarded as a precious metal, then it won't lose its value completely (there's only a finite amount of it, and some people will still want it, e.g. for making connectors). But its current valuation is far higher than it would be for its scarcity alone — people mainly want gold, because other people want gold.
Hello everyone! With immense joy in my heart, I want to take a moment to express my heartfelt gratitude to an incredible lottery spell psychic, Priest Ray. For years, I played the lottery daily, hoping for a rewarding outcome, but despite my efforts and the various tips I tried, success seemed elusive. Then, everything changed when I discovered Priest Ray. After requesting a lottery spell, he cast it for me and provided me with the lucky winning numbers. I can't believe it—I am now a proud lottery winner of $3,000,000! I felt compelled to share my experience with all of you who have been tirelessly trying to win the lottery. There truly is a simpler and more effective way to achieve your dreams. If you've ever been searching for a way to increase your chances of winning, I encourage you to reach out via email: psychicspellshrine@gmail.com
People discussing AI certainly is the perfect place to advertise something about the "incredible lottery spell psychic". Honestly, I can't even tell if it's satire or spam, and I love it.
If that’s the case, I’d like to hear from Matt about this. I’ve known him for years, and I don’t think he is unaware of conflicts like these. In fact I’ve seen him be deeply thoughtful about complex problems in the past. He’s not perfect (who is?), but he really does try.
Given that he has been pretty reasonable about stuff like this in the past, I don’t find myself inclined to ascribe bad intent until I hear from him personally.
Seems like the kind of situation where only one person can answer.
> Given that he has been pretty reasonable about stuff like this in the past, I don’t find myself inclined to ascribe bad intent until I hear from him personally.
there is a level of actions that are so bad that intent doesn't actually matter anymore. i would say matt has crossed that line here.
ThePrimeagen just did an interview with him, the video is also available on youtube now too.
Not the best interview, IMO, since Prime didn't have much time to prepare questions and topics, so he is very much "firing from the hip", but you'll get to hear Matt go into detail about this topic.