> old code gets automatically compiled by the old version of the compiler
That's not what happens. You always use the same version of the compiler. It's just that the newer compiler version also knows several older dialects (known as editions) of the language.
Right, it's not considered weird for a C++ compiler to offer C++98, C++11, C++14, C++17, C++20, C++23 and C++26 (seven versions) and support its own extra dialects.
It is also usual for C++ compilers to support all seven standard library versions. Rust doesn't have this problem: editions can define their own stdlib "prelude" (a prelude is a set of use statements offered by default, and it's the reason you can just say Vec or println! in Rust rather than their full names), but they all share the same standard library.
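To make that concrete, here's a minimal sketch (I'm assuming the usual 2021-prelude additions like TryInto here; worth checking against the edition guide). The same code compiles as-is on edition 2021 but needs an explicit use on edition 2018, while both editions call the exact same standard library:

```rust
// Compiles as-is with `edition = "2021"` in Cargo.toml, because the 2021
// prelude includes std::convert::TryInto. Under edition 2018 you'd need the
// explicit `use` below, but the standard library itself is identical.
// use std::convert::TryInto; // required on edition 2018, redundant on 2021

fn main() {
    let n: u64 = 300;
    let byte: Result<u8, _> = n.try_into(); // TryInto comes from the prelude
    println!("{:?}", byte); // Err(...) because 300 doesn't fit in a u8
}
```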
core::mem::uninitialized() is a bad idea and we've known that for many years, but that doesn't mean you can't use it in brand new 2024 Edition Rust code; it just means doing so is still a bad idea. In contrast, C++ sometimes removes things entirely from its standard library because they're now frowned upon.
I don't know how everyone arrives at that conclusion when the cost of the subscription services is also going up (as evidenced by the very article we're talking about). People who are renting are feeling this immediately, whereas people who bought their computers can wait the price hikes out for a couple years before they really need an upgrade.
Subscriptions have a "boiling frog" phenomenon where a marginal price increase isn't noticeable to most people. Our payment rails are so effective that many people don't even read their credit card statements; they just have vampires draining their accounts monthly.
Starting with a low subscription price also has the effect of atrophying people's ability to self-serve. The alternative to a subscription is usually capital-intensive - if you want to cancel Netflix you need to have a DVD collection. If you want to cancel your thin client you have to build a PC. Most modern consumers live on a knife edge where $20/month isn't perceptible but $1000 is a major expense.
The classic VC-backed model is to subsidize the subscription until people become complacent, and then increase the price once they're dependent. People who self-host are nutjobs because the cloud alternative is "cheaper and better" until it stops being cheaper.
My bank has an option to send me a notification every time I'm charged for something. I've noticed several bills that were higher than they should have been "due to a technical error". I'm certain some companies rely on people not checking and randomly add "errors".
Notably there's no way (known to me) that you can have direct debits sent as requests that aren't automatically paid. I think that would put consumers on an equal footing with businesses though, which is obviously bad for the economy.
It's normally an option in my experience. I have mine set for charges over $100. I don't want a notification every time I buy gas (I do check my statements every month though).
What is the harm in being notified when you buy gas? It doesn’t hurt anything, and I DO want to be notified if someone else buys gas on my card!
The discussion started as a way to avoid forgetting to cancel subscriptions or to catch subscription price increases; if you are setting your limit to $100, you aren’t going to be seeing charges for almost all your subscriptions.
I have my minimum set to $0, so I see all the charges. Helpful reminder when I see a $8 charge for something I forgot to cancel.
Alert fatigue. Most people, if they get an alert for every single purchase they make, will learn to ignore the alerts as they are useless 99% of the time. Then when an alert comes through that would be useful, they won't see that either.
Anyone who has had the misfortune to work on monitoring systems knows the very fine line you have to walk when choosing what alerts to send. Too few, or too many, and the system becomes useless.
As I said, I have my alert set to $0 and it really hasn’t caused fatigue. For one thing, when it is something I just purchased, the alert is basically just a confirmation that the purchase went through. I close it immediately and move on.
If I get an alert and I didn’t buy anything, it makes me think about it. Often, it just reminds me of a subscription I have, and I take the moment to think about whether I still need it or not. If I start feeling like I am getting a lot of that kind of alert, I need to reevaluate the number of subscriptions I have.
If I get an alert and I don’t immediately recognize the source (the alert will say the amount and who it is charged to), it certainly makes me pause and try to figure out what it is, and that has not been “alert fatigued” away from me even after 10+ years of these alerts.
Basically, if I get an alert when I didn’t literally JUST make a purchase, it is worth looking into.
I don’t think it causes alert fatigue; I am not getting a bunch of false alerts throughout my day, because I shouldn’t be having random charges appear if I am not actively buying something.
> The alternative to a subscription is usually capital-intensive - if you want to cancel Netflix you need to have a DVD collection.
I did Apple Music and Amazon Music. The experience of losing “my” streaming library twice totally turned me off these kinds of services. Instead I do Pandora, and just buy music when I (rarely) find something I totally love and want to listen to on repeat. The inability to build a library in the streaming service that I incorrectly think of as “mine” is a big feature, keeps my mental model aligned with reality.
I do wish these services would have an easier method to import/export playlists and collections. But that would make it easier to leave, so it's not going to happen.
This is something I’ve been seeing for a while. As a teen who kept his 300 dollar paycheck in cash, that money would last a very long time. Now I make a good 6 figures and was seeing my accounts spend way more than they should. It wasn’t big purchases; it was 50 dollars here, 200 there. A subscription here and there. By the end of the month I would rack up 8k in spending.
Going line by line, I learned how much I had neglected these transactions as the source of my problem. Could I afford it? Yes. But saving and investing is a better vehicle for retiring early than these minor dopamine hits.
> if you want to cancel Netflix you need to have a DVD collection
You don't need a whole DVD collection to cancel Netflix, even ignoring piracy. Go to a cheaper streaming service, pick a free/ad supported one, go grab media from the library, etc. Grab a Blu-Ray from the discount bin at the store once in a while, and your collection will grow.
No, I do own some (actually it was more in the VHS days so tapes) and I just found that I never really watched them again. So I stopped buying movies. I'm the same with books. Once I read it, I've read it. I would rarely read a novel twice. I know what's going to happen, so what's the point? Reference books are different of course.
You're not really thinking this through enough. The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again? Presumably you do get something out of listening to music again (since you said you do listen to it more than once), so whatever that "something" is... you can infer that others get similar value out of rereading books/rewatching movies, even if you personally don't.
For myself, the answer is "because the story is still enjoyable even if I know how it will end". And often enough, on a second reading/viewing I will discover nuances to the work I didn't the first time. Some works are so well made that even having enjoyed it 10+ times, I can discover something new about it! So yes, the pleasure of experiencing the story the first time can only be had once. But that is by no means the only pleasure to be had.
> The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again?
Most music doesn't have the same kind of narrative and strong plot that stories like novels and movies do, and that is a massive difference. And even when it does, it doesn't usually take half an hour or more to play out. That's a pretty big difference between the types of art.
I've bought a ton of movies in the past. The vast majority I've sold second hand or thrown away because I just didn't care to watch again and I didn't feel like storing something I'd never use forever.
Same goes for a lot of other media. Some amount of it I'll want to keep but most is practically disposable to me. Even most videogames.
Sure, but modern cloud subscriptions have a lot of service layers you otherwise wouldn't pay for, so effectively you may be buying the hardware yearly. That's a lot different from renting a media collection that would take a lifetime to assemble, for the price of one new item a month.
> Subscriptions have a "boiling frog" phenomenon where a marginal price increase isn't noticable to most people.
This is so apt and well stated. It echoes my sentiment, but I hadn't thought to use the boiling frog metaphor. My own organs are definitely feeling a bit toastier lately.
The difference is that if a subscription goes up from $10 to $15, that doesn't seem too bad.
But if you want to purchase a new computer, and the price goes from $1000 to $1500, then that's a pretty big deal. (Though in reality, the price of said computer would probably go up even more, minimum double. RAM prices are already up 6-8 fold from summer)
Where I live, a pair of Kingston FURY Beast Black RGB DDR5 6000MHz 32GB (2x16GB) has literally gone up from what is equivalent to $125 this summer, to currently selling for what is equivalent to $850.
I think looking at the exact same product from the same retailer is not really the full story. Personally I would accept looking at the same exact spec of RAM across retailers in your region.
Maybe it's still a lot more for you, but in the US it's not as bad as I see people say.
Realistically people normally buy whatever RAM is the cheapest for the specs they want at the time of purchase, so that's the realistic cost increase IMO.
Wouldn't historical data also be inflated by the gold plated Monster branded RAM sticks too though? Making the now to then comparison, well, comparable.
The MSRP for those GPUs is already inflated. There's a reason Nvidia is going to start making more RTX 3060 GPUs: people (and system builders) can't afford 40XX and 50XX GPUs.
Difference is subscriptions need to support IT staff, data centers, and profit margins. A computer under your desk at home has none of those support costs and it gets price competition from used parts which subscriptions don't have.
Cloud (storage, compute, whatever) has so far consistently been more expensive than local compute over even short timeframes (storage especially, I can buy a portable 2TB drive for the equivalent of one year of the entry level 2TB dropbox plan). These shortage spikes don't seem likely to change that? Especially since the ones feeling the most pressure to pay these inflated prices are the cloud providers that are causing the demand spike in the first place. Just like with previous demand spikes, as a consumer you have alternatives such as used or waiting it out. And in the meantime you can laugh at all your geforce now buddies who just got slapped with usage restrictions and overage fees.
Subscription is still worth it for most people though. Sure it costs more, but your 2TB plan isn't a single harddrive, it is likely across several harddrives with RAID ensuring that when (not if!) they fail no data is lost, plus remote backups. When something breaks the subscription fixes that for no extra charge.
If you know how to admin a computer and have time for it, then doing it yourself is cheaper. However make sure you are comparing the real costs - not just the 2TB, but the backup system (that is tested to work), and all your time.
That said, subscriptions have all too often failed reasonable privacy standards. This is an important part of the cost that is rarely accounted for.
I’m not even sure it does cost more. I could have a geforcenow subscription for like 8 years before it’s more expensive than building a similar spec gaming rig.
Depends on the service, and timeframes. For geforcenow, you also need to consider the upgrade cycle - how often would you need to upgrade to play a newer game? I'm not sure but probably at least once within that 8 years. Buying a new car, or almost new car, and driving it until it falls apart is a better financial option than leasing. But if you want a new car every year or two, leasing is more affordable - for that scenario. Also it depends on usage. My brother in law probably plays a video game once every other month. At that point, on demand pricing (or borrowing for me) is much better than purchase or consistent subscription. You need to run the numbers.
Depends on how much you play. geforcenow is limited to 100 hours a month, with additional hours sold at a 200% premium. This dramatically changes the economics ( https://www.techpowerup.com/344359/nvidia-puts-100-hour-mont... has a handy chart for this )
I'm not sure what the value of shaming people's hobbies is. 3 hours a day is easy if it's your primary hobby, and likely double/triple that on weekends.
> Sure it costs more, but your 2TB plan isn't a single harddrive, it is likely across several harddrives with RAID ensuring that when (not if!) they fail no data is lost, plus remote backups. When something breaks the subscription fixes that for no extra charge.
Well yes, of course. And for cloud compute you get that same uptime expectation. Which if you need it is wonderful (and for something like data arguably critical for almost everyone). But if we're just talking something like a video game console? Ehhh, not so much. So no, you don't include the backup system cost just because cloud has it. You only include that cost if you want it.
But if you finance the computer over, say, 24 months (not hard to get 0% financing on consumer electronics), the price goes from about $41 a month to $62 a month. It’s the same difference.
The mental model of subscriptions and financing are totally different. If I'm paying a subscription I might cancel next month, and that's a sort of freedom. If I'm financing a piece of hardware I don't want to stop paying, I want that hardware, so that's a commitment.
The difference is that with financing you're stuck with it (and your credit rating drops, at least in the EU here). You're not stuck with a subscription. If your income changes and you can't afford it anymore then you can cancel your subscription.
In the US if you don't have any debt, that is bad for your credit rating. Perversely, the more debt you have, the easier it is to get more credit, at least up to a point.
Oh sure, my original comment’s point was just to allude to the point that costs are going up for all methods of compute, so that fact alone shouldn’t influence your buy versus rent versus finance decision too much.
This idea that there’s a conspiracy to take personal computing away from the masses seems far fetched to me.
Or much longer. The computers I use most on a daily basis are over 10 years old, and still perfectly adequate for what I do. Put a non-bloated OS on them and many older computers are more than powerful enough.
There must be a breaking point. We reached ours last year, when the price went up again and my grandfathered plan wasn’t accepted anymore. So I talked to the missus and we cancelled our Netflix.
It had been my account for, what, a decade? A decade of not owning anything because it was affordable and convenient. Then shows started disappearing, prices went up, we could no longer use the account at her place (when we lived separately), etc. And, sadly, I’m done with them.
I think most people will eventually reach a breaking point. My sister also cancelled, which I always assumed would never happen.
> I don't know how everyone arrives at that conclusion when the cost of the subscription services is also going up
Of course they will go up; that's the whole idea. The big providers stock up on hardware, front-run the hardware market, and starve it of product while causing prices to rise sharply. At that point their services are cheaper because they are selling you the hardware they bought at low prices - the hardware they bought in bulk, under cheap long-term contracts and, in many cases, kept dark for some time.
The result: while retail hardware prices are high, cloud prices are lower; the latter rise later to make more profit, and the game can continue with the cloud providers always one step ahead of retail in a cycle of hoarding and scalping.
Most recently, scalping was big during the GPU shortages caused by crypto-mining. Scalpers would buy GPUs in bulk then sell them back to the starved market for a hefty margin.
Cloud providers buying up hardware at scale is basically the same, the only difference is they sell you back the services provided by the hardware, not the actual gear.
That's one common reason for renting, not the only one.
I've rented trailers and various tools before too, not because I couldn't afford to buy them, but because I knew I wouldn't need them after the fact and wouldn't know what to do with them after.
Is this true? I'm trying to think of a solid example and I'm drawing blanks.
Apartments aren't really comparable to houses. They're relatively small units which are part of a larger building. The better comparison would be to condominiums, but good luck even finding a reasonably priced condo in most parts of the US. I'd guess supply is low because there's a housing shortage and it's more profitable to rent out a unit as an apartment than to sell it as a condo.
It seems to me that most people rent because 1) they only need the thing temporarily or 2) there are no reasonable alternatives for sale.
Exactly, if you can’t afford the high upfront cost that you can stretch out over a longer period of time, you’re stuck paying more over the long term as the subscriptions get more expensive.
Because the World Economic Forum, where our political and corporate leaders meet and groom each other, point-blank advertised "you will own nothing and be happy."
That seems oddly rigid though. I need to know in advance which networks will definitely never need subnetting so I can assign them a /64.
Why have so, so many address bits and then give us so few for subnetting? People shame ISPs endlessly for only giving out /56s instead of /48s, pointing at the RFCs and such. But we still have 64 entire bits left over there on the right! For what? SLAAC? Was DHCP being stateful really such a huge problem that it deserves sacrificing half of our address bits?
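To put rough numbers on how few subnet bits that leaves, here's a quick sketch (the prefix lengths are just the allocations commonly discussed in the thread, and the arithmetic assumes SLAAC keeps the low 64 bits for the interface ID):

```rust
// Rough arithmetic: how many /64 networks fit inside common allocations,
// given that SLAAC effectively reserves the low 64 bits for the interface ID.
fn subnets_of_64(prefix_len: u32) -> u128 {
    1u128 << (64 - prefix_len) // number of /64s inside a /prefix_len
}

fn main() {
    println!("/48 -> {} /64 subnets", subnets_of_64(48)); // 65536
    println!("/56 -> {} /64 subnets", subnets_of_64(56)); // 256
    println!("/60 -> {} /64 subnets", subnets_of_64(60)); // 16
}
```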
> The actual intention has always been that there be no hard-coded boundaries within addresses, and that Classless Inter-Domain Routing (CIDR) continues to apply to all bits of the routing prefixes.
Yes, kind of. In the same sense that Vec<T> in Rust with reused indexes allows it.
Notice that this kind of use-after-free is a ton more benign though. This milder version upholds type-safety and what happens can be reasoned about in terms of the semantics of the source language. Classic use-after-free is simply UB in the source language and leaves you with machine semantics, usually allowing attackers to reach arbitrary code execution in one way or another.
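A tiny sketch of what that looks like (the arena type and slot-reuse scheme here are hypothetical, just for illustration): a stale index still yields a valid value of the right type, so type safety holds, but it silently points at whatever reused the slot.

```rust
// Minimal index-based "arena": freed slots go on a free list and get reused.
// A stale index after "free" still type-checks and returns a valid String;
// it just silently refers to whatever value reused that slot.
struct Arena {
    slots: Vec<Option<String>>,
    free: Vec<usize>,
}

impl Arena {
    fn new() -> Self { Arena { slots: Vec::new(), free: Vec::new() } }

    fn alloc(&mut self, value: String) -> usize {
        if let Some(i) = self.free.pop() {
            self.slots[i] = Some(value); // reuse a freed slot
            i
        } else {
            self.slots.push(Some(value));
            self.slots.len() - 1
        }
    }

    fn free(&mut self, i: usize) {
        self.slots[i] = None;
        self.free.push(i);
    }

    fn get(&self, i: usize) -> Option<&String> {
        self.slots.get(i).and_then(|s| s.as_ref())
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc("alice".to_string());
    arena.free(a);                            // `a` is now a dangling *index*
    let _b = arena.alloc("bob".to_string());  // reuses the same slot
    // Type-safe, no UB, but semantically a use-after-free: prints Some("bob").
    println!("{:?}", arena.get(a));
}
```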
That what happens can be reasoned about in the semantics of the source language as opposed to being UB doesn't necessarily make the problem "a ton more benign". After all, a program written in Assembly has no UB and all of its behaviours can be reasoned about in the source language, but I'd hardly trust Assembly programs to be more secure than C programs [1]. What makes the difference isn't that it's UB but, as you pointed out, the type safety. But while the less deterministic nature of a "malloc-level" UAF does make it more "explosive", it can also make it harder to exploit reliably. It's hard to compare the danger of a less likely RCE with a more likely data leak.
On the other hand, the more empirical, though qualitative, claim made by matklad in the sibling comment may have something to it.
[1]: In fact, take any C program with UB, compile it, and get a dangerous executable. Now disassemble the executable, and you get an equally dangerous program, yet it doesn't have any UB. UB is problematic, of course, partly because at least in C and C++ it can be hard to spot, but it doesn't, in itself, necessarily make a bug more dangerous. If you look at MITRE's top 25 most dangerous software weaknesses, the top four (in the 2025 list) aren't related to UB in any language (by the way, UAF is #7).
>If you look at MITRE's top 25 most dangerous software weaknesses, the top four (in the 2025 list) aren't related to UB in any language (by the way, UAF is #7).
FWIW, I don't find this argument logically sound, in context. This is data aggregated across programming languages, so it could simultaneously be true that, conditioned on using memory unsafe language, you should worry mostly about UB, while, at the same time, UB doesn't matter much in the grand scheme of things, because hardly anyone is using memory-unsafe programming languages.
There were reports from Apple, Google, Microsoft and Mozilla about vulnerabilities in browsers/OSes (so, C++ stuff), and I think UB there hovered at between 50% and 80% of all security issues?
And the present discussion does seem overall conditioned on using a manually-memory-managed language :0)
You're right. My point was that there isn't necessarily a connection between UB-ness and danger, and I stuck together two separate arguments:
1. In the context of languages that can have OOB and/or UAF, OOB/UAF are very dangerous, but not necessarily because they're UB; they're dangerous because they cause memory corruption. I expect that OOB/UAF are just as dangerous in Assembly, even though they're not UB in Assembly. Conversely, other C/C++ UBs, like signed overflow, aren't nearly as dangerous.
2. Separately from that, I wanted to point out that there are plenty of super-dangerous weaknesses that aren't UB in any language. So some UBs are more dangerous than others and some are less dangerous than non-UB problems. You're right, though, that if more software were written with the possibility of OOB/UAF (whether they're UB or not in the particular language) they would be higher on the list, so the fact that other issues are higher now is not relevant to my point.
> In fact, take any C program with UB, compile it, and get a dangerous executable. Now disassemble the executable, and you get an equally dangerous program, yet it doesn't have any UB.
I'd put it like this:
Undefined behavior is a property of an abstract machine. When you write any high-level language with an optimizing compiler, you're writing code against that abstract machine.
The goal of an optimizing compiler for a high-level language is to be "semantics-preserving", such that whatever eventual assembly code that gets spit out at the end of the process guarantees certain behaviors about the runtime behavior of the program.
When you write high-level code that exhibits UB for a given abstract machine, what happens is that the compiler can no longer guarantee that the resulting assembly code is semantics-preserving.
> It’s the only kind of program that can be actually reasoned about.
No. That is one restriction that allows you to theoretically escape the halting problem, but not the only one. Total functional programming languages for example do it by restricting recursion to a weaker form.
Also, more generally, we can reason about plenty of programs written in entirely Turing complete languages/styles. People keep mistaking the halting problem as saying that we can never successfully do termination analysis on any program. We can, on many practical programs, including ones that do dynamic allocations.
Conversely, there are programs that use only a statically bounded amount of memory for which this analysis is entirely out of reach. For example, you can write one that checks the Collatz conjecture for the first 2^1000 integers that only needs about a page of memory.
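A sketch of what such a program looks like (using u64 and a small limit instead of 2^1000-sized integers so it stays self-contained; the point is that the memory use is a few machine words, yet whether the inner loop terminates for every input is exactly the open Collatz question):

```rust
// Checks the Collatz conjecture for 1..=limit using only a handful of
// machine words of state. Memory is statically bounded, but proving that
// the inner `while` terminates for every n is the open Collatz conjecture.
fn reaches_one(mut n: u64) -> bool {
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
    }
    true
}

fn main() {
    let limit: u64 = 1_000_000; // stand-in for the 2^1000 in the comment above
    for n in 1..=limit {
        assert!(reaches_one(n)); // trajectories below this limit fit in u64
    }
    println!("Collatz holds up to {}", limit);
}
```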
That's true in general, but people do use these hobbyist boards as an alternative to a manufacturer dev board when prototyping an actual product.
It's reasonably common in the home automation space. A fair few low volume (but still commercial nevertheless) products are built around ESP32 chips now because they started with ESPHome or NodeMCU. The biggest energy provider in the UK (Octopus) even have a smart meter interface built on the ESP32.
Avoiding cyclic dependencies is good, sure. And they do name specific problems that can happen in counterexample #1.
However, the reasoning as to why it can't be a general DAG and has to be restricted to a polytree is really tenuous. They basically just say counterexample #2 has the same issues with no real explanation. I don't think it does, it seems fine to me.
There's no particular reason an Auth system must be designed like counterexample #2. There are many ways to design that system and avoid cycles. You can leverage caching of role information - propagated via messages/bus, JWTs with roles baked in, IDPs you trust, etc. Hitting an Auth service for every request is chaotic and likely a source of issues.
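A rough sketch of that shape (the claim struct and role names are made up for illustration; a real setup would verify the token's signature with whatever JWT/IdP library you trust): the service authorizes from claims it already holds instead of calling out to an auth service per request.

```rust
// Illustration only: role data travels with the request (e.g. baked into a
// token by the IdP) so each service can authorize locally. Signature
// verification is elided; a real system would use a JWT library for that.
struct TokenClaims {
    subject: String,
    roles: Vec<String>, // roles granted by the identity provider
}

fn authorize(claims: &TokenClaims, required_role: &str) -> Result<(), String> {
    if claims.roles.iter().any(|r| r == required_role) {
        Ok(())
    } else {
        // No call to a central auth service: reject until we hold the
        // information needed to accept the request.
        Err(format!("{} lacks role {}", claims.subject, required_role))
    }
}

fn main() {
    let claims = TokenClaims {
        subject: "user-42".to_string(),
        roles: vec!["billing:read".to_string()],
    };
    println!("{:?}", authorize(&claims, "billing:read"));  // Ok(())
    println!("{:?}", authorize(&claims, "billing:write")); // Err(...)
}
```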
You don't necessarily need to hit the auth service on every request, but every service will ultimately depend on the auth service somewhere in its dependencies.
If you have two separate systems that depend on the auth system, and something depends on both, you have violated the polytree property.
You shouldn't depend on the auth service; just subscribe to its messages and/or trust your IDP's tokens.
This article, in my interpretation, is about hard dependencies, not soft. Each of your services should have their own view of "the world". If they aren't able to auth/auth a request, it's rejected - as it should be, until they have the required information to accept the request (ie. broadcasted role information and/or an acceptable jwt).
There are a million reasonable situations where this pattern could arise because you want to encapsulate a domain behind a microservice.
Take the simplest case of a CRM system: a service provides search/segmentation and CRUD on top of customer lists. I can think of a million ways other services could use that data.
The article doesn't make that claim. For example, the service n7 is used by multiple other nodes, namely n3 and n4. There is no cycle there, so it's okay.
But why is having multiple paths to a service wrong? The article just claims "it does bad things", without explaining how it does bad things and why it would be bad in that context.
Treating N4 as a service is fair. I think the article was leaning more toward the idea of N4 being a database, which is a legitimately bad idea with microservices (in fact defeating the point entirely). My takeaway is that if you're going to have a service that many other services depend on, you can do it, but you need to be highly aware of that brittleness. Your N4 service needs to be bulletproof. Netflix ran into this exact issue with their distributed cache.
Suppose we were critiquing an article that was advocating the health benefits of black coffee consumption, say, we might raise eyebrows or immediately close the tab without further comment if a claim was not backed up by any supporting evidence (e.g. some peer reviewed article with clinical trials or longitudinal study and statistical analysis).
Ideally, for this kind of theorising we could devise testable falsifiable hypotheses, run experiments controlling for confounding factors (challenging, given microservices are _attempting_ to solve joint technical-orgchart problems), and learn from experiments to see if the data supports or rejects our various hypotheses. I.e. something resembling the scientific method.
Alas, it is clearly cost prohibitive to attempt such experiments to experimentally test the impacts of proposed rules for constraining enterprise-scale microservice (or macroservice) topologies.
The last enterprise project I worked on was roughly adding one new orchestration macroservice atop the existing mass of production macroservices. The budget to get that one service into production might have been around $25m. Maybe double that to account for supporting changes that also needed to be made across various existing services. Maybe double it again for coordination overhead, reqs work, integrated testing.
In a similar environment, maybe it'd cost $1b-$10b to run an experiment comparing different strategies for microservice topologies (i.e. actually designing and building two different variants of the overall system and operating them both for 5 years, measuring enough organisational and technical metrics, then trying to see if we could learn anything...).
Anyone know of any results or data from something resembling a scientific method applied to this topic?
Came here to say the same thing. A general-purpose microservice that handles authentication or sends user notifications would be prohibited by this restriction.