Vigorous activity is defined as something like >75-80% of max heart rate, or >6.0 METs, not as an absolute, all-out sprint. It's actually quite far from what you might expect: for a 40-year-old, using the common 220 − age estimate of max heart rate, vigorous starts around 135-144 bpm.
The 2018 Physical Activity Guidelines for Americans include consensus recommendations along with the supporting evidence and some good graphs visualizing the "bang for buck".
For those curious, the (weekly) recommendations are: twice-weekly resistance training, plus 150-300 minutes of moderate-intensity aerobic activity or 75-150 minutes of vigorous aerobic activity.
If you want a program that just gives you a starting point, I highly recommend Barbell Medicine's (free) "Beginner Prescription".
It feels ridiculous not to mention car dependence and the things that enabled it: restrictive zoning, parking minimums, the car lobby.
In the last 50 years, the US has bulldozed dense, mixed-use housing that enabled community and tight-knit neighborhoods. More economically/socially viable housing (read: an apartment on top of a business) has literally been banned in much of the US. Ensuring that there's a large plot of asphalt to house personal vehicles of ever-increasing size is baked into zoning laws (though some cities have finally banned parking minimums). Suburbia sprawls, literally requiring most of the country to own a car.
I would love to see some data on this, but my intuition is that everyone is physically farther apart as a result, which weakens their general connection, makes them less likely to party together, and makes it harder to get to and from a party in the first place.
There are other plausible side effects too, like less savings due to the cost of owning a car (I've seen estimates of the US average exceeding $10k/yr), or expensive housing exacerbated by all of the above: less space for housing due to roads and parking (with costs rising directly because developers must include parking), and rising taxes to finance ever more infrastructure. Suburban sprawl means more roads, pipes, and electrical lines while contributing significantly less economic value (Strong Towns has some great graphics on how much dense urban areas subsidize their sprawling, single-family-home-filled counterparts).
It feels ridiculous to bring up car dependence in an article about 1980-2020 social trends, when the US was car-dependent the entire time and the big drop came in the 2010s in particular.
It’s car dependence, but the impacts were delayed because people used to just drink and drive. Now that’s rightfully seen as unacceptable, but we are still left with car dependence. So people just don’t leave home now.
It was totally unacceptable to drink and drive in the 2000s, and the sharp decline didn't start until right after. You'd also find a similar decline in socializing among non-driving-age children.
It was totally unacceptable in the 2000s, but there still remained a lot of "...but I can probably get away with it". That has declined in the interim.
The overall "going out with friends" survey result, not just partying, dropped from like 76% to 65% within a few years. There's no way that was from people suddenly not drinking and driving.
Suburban sprawl isn't as extensive outside the top 5-10 cities. Even in "growing" places like Columbus, OH in the Midwest, you can cross the entire built environment, cornfield to cornfield, in probably 25 miles and about as many minutes on a freeway network that is entirely uncongested because it's overbuilt for the population (unlike in those top 5 places, where it may be underbuilt). By and large, that is how the bulk of the country looks and operates. The idea that you'd drive an hour and still be in the same metro region is a big exception; people living in that exception assume it must be the norm, but it really isn't.
So 75% lives outside of them. Yeah, I'd say the majority lives this way, and living otherwise is the exception for the remaining 25%. Even within those top 10, some are more like what I describe. There are definitely parts of those metros where the "mile a minute" travel estimate from uncongested highways applies; it's certainly true for Philadelphia outside the ~50 sq mi of the gridded central city. In places like Houston, the average home is only ~$250k, pretty much at parity with Midwest prices.
According to the US Census Bureau, the median house in the USA was built around 1980.
I live in a 1960 house of the type that is supposedly illegal now, although every house in my suburb built since then has been subject to building codes and planning regulations forcing walkability.
Cars are forced by specialization. I had a 20-mile each-way commute to an absolutely horrible neighborhood but a very high-paying job. I'm within walking distance of some minimum-wage manual labor jobs, but I can't afford to work those jobs and live here, and a car is incredibly cheap compared to my higher income.
No one can explain why an architectural movement peaking in the 1950s-1970s had no effect on socialization for decades, until the smartphone era. Multiple entire generations lived in "soulless, car-filled suburbs" and socialized wildly according to the data in the article... until smartphones...
There's an entire mythology built around the idea that every new problem began, coincidentally, with the construction of suburbs in the 1950s, even if the problem didn't appear during the first 75 years of suburban living.
But that hasn't changed much between the '80s and now. It was bad then and it is bad now, so I don't see it being a significant factor in the change in socialization on that timescale.
Curious why you chose C++?
Were there aspects of other languages/ecosystems like Rust that were lacking?
Would choosing Rust be advantageous for blockchains that natively support it (like Solana)?
To be clear: I don't mean to imply you should have done it any other way. I'm interested mainly in gaps in existing ecosystems and whether popular suggestions to "deprecate C++ for memory-safe languages" (like the one made by the Azure CTO years ago) are realistic.
Rust is the future of systems programming and will always be for the foreseeable future. The memory issue will mostly be addressed as needed (see John Carmack yesterday [1]), and the C++ ecosystem advantage (a broad sense of how problems in data structures, storage, OS, networking, etc. have been solved) will be very hard for newer programming languages to overcome. I find it ironic how modern C++ folks just keep chugging along releasing products while Rust folks are generally haranguing everyone about "memory safety" and leaving half-finished projects (turns out writing Rust code is more fun than reading someone else's, who would have guessed).
> The memory issue will mostly be addressed as needed
I have no allegiance to either language ecosystem, but I think it's an overly optimistic take to consider memory safety a solved problem based on a tweet about Fil-C, especially considering "the performance cost is not negligible" (about 2x, according to a quick search?).
A 2x performance drop for memory-safety-critical sections vs. a Rust rewrite taking years or decades? Not even a contest. Now, if that drop were 10x, maybe, but at 2x it's a no-brainer to continue with C++. I'm not certain Fil-C works in all cases, but it's an example of how the ecosystem will evolve to solve this issue rather than migrate to Rust.
What would you consider to be a non-memory-safety-critical section? I tried to answer this and ended up in a chain of "but wait, actually memory issues here would be similarly bad...", mainly because UB and friends tend to propagate and make local problems very non-local.
Because we are in "unsafe" territory. And Rust doesn't even have a defined memory model; Rust is a little immature. We do have some other services written in Rust, though.
> With POSIX semaphores, mutexes, and shared pointers, it is very rare to hit upon a memory issue in modern C++.
There is a mountain of evidence (two examples follow) that this is not true. Roughly two-thirds of serious security bugs in large C++ products are still memory-safety violations.
I write high-performance backends in C++. They work approximately as described in the article: all data is in RAM, in structures specialized for access patterns (sketched below). Works like a charm and runs 24x7 without a trace of a problem.
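To illustrate (a toy sketch under assumed names, not my actual production code), "structures specialized for access patterns" means something like a struct-of-arrays layout that keeps the hot field contiguous:

    #include <cstdint>
    #include <vector>

    // Hypothetical illustration: struct-of-arrays keeps the frequently
    // scanned field dense and contiguous in RAM, so the hot loop stays
    // cache-friendly instead of striding over unrelated fields.
    struct Orders {
        std::vector<std::uint64_t> id;      // rarely touched
        std::vector<double>        amount;  // scanned on every request

        double total() const {
            double sum = 0.0;
            for (double a : amount) sum += a;  // one dense array, no pointer chasing
            return sum;
        }
    };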
I've never had a single complaint from my customers. Well, I do have logic bugs during development, but those are found and eliminated through testing. And every new backend I build is based on already battle-tested C++ foundation code. Why FFS would I ever want to change that (rewrite in Rust)? As a language, Rust has way fewer of the features I'm accustomed to using, and Rust's safety provides me no business benefit. It's quite the opposite: I would just lose time and money and still have those same logic bugs to iron out.
How many other programmers have you trained up to that level of results? Can you get them to work on Windows, Chrome, etc. so users stop getting exposed to bugs which are common in C-like languages but not memory-safe languages?
I do not train programmers; I hire subcontractors when I need help. They're all at my level or better, easy to find among Eastern Europeans, and they don't cost much. Actually cheaper than some mediocre programmer from North America who can only program in a single language/framework and has no clue about architecture or how various things work together in general.
Any reasonable meaning of “proper” would include not causing memory issues, so you’ve just defined away any problems. Note that this is substantially different from not having any problems.
The great lesson in software security of the past few decades is that you can’t just document “proper usage,” declare all other usage to be the programmer’s fault, and achieve anything close to secure software. You must have systems that either disallow unsafe constructs (e.g. Rust preventing references from escaping at compile time) or can handle “improper usage” without allowing it to become a security vulnerability (e.g. sandboxing).
Correctly use your concurrency primitives and you won’t have thread safety bugs, hooray! And when was the last time you found a bug in C-family code caused by someone who didn’t correctly use concurrency primitives because the programmer incorrectly believed that a certain piece of mutable data would only be accessed on a single thread? I’ll give you my answer: it was yesterday. Quite likely the only reason it’s not today is because I have the day off.
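For concreteness, here's a minimal sketch of that exact failure mode (hypothetical names, but the shape is representative): code written under a single-threaded assumption, later called from two threads. It compiles cleanly and races silently.

    #include <thread>

    // Hypothetical illustration: Stats was written assuming one thread.
    struct Stats {
        long long total = 0;
        void add(int x) { total += x; }  // no synchronization on purpose:
                                         // "only one thread uses this"
    };

    int main() {
        Stats stats;
        // Later, someone parallelizes the hot loop without revisiting Stats.
        std::thread t1([&] { for (int i = 0; i < 1000000; ++i) stats.add(1); });
        std::thread t2([&] { for (int i = 0; i < 1000000; ++i) stats.add(1); });
        t1.join();
        t2.join();
        // Data race, i.e. undefined behavior: the total usually comes up
        // short of 2000000, and no compiler warning is guaranteed to catch it.
        return 0;
    }

The equivalent sharing in Rust simply doesn't compile without a Mutex or atomics; that's the whole point of the comparison.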
> And when was the last time you found a bug in C-family code caused by someone who didn’t correctly use concurrency primitives because the programmer incorrectly believed that a certain piece of mutable data would only be accessed on a single thread? I’ll give you my answer: it was yesterday.
You answered my question. My original argument was that using concurrency primitives "properly" in C++ prevents memory issues, so Rust isn't strictly necessary.
I have nothing against Rust. I will use it when they freeze the language, publish an ISO spec, and multiple compilers are available.
> My original argument was using concurrency primitives "properly" in C++ prevents memory issues
Yes, I know, I addressed that. It's true by definition, and a useless statement. Improper usage will happen. If improper usage results in security vulnerabilities, that means you will have security vulnerabilities.
Note that I say this as someone who makes a very good living writing C++ and has only dabbled in Rust. I like C++ and it can be a good tool, but we must be clear-eyed about its downsides. "It's safe if you write correct code" is a longer way to say "it's unsafe."
You're right: if you use the concurrency primitives properly, you won't have data races. But the issue is that people don't always use them properly, and there is ample evidence (posted in this thread) that this happens all the time.
With this argument, the response becomes "well, they didn't use the primitives properly, so the problem is them," which shifts the blame onto the developer and away from tools that are too easy to silently misuse.
It also ignores memory-safety issues that aren't data races, like buffer overflows, use-after-free (UAF), etc.
Proper usage is fine. The problem is that it's easy to make mistakes: the compiler won't tell you, you may not notice until it's too late in production, and it will take forever to debug.
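As a concrete (hypothetical) example of how quiet the mistake can be: one accessor takes the lock, another forgets, and the compiler has no opinion.

    #include <mutex>
    #include <string>

    // Hypothetical illustration: one code path locks, the other forgets.
    class Config {
        std::mutex m_;
        std::string endpoint_;
    public:
        void set_endpoint(std::string v) {
            std::lock_guard<std::mutex> lock(m_);
            endpoint_ = std::move(v);
        }
        // Bug: reads endpoint_ without taking m_. Compiles cleanly, passes
        // single-threaded tests, and races in production with set_endpoint().
        std::string endpoint() const { return endpoint_; }
    };

In Rust the string would live inside the Mutex itself, so the unlocked read wouldn't compile in the first place.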
Here are two: CVE-2021-33574, CVE-2023-6705. The former had to be fixed in glibc, illustrating that proper usage of POSIX concurrency primitives does nothing when the rest of the ecosystem is a minefield of memory-safety issues. There are some good citations on page 6 of this NSA Software Memory Safety overview, in case you're interested: https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI_SOFTWARE_MEMORY_SAFETY.PDF
Edit: to be less glib, this is like saying “our shred-o-matic is perfectly safe due to its robust and thoroughly tested off switch.” An off switch is essential but not nearly enough. It only provides acceptable safety if the operator is perfect, and people are not. You need guards and safety interlocks that ensure, for example, that the machine can’t be turned on while Bob is inside lubricating the bearings.
Mutexes and smart pointers are important constructs but they don’t provide safety. Safety isn’t the presence of safe constructs, but the absence of unsafe ones. Smart pointers don’t save you when you manage to escape a reference beyond the lifetime of the object because C++ encourages passing parameters by reference all over the place. Mutexes and semaphores don’t save you from failing to realize that some shared state can be mutated on two threads simultaneously. And none of this saves you from indexing off the end of a vector.
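A minimal sketch of that reference-escape case (hypothetical names, but the shape shows up constantly): the object is owned by a shared_ptr, and a reference outlives it anyway.

    #include <memory>
    #include <string>

    // Hypothetical illustration: shared_ptr owns the object, but a plain
    // reference escapes the managed lifetime anyway.
    const std::string& name_of(const std::shared_ptr<std::string>& p) {
        return *p;  // fine so far: a reference into the managed object
    }

    int main() {
        auto p = std::make_shared<std::string>("widget");
        const std::string& n = name_of(p);
        p.reset();              // last owner released; the string is destroyed
        return (int)n.size();   // use-after-free: n dangles, smart pointer or not
    }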
You can probably pick a subset of C++ that lets you write reasonably safe code. But the presence of semaphores, mutexes, and shared pointers isn’t what does it.
I don’t think so. The fact that someone with extensive experience thinks modern C++ is safe because it has semaphores and mutexes and smart pointers is legitimately scary. It’s not merely wrong, it reflects a fundamental misunderstanding of what the problem even is. It’s like an engineer designing airliners saying that they can be perfectly safe without any redundant systems because they have good wheels. That should have you backing away slowly while asking which manufacturer they work for.
I think their statement amounts to something like: a subset of modern C++ and certain feature-usage patterns can be reasonably safe, and I'm OK with that. Nothing is ever really safe, of course. One should weigh the trade-offs of quality/safety vs. cost and draw their own conclusions about where to lean harder and where enough is enough.
There's an argument to be made that you can write safe C++ by using the right subset of the modern language. It might even be a decent argument. But that's not the argument that was made here. They mentioned two things that have only the most tangential connection to security and that aren't even part of C++, plus one C++ feature that solves exactly one problem.
TL;DR: This was software that ran on a spacecraft. Specifically designed to be safe, formally analyzed, and tested out the wazoo, but nonetheless failed in flight because someone did an end-run around the safe constructs to get something to work, which ended up producing a race condition.
I haven't seen anybody mention NvChad, which is a popular pre-configured neovim setup with lots of documentation (and a community of support). You're still free to customize, but it saves a TON of time getting the foundational editor features in place.
The author mentions switching from editor to terminal often; NvChad has built-in terminal integration, so you can toggle floating/vertical/horizontal terminals (whose contents persist when closed) with a simple keybind.
Highly recommend Barbell Medicine if you're interested in evidence-based strength training info.
The best workout routine is one that you enjoy and adhere to. Progressive overload (which is NOT just adding weight) is important for progression too. But you probably don't need to go as hard as you think - stopping somewhere between 2 and 5 repetitions short of failure (RPE 5-8) is fine, assuming you're in it for general fitness/health (and even if you aren't).