Eventually, and it was massively controversial within academia. There were studies that showed it worked, but studies are positivist, and for many education academics, "positivist" is an insult. That's why it took literally generations and a political war for it to soak into academia at large, even after the science was uncontroversial.
> Construction in the USA is driven by capitalism. From my own observations, a big part of why we build less in recent times is the real estate market crash in 2008. We're still feeling the effects.
An efficient market would see an increase in supply to meet demand. This is exactly what happened in Minneapolis, Raleigh, and many cities in Texas, which made it comparatively easier to get construction permits (in some cases, particularly for multi-family housing).
> We have too much space relative to our population
If you're arguing that there's an abundance of space, this is true in many countries (and was certainly true prior to the Federal-Aid Highway Act or Levittown).
> We have too much space relative to our population and our cultural focus on individualism mean that people will always prefer single family homes
Why? There are plenty of locales in the US where this very much isn't the case.
> Those cant exist without cars and people with cars need to be able to commute.
If we're simply talking about the average daily commute for the average person, why? There are still plenty of cities in the US that have effective public transportation.
MB provided a cash infusion during the GFC in exchange for a roughly 10% stake in Tesla. Both companies saw it as a strategic partnership.
Tesla planned on sticking to luxury vehicles and selling electric powertrains to companies, like MB, that would handle everything else. MB, as far as I’m aware, thought that Tesla would prioritize this more than they did.
Tesla helped develop the MB B 250e, MB’s first BEV. At the same time, they developed and launched the Model S, which was far more expensive but a complete game changer.
Who knows what happened between then and when MB sold its stake in Tesla, but it’s easy to imagine that both companies became less enthusiastic about the partnership over time.
> I expect the EU will introduce hefty tariffs on Chinese EVs when the local automakers will lobby for it to protect local jobs, but hardcore protectionism long term only ensure your domestic industry lags behind technologically and falls behind globally.
Not sure what the EU’s strategy is here. The impact of Chinese EVs on the EU industry is going to be the same as the impact electrification would have had without them anyway.
Think about it this way: the EU automotive supply chain was perfectly healthy when US-owned manufacturers had a large chunk of market share. Throw on tariffs, force local production, and everything is good.
Chinese manufacturers will eventually do the same thing. Tesla and Geely have already done it, and there are plenty of recent examples of traditional manufacturers doing it as well (Suzuki comes to mind).
Even so, the EU automotive industry is still in bad shape: the level of production and employment across the entire supply chain simply isn’t needed for BEVs.
I’ve transitioned a lot of my work over to Julia, but R is still the most intuitive language I’ve used for scripting data collection, cleaning, aggregation, and analysis tasks.
The ecosystem is simply better. The folks who maintain CRAN do a fantastic job. I can’t remember the last time a library incompatibility led to a showstopper. This is a weekly occurrence in Python.
> I can’t remember the last time a library incompatibility led to a showstopper.
Oh, it’s very common unless you basically only use < 5 packages that are completely stable and no longer actively developed: packages break backwards compatibility all the time, in small and in big ways, and version pinning in R categorically does not work as well as in Python, despite all the issues with the latter.
People joke about the complex packaging ecosystem in Python, but at least there is such a thing; R has no equivalent. In Python, if you have a versioned lockfile, anybody can redeploy your code unless a system dependency broke. In R, even with an ‘renv’ lockfile, installing the correct package versions is a crapshoot, and will frequently fail.
Don’t get me wrong, ‘renv’ has made things much better (and ‘rig’ and PPM also help in small but important ways). But it’s still dire. At work we face these issues every other week on some code base.
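(For reference, the pin-and-restore workflow I mean is the standard one; a minimal sketch:)

```r
# Record the exact package versions this project uses:
renv::init()      # set up a project-local library
renv::snapshot()  # write the versions to renv.lock

# Later, on another machine or as another user:
renv::restore()   # reinstall the exact versions recorded in renv.lock
```

In principle `renv::restore()` plays the same role as installing from a Python lockfile; in practice, that last step is the one that regularly fails for us.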
I'd love to hear more about this, because from my perspective renv does seem to solve 95% of the challenges folks face in practice. I wonder what makes your situation different? What are we missing in renv?
Oh, I totally agree that ‘renv’ probably solves 95% of problems. But those pesky 5%…
I think that most problems are ultimately caused by the fact that R packages cannot really declare versioned dependencies (most packages only declare a `>=` dependency, even though they could also give upper bounds [1]; and that is woefully insufficient), and installing a package’s dependencies will (almost?) always install the latest versions, which may be incompatible with other packages. At any rate, ‘renv’ currently seems to ignore upper bounds: e.g. if I specify `Imports: dplyr (>= 0.8), dplyr (< 1.0)` it will blithely install v1.1.3.
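(For concreteness, the upper-bound syntax from [1] looks like this in a DESCRIPTION file; same example as above:)

```
Imports:
    dplyr (>= 0.8),
    dplyr (< 1.0)
```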
The single thing that causes the most issues for us at work is a binary package compilation issue: the `configure` file for ‘httpuv’ clashes with our environment configuration, which is based on Gentoo Prefix and environment modules. Even though the `configure` file doesn’t hard-code any paths, it consistently finds the wrong paths for some system dependencies (including autotools). According to the system administrators of our compute cluster this is a bug in ‘httpuv’ (I don’t understand the details, and the configuration files look superficially correct to me, but I haven’t tried debugging them in detail, due to their complexity). But even if it were fixed, the issue would obviously persist for ‘renv’ projects requiring old versions of ‘httpuv’.
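(Our workaround, roughly: override the auto-detection when building from source. A sketch only; every path below is a placeholder, and the exact variables depend on our Gentoo Prefix layout:)

```r
# Build httpuv from source, explicitly pointing its configure script at the
# right tools so it doesn't pick up the wrong ones. Placeholder paths!
install.packages(
  "httpuv",
  type = "source",
  configure.vars = c(httpuv = "AUTOCONF=/prefix/usr/bin/autoconf CC=/prefix/usr/bin/gcc")
)
```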
(We are in the process of introducing a shared ‘renv’ package cache; once that’s done, the particular issue with ‘httpuv’ will be alleviated, since we can manually add precompiled versions of ‘httpuv’, built using our workaround, to that cache.)
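(In case it helps anyone else: as far as I know, the shared cache is just a matter of pointing every user at the same directory, e.g. in a site-wide Renviron. Path illustrative:)

```
RENV_PATHS_CACHE=/shared/renv/cache
```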
Another issue is that ‘renv’ attempts to infer dependencies rather than having the user declare them explicitly (à la pyproject.toml dependencies), and this is inherently error-prone. I know this behaviour can be changed via `settings$snapshot.type("explicit")`, but I think some of the issues we’re having are exacerbated by this default, since `renv::status()` doesn’t show which dependencies are direct and which are transitive.
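(The one-liner, for reference; with this setting ‘renv’ snapshots only the packages declared in the project’s DESCRIPTION file, if I understand the docs correctly:)

```r
# Opt out of dependency inference for the current project:
renv::settings$snapshot.type("explicit")
```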
Lastly, we’ve had to deactivate ‘renv’ sandboxing since our default library is rather beefy and resides on NFS, and initialising the sandbox makes loading ‘renv’ projects prohibitively slow (every R start takes well over a minute). Of course this is really a configuration issue: as far as I’m concerned, the default R library should only include base and recommended packages. But in my experience it is incredibly common for shared compute environments to push lots of packages into the default library. :-(
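(For completeness, deactivating the sandbox is a single configuration flag, assuming I have the option name right; it can go in .Renviron:)

```
RENV_CONFIG_SANDBOX_ENABLED=FALSE
```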
---
[1] R-exts: “A package or ‘R’ can appear more than once in the ‘Depends’ field, for example to give upper and lower bounds on acceptable versions.”
Agree with this. I'm pretty agnostic on the pandas vs R stuff (I prefer base R to the tidyverse, and I like pandas, but I realize I'm old and probably not in the majority based on comments online). But many of the "R adherent" teams I talk to are not deploying software into varying environments so much as running reporting shops doing ad-hoc analytics.
There was a good discussion on Reddit about it. I don’t have the link but it should be easy to find.
Annual meetings are apparently organized by local groups who lobby/compete with one another. China has a very large SciFi readership and the group from Chengdu was very active when it came to lobbying and gathering votes. It wasn’t until later that people started to realize this may not have been the best decision, e.g. with visa applications and so on.
As far as the actual scandal, there’s also discussion about whether there was any actual government intervention or if this was mainly the result of self-censorship.
The risk of government intervention (read: penalties for everyone involved) is what drives self-censorship. There's no meaningful distinction between the two.
Look, I hate pulling out the fallacies list, but this is literally the dictionary definition of tu quoque. Nobody mentioned the US here, and it's not relevant to the discussion.
Being held without legal representation or fair trial is extremely uncommon in the West and normally reserved for military custody.
China is a state with a poor human rights record and, unlike the US, will systematically deny people basic human rights for entirely political reasons.
I can't even begin to understand your comment; obviously Edward Snowden fled to Russia, because if he had fled to a country with an extradition treaty with the US, he'd have been extradited. I can't see a white, English-speaking dude trying to defect to Cuba or China if given the choice.
The US is not perfect, but it's leaps and bounds better than the PRC when it comes to human rights. FFS, the latest party line from the CCP is that human rights are not universal, but rather a Western-centric idea that is being unjustly pushed on the world!
> As far as the actual scandal, there’s also discussion about whether there was any actual government intervention or if this was mainly the result of self-censorship.
China’s whole legal framework for censorship is based on self-censorship, so there would not be much difference.
> It is also, IMO, a lot of protein to consume a day. I personally have never done that level of protein intake, ever, but I know a few who have, its an insane (to me) amount of protein powder, chicken, and broccoli (or alternatives) you have to consume, every day. If the goal was sustainability, it's definitely not possible for average people.
Yeah, I don't get it either.
If you exercise regularly and you're going on a cut, 1.6 g of protein per kg of body weight is a good target for maintaining muscle mass. That said, in order to reach that point, you already have to be very good at regulating your food intake.
Otherwise, 0.9 g per kg is a much more realistic target for the 90%+ of people who are simply trying to be fit.
Research shows that 0.8 g per kg of body weight is the absolute minimum recommended for individuals with minimal physical activity. If you are attempting to become fit, research suggests you want at least 1.3 g per kg for moderate activity.
Honestly, 1.6 g per kg isn’t that much. It’s more than most people are used to, but it’s not like you need protein shakes and chicken breast only. It just requires you to not consume nearly all your calories as carbs and fat, which is a lot easier than it sounds.
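To put a rough, illustrative number on it: an 80 kg adult at 1.6 g/kg needs about 128 g of protein a day. Three eggs (roughly 18 g), 200 g of chicken breast (roughly 45 g), and a cup of Greek yogurt (roughly 20 g) already cover the bulk of that, and the incidental protein in bread, grains, and legumes makes up much of the rest. No shakes required.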