Contrarian here. I've fallen in love with Firefox's AI Chatbot sidebar. It's extremely helpful to have Gemini immediately available to help with translating and summarizing text.
Eventually, and it was massively controversial within academia. There were studies that showed it worked, but studies are positivist, and for many education academics, "positivist" is an insult. That's why it took literally generations and a political war for it to sink into academia at large after the science was uncontroversial.
> Construction in the USA is driven by capitalism. From my own observations, a big part of why we build less in recent times is the real estate market crash in 2008. We're still feeling the effects.
An efficient market would see an increase in supply to meet demand. This is exactly what happened in Minneapolis, Raleigh, and many cities in Texas, which made it comparatively easier to get construction permits (in some cases, particularly for multi-family housing).
> We have too much space relative to our population
If you're arguing that there's an abundance of space, this is true in many countries (and was certainly true prior to the Federal-Aid Highway Act or Levittown).
> We have too much space relative to our population and our cultural focus on individualism mean that people will always prefer single family homes
Why? There are plenty of locales in the US where this very much isn't the case.
> Those cant exist without cars and people with cars need to be able to commute.
If we're simply talking about the average daily commute for the average person, why? There are still plenty of cities in the US that have effective public transportation.
MB provided a cash infusion during the GFC in exchange for 10% of Tesla. Both companies saw it as a strategic partnership.
Tesla planned on sticking to luxury vehicles and selling electric power trains to companies, like MB, that would handle everything else. MB, as far as I’m aware, thought that Tesla would prioritize this more than they did.
Tesla helped develop the MB B 250e, MB’s first BEV. At the same time, they developed and launched the Model S, which was far more expensive but a complete game changer.
Who knows what happened between this and when MB sold their stake in Tesla, but it’s easy to imagine that both companies became less enthusiastic about their partnership over time.
> I expect the EU will introduce hefty tariffs on Chinese EVs when the local automakers will lobby for it to protect local jobs, but hardcore protectionism long term only ensure your domestic industry lags behind technologically and falls behind globally.
Not sure what the EU’s strategy is here. The impact of Chinese EVs is going to be much the same as the impact electrification would have had without them in the first place.
Think about it this way. The EU automotive supply chain was perfectly healthy when US-owned manufacturers had a large chunk of market share. Throw on the tariffs, force local production, and everything is good.
Chinese manufacturers will eventually do the same thing. Tesla and Geely have already done it. There are plenty of recent examples of traditional manufacturers doing it as well (Suzuki comes to mind).
The EU automotive industry is still in bad shape. The level of production and employment across the entire supply chain simply isn’t needed for BEVs.
I’ve transitioned a lot of my work over to Julia, but R is still the most intuitive language I’ve used for scripting data collection, cleaning, aggregation, and analysis tasks.
The ecosystem is simply better. The folks who maintain CRAN do a fantastic job. I can’t remember the last time a library incompatibility led to a show stopper. This is a weekly occurrence in Python.
> I can’t remember the last time a library incompatibility led to a show stopper.
Oh, it’s very common unless you basically only use < 5 packages that are completely stable and no longer actively developed: packages break backwards compatibility all the time, in small and in big ways, and version pinning in R categorically does not work as well as in Python, despite all the issues with the latter.

People joke about the complex packaging ecosystem in Python, but at least there is such a thing. R has no equivalent. In Python, if you have a versioned lockfile, anybody can redeploy your code unless a system dependency broke. In R, even with an ‘renv’ lockfile, installing the correct package versions is a crapshoot, and will frequently fail.

Don’t get me wrong, ‘renv’ has made things much better (and ‘rig’ and PPM also help in small but important ways). But it’s still dire. At work we are facing these issues every other week on some code base.
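For anyone who hasn’t used it, the renv side of that comparison is roughly this workflow (a minimal sketch; the restore step is the one that, in my experience, frequently doesn’t reproduce the environment):

```
# Set up a project-local library and lockfile for the current project
renv::init()

# Record the exact package versions currently in use into renv.lock
renv::snapshot()

# Later, on another machine or in CI, try to rebuild that environment
renv::restore()
```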
I'd love to hear more about this because, from my perspective, renv does seem to solve 95% of the challenges that folks face in practice. I wonder what makes your situation different? What are we missing in renv?
Oh, I totally agree that ‘renv’ probably solves 95% of problems. But those pesky 5%…
I think that most problems are ultimately caused by the fact that R packages cannot really declare versioned dependencies (most packages only declare a `>=` dependency, even though they could also give upper bounds [1]; and that is woefully insufficient), and installing a package’s dependencies will (almost?) always install the latest versions, which may be incompatible with other packages. But at any rate ‘renv’ currently seems to ignore upper bounds: e.g. if I specify `Imports: dplyr (>= 0.8), dplyr (< 1.0)` it will blithely install v1.1.3.
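To make that concrete, here is a small sketch (the dplyr version numbers are just the example from above, and the pinning line is only one common workaround, not the only one):

```
# A project DESCRIPTION can say:  Imports: dplyr (>= 0.8), dplyr (< 1.0)
# but installing dependencies still pulls the latest release from CRAN:
install.packages("dplyr")
packageVersion("dplyr")        # e.g. '1.1.3', despite the '< 1.0' upper bound

# One workaround is to pin a specific version explicitly, e.g. via renv:
renv::install("dplyr@0.8.5")   # fetches that exact version from the CRAN archive
```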
The single thing that causes the most issues for us at work is a binary package compilation issue: the `configure` file for ‘httpuv’ clashes with our environment configuration, which is based on Gentoo Prefix and environment modules. Even though the `configure` file doesn’t hard-code any paths, it consistently finds the wrong paths for some system dependencies (including autotools). According to the system administrators of our compute cluster this is a bug in ‘httpuv’ (I don’t understand the details, and the configuration files look superficially correct to me, but I haven’t tried debugging them in detail, due to their complexity). But even if it were fixed, the issue would obviously persist for ‘renv’ projects requiring old versions.
(We are in the process of introducing a shared ‘renv’ package cache; once that’s done, the particular issue with ‘httpuv’ will be alleviated, since we can manually add precompiled versions of ‘httpuv’, built using our workaround, to that cache.)
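For anyone setting up something similar, the shared cache is just a path that all projects point at; the path below is obviously illustrative:

```
# Point every project at a shared package cache, e.g. via a site-wide
# Renviron entry:  RENV_PATHS_CACHE=/shared/renv/cache
Sys.setenv(RENV_PATHS_CACHE = "/shared/renv/cache")

# Confirm which cache directory renv will link packages from
renv::paths$cache()
```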
Another issue is that ‘renv’ attempts to infer dependencies rather than having the user declare them explicitly (a la pyproject.toml dependencies), and this is inherently error-prone. I know this behaviour can be changed via `settings$snapshot.type("explicit")` but I think some of the issues we’re having are exacerbated by this default, since `renv::status()` doesn’t show which ones are direct and which are transitive dependencies.
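For reference, switching to explicit snapshots looks roughly like this (a sketch; the example Imports line is just illustrative):

```
# Switch this project from inferred to explicitly declared dependencies
renv::settings$snapshot.type("explicit")

# Direct dependencies are then read from the project's DESCRIPTION file,
# e.g. a line like:  Imports: dplyr, httpuv
# and renv::snapshot() records those (plus their recursive dependencies).
renv::snapshot()
```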
Lastly, we’ve had to deactivate ‘renv’ sandboxing since our default library is rather beefy and resides on NFS, and initialising the sandbox makes loading ‘renv’ projects prohibitively slow: every R start takes well over a minute. Of course this is really a configuration issue: as far as I am concerned, the default R library should only include base and recommended packages. But in my experience it is incredibly common for shared compute environments to push lots of packages into the default library. :-(
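In case it helps anyone else with a large NFS-hosted default library, this is the switch we flipped:

```
# Disable renv's library sandbox (can also be set via the
# RENV_CONFIG_SANDBOX_ENABLED=FALSE environment variable)
options(renv.config.sandbox.enabled = FALSE)
```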
---
[1] R-exts: “A package or ‘R’ can appear more than once in the ‘Depends’ field, for example to give upper and lower bounds on acceptable versions.”
Agree with this. I am pretty agnostic on the pandas vs R debate (I prefer base R to tidyverse, and I like pandas, but I realize I am old and probably not in the majority based on comments online). But many of the "R adherent" teams I talk to are not deploying software in varying environments so much as they are reporting shops doing ad-hoc analytics.