I think it's not so much bashing npm specifically as the node ecosystem it serves and depends on; at least in my mind it's difficult to separate node from npm. That said, for what it's trying to do (read a list of deps, resolve them against the registry, download and unpack), it seems to do a fine job.
My major complaint about npm is the choice to allow version range operators on dependency declarations. We know the node.js ecosystem places a high value on composability, so using lots of tiny modules which themselves depend on lots of tiny modules is the norm. This is a problem though because range operators get used liberally everywhere, so getting reproducible builds is like winning the lottery.
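Concretely, a typical dependencies block looks something like this (package names made up):

  "dependencies": {
    "left-pad-ish": "^1.2.0",
    "tiny-util": "~0.4.1",
    "pinned-dep": "1.0.3"
  }

A caret range like ^1.2.0 matches anything >=1.2.0 and <2.0.0, a tilde like ~0.4.1 matches >=0.4.1 and <0.5.0, and only the bare "1.0.3" is an exact pin, so what you actually get depends on whatever happens to be in the registry on the day you run npm install.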
There are other things I don't like about using npm: node_modules/ is big and has a lot of duplication (even with npm v3), it's pretty slow, historically it has been unstable, it's still crap on Windows, etc. - but for someone who has 'ensures reproducible builds' as part of their job description, the way its modules get versioned is its worst feature.
For reproducible builds (or at least 'to get the same versions again') you should be using 'npm shrinkwrap'. (Of course there's probably more you should do to get truly reproducible builds, but that goes for any package manager.)
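Roughly, the flow is (a sketch, not the whole reproducibility story):

  npm install       # resolve the ranges against the registry as of today
  npm shrinkwrap    # write npm-shrinkwrap.json with the exact resolved versions
  git add npm-shrinkwrap.json   # commit it; subsequent installs honour those pins

npm-shrinkwrap.json records the exact version of everything in node_modules, nested deps included, and npm install will use it when it's present.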
The range operators are important; without them you'd never be able to resolve two packages that want a similar-versioned sub-dependency, e.g. jquery 1.12, because those two packages would have declared slightly different patch versions (1.12.1 and 1.12.3) depending on when they were published. That would mean you'd always end up with duplicated dependencies.
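A rough sketch of what that looks like in practice (hypothetical layout, npm 3):

  # package-a pins jquery 1.12.1, package-b pins jquery 1.12.3
  node_modules/jquery                          -> 1.12.1 (hoisted)
  node_modules/package-b/node_modules/jquery   -> 1.12.3 (nested duplicate)

  # both declare "jquery": "^1.12.0" instead
  node_modules/jquery                          -> 1.12.3 (one copy satisfies both)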
I'd argue 'node_modules is big' is not a fault of npm. If the package or app you're trying to install generates a large node_modules dir, that is something you should take up with the package maintainer. See buble vs babel - buble has a way smaller dep tree.
npm is only slow in the ways that all other package managers are, when installing large dependency trees or native dependencies (like libSass) and it is way faster than say pip and rubygems in this regard. When I 'pip install -r requirements.txt' at work, I literally go and make a coffee.
Also, I've never experienced any instability, though I may have been lucky. It has certainly been very stable for the last year or so that I've been working with it a lot. Could you elaborate on why it is crap on Windows? I thought all the major issues (e.g. the deep nesting problem) were now fixed ...
The main problems we ran into with shrinkwrap were:
It shrinkwraps everything in your current node_modules directory, including platform-specific dependencies that may not work on other platforms; those now cause npm install to fail outright instead of just printing a message about it (concrete example after the workflow below).
So our current workflow has to be:
1. Update package.json
2. rm -rf node_modules/
3. npm install --production # This doesn't include any of those pesky platform specific packages
4. npm shrinkwrap
5. npm install # Get the dev dependencies
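To make the platform-specific part concrete (hypothetical chain, but this is the shape of it): something in our tree pulls in a macOS-only module along the lines of fsevents, usually declared as an optional dependency:

  "optionalDependencies": {
    "fsevents": "^1.0.0"
  }

A plain npm install on Linux just skips it with a warning, but once it's been recorded in npm-shrinkwrap.json (because it was sitting in node_modules on the Mac that ran npm shrinkwrap), npm install on another platform treats it as a hard requirement and fails.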
As far as the other comments about npm, I just generally have more problems with it than rubygems/bundler and the general OS package managers.
Shrinkwrap is ridiculous. I'm expected to go look at every resolved dependency and individually decide whether or not to update it? No thanks; one app at my workplace defines ~50 top-level dependencies, but this balloons to almost 1300 - and this is with npm v3 - after npm install. Ain't nobody got time for that.
Deep nesting is not 'solved'; it just doesn't happen 100% of the time anymore. If you have conflicts, you still get deep trees. I suppose range operators help with this a little, but looking at what gets installed it doesn't seem to help that much; I still have duplicated dependencies.
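For instance (made-up layout), when two top-level deps need incompatible versions of the same package, npm 3 can only hoist one of them:

  node_modules/
    lodash/                  # 4.x, hoisted, wanted by dep-a
    dep-b/
      node_modules/
        lodash/              # 3.x, nested because it conflicts with the hoisted 4.x

Multiply that across a few hundred transitive deps and you still end up with plenty of nesting and duplication.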
I was mentally comparing npm to tools like maven, ivy and nuget, all of which are faster but also not interpreted. Not a fair comparison I guess.
> Shrinkwrap is ridiculous. I'm expected to go look at every resolved dependency and individually decide whether or not to update it?
Not sure you're aware of the suggested flow (see here [1]), but it isn't ridiculous. Use 'npm outdated' to see which packages are out of date and 'npm update --save' to update a dep (and update the shrinkwrap file).
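For what it's worth, the loop looks roughly like this (output reproduced from memory, so treat it as approximate):

  $ npm outdated
  Package  Current  Wanted  Latest  Location
  lodash   4.17.3   4.17.4  4.17.4  my-app

  $ npm update lodash --save   # move within the declared range and update package.json
  $ npm shrinkwrap             # refresh npm-shrinkwrap.json

So you're reviewing a short list of outdated top-level packages, not hand-auditing 1300 entries.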
Keeping track of stale sub-dependencies is a problem in and of itself, but again, that exists with any package manager (because you will always need to pin dependencies before you go to prod, right?). So that 'lockfile' will get out of date pretty fast. Node at least has solutions for this that other communities don't [2] (I haven't tried this service).
Even patenting a formula's application is becoming more restricted. In Australia and the US, the two relevant tests seem to be whether the formula does something a human can't, and whether the formula is used to 'improve' the operations of a computer or machine (e.g. if it made your hard drive faster or controlled a robotic arm).
I had to look into this regarding an algorithm we're building and whether it's patentable. If your formula is a mere scheme or method implemented in a computer, it's likely to be rejected. The following links add detail:
Is there a meaningful performance difference between ZFS in FreeBSD or Illumos and ZoL?
Edit: this is a legitimate question. I know that previously the Linux-related ZFS efforts used FUSE, but ZoL is native, so I'd assume performance should be roughly equivalent between Linux and FreeBSD.
Better observability with DTrace, mdb, iostat, and vmstat on illumos-based systems, for sure. Also, simple logic dictates that if something is made on something, for that certain something, it's going to run best on that something. Linux has ZFS, but it's grafted on, and the illumos POSIX layer is emulated in that sense. Further, Linux's version of OpenZFS will always lag behind fixes and features in the illumos-based systems; even FreeBSD usually gets newer versions of OpenZFS sooner than ZFS on Linux does. Linux is just "the last hole on the porting flute".
Also, experience tells me that illumos- and FreeBSD-based systems will always perform faster than Linux with regard to ZFS, but I'd have to publish a full line of benchmarks, and I bet even those would get vehemently disputed because Linux is all the rage right now, so that's a lost cause: you have to try it for yourself and make up your own mind.
I think this is overstating the risk a little bit.
Looking at the video linked elsewhere, this was run in a bedroom on the second or even third floor of a single-family residence. He mentions not being able to detect x-rays more than 35ft from the device. I think a neighbor would have to be trying to get in range to be affected.
It has data from 1994 to 2014, including this plus several other statistics. In particular, it looks like they are tracking on the order of trillions of miles driven per year, and the national rate works out to roughly one fatality per hundred million vehicle miles, so you're right: making any kind of statement after only 100 million miles (i.e. about one expected fatality) is more shameful marketing than anything else.
Edit: this is US-only, while Tesla is comparing against worldwide numbers. Clearly more miles are driven worldwide, and the worldwide rate of deaths per 100m miles is probably higher too.
Not only are the statistics a big issue, but the apples-to-oranges comparison is what really irks me. The NHTSA average is over all car models on the road. Even in 2014, a non-negligible fraction of those are cars without even decent airbags and crumple zones.
The proper comparison would be with non-Autopilot Tesla miles. But that comparison makes Autopilot look bad.
Unless you're OK with breaking compatibility with all existing Windows apps - most of which you have no control over - rewriting the kernel as a POSIX thing is going to look a lot like 'bash on windows' in reverse. You'd have to have some kind of shim layer to translate all the old system calls, so most everyone's existing software is going to be slower and there will be bugs, etc. Including your own, since Windows is a kernel, a ton of drivers, a userland (including basic runtimes), a windowing system, a basic set of productivity tools and games, etc.
I think this would be a total disaster that killed Windows, personally.
I wonder which companies are really still designing new backends for Windows these days. Technically you could run many generic things there (databases or the JVM), but what would you actually gain?
Likely, if you have to use Windows, you're left in some legacy niche, looking backwards rather than forwards.
This doesn't make sense to me. The entire point of Roth is that you pay tax up front, and then the retirement proceeds are tax-free (assuming you withdraw after proper age, etc.). To then go and tax these withdrawals is basically neutering the entire Roth deal - I think there would be some pretty major political backlash on that.
What is the percentage of Americans with assets in a Roth IRA compared to the number of Americans who would rather see someone else pay for a tax increase? Politically, altering the deal on Roth IRAs could work out quite well.
I don't know the answer to that, or about Roth 401(k) plans, but I think there are a lot of people overall with one or the other. I think that threatening either would have an effect similar to threats on Social Security, without the justification of it being insolvent.
IMO, at worst, Roth plans could get phased out for new contributions with existing deposits and the tax-free withdrawals honored.
I don't know if it's universal, but it's at least extremely common for there to be a psych eval prior to receiving a badge. Not sure why the parent thinks otherwise.
The operative word there is _good_. Remember that we're talking about multiple systems that are run differently all across the country. Some systems are better than others. It's not a one-size-fits-all comparison.