How is it any worse than using names from a namespace controlled by "npmjs.com"? If you're concerned about build stability, you should be caching your deps on your build servers anyway.
I've never used npm or developed any JavaScript before, but it sounds equally horrible.
Not decoupling the source of a package (i.e., the location of the repository, whether remote or local) from its usage in the language is a terrible idea.
from foo import bar
# foo should not be a URL. It should just be an identifier.
# The location of the library should not be tangled up in the codebase.
Are we going to search-and-replace URL strings across the entire codebase because the source changed? Can someone tell me what the upside of this approach is? I cannot see a single one, but I can see many downsides.
The whole idea of a URL is that it's a standardized way of identifying resources in a universally unique fashion: if I call my utility library "utils", I'm vulnerable to name collisions when my code runs in a context that puts someone else's "utils" module ahead of mine on the search path. If my utility module is https://fwoar.co/utils, then, as long as I control that domain, the import is unambiguous (especially if it includes a version or similar).
The issue you bring up can be solved in several ways: for example, XML solves it by letting you define local aliases for a namespace in the document being processed. npm already sort of uses package.json for this purpose; the main difference is that npmjs.com hosts a centralized registry of module names, rather than embedding the mapping of local aliases to URLs in the package.json.
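To make that concrete, here's a rough sketch in Deno-style TypeScript; fwoar.co/utils is just the example domain from above, and the version tag and exported name are hypothetical:

// Sketch only: the URLs and the `pad` export are illustrative.
import { pad } from "https://fwoar.co/utils@1.0.0/mod.ts";           // my "utils"
import { pad as otherPad } from "https://example.com/utils/mod.ts";  // someone else's "utils"
// No collision: each module is identified by its full URL, not by a bare
// name that depends on whatever comes first on a search path.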
Allow me to provide an extremely relevant example (a medium-sized code base).
About 100 python files, each one approximately 500-1000 lines long.
Imagine in each one of these files, there are 10 unique imports. If they are URLs (with version encoded in the URL):
- How are you going to pin the dependencies?
- How do you know 100 files are using the same exact version of the library?
- How are you going to handle refactors, upgrades, maintenance, and deprecation of dependencies?
How will these problems be solved? Yes, I understand the benefit of the URL: it's a unique identifier. But you need an intermediate "lookup" table to decouple the source from the codebase. That's usually requirements.txt, poetry.lock, Pipfile.lock, etc.
The Deno docs recommend creating a deps.ts file for your project (and it could be shared among multiple projects), which exports all your dependencies. Then in your application code, instead of importing from the long and unwieldy external URL, import everything from deps.ts, e.g.:
// deps.ts
export {
  assert,
  assertEquals,
  assertStrContains,
} from "https://deno.land/std/testing/asserts.ts";
And then, in your application code:
import { assert, assertEquals, assertStrContains } from "./deps.ts";
This was my first instinct about how I'd go about this as well. I actually do something similar when working with node modules from npm.
Let's say I needed a `leftpad` lib from npm: it would be imported and re-exported from `./lib/leftpad.js`, and my codebase would import leftpad from `./lib`, not by its npm package name.
If/when a better (faster, more secure, whatever) lib named `padleft` appeared, I would just import that one in `./lib/leftpad.js` and be done. If it had an incompatible API (say, a reversed argument order), I would wrap it in a function that accepts the original order and calls padleft with the arguments reversed, so I wouldn't have to refactor imports and calls in multiple places across the project.
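Roughly what that indirection looks like (the package names and the reversed signature are hypothetical):

// ./lib/leftpad.js — the only file that knows which third-party package is used.
// Before: export { default as leftpad } from "leftpad";
// After switching to a hypothetical "padleft" package whose argument order
// is reversed, wrap it so callers keep the original signature:
import padleft from "padleft";

export function leftpad(str, len, ch = " ") {
  return padleft(len, str, ch); // adapt to padleft's (hypothetical) order
}

// ./lib/index.js — so the rest of the codebase can `import { leftpad } from "./lib"`:
export { leftpad } from "./leftpad.js";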
Yeah, this sort of "dependency injection" scheme is better than having random files depend on third-party packages anyway: it centralizes your external dependencies and makes it easier to run your browser code in Node or vice versa: just implement `lib-browser` and `lib-node` and swap them out at startup.
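A minimal sketch of that swap at startup (file and function names are made up; bundler aliases or conditional exports are common alternatives):

// Both ./lib-browser.ts and ./lib-node.ts export the same interface,
// e.g. a createClient() backed by fetch or by Node APIs respectively.
const isBrowser = typeof (globalThis as any).document !== "undefined";

export async function loadLib() {
  // Pick the implementation once, at startup, via dynamic import.
  return isBrowser
    ? await import("./lib-browser.ts")
    : await import("./lib-node.ts");
}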
Import maps are an upcoming feature on the browser standards track gaining a lot of traction (Deno already supports them), and they offer users a standardized way to maintain the decoupling that you mentioned, while letting them refer to dependencies in the familiar bare-identifier style they're used to from node (i.e. `import * as _ from 'lodash'` instead of `import * as _ from 'https://www.npmjs.com/package/lodash'`).
I imagine tooling will emerge to help users manage & generate the import map for a project and install dependencies locally similar to how npm & yarn help users manage package-lock.json/yarn.lock and node_modules.
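For reference, an import map is just a JSON mapping from bare specifiers to URLs. A sketch (the JSON is shown as comments so this stays one TypeScript file; the exact Deno CLI flag has varied between versions, so treat the command as approximate):

// import_map.json:
// {
//   "imports": {
//     "asserts": "https://deno.land/std/testing/asserts.ts"
//   }
// }
//
// Run with something like: deno run --importmap=import_map.json main.ts

// main.ts — a bare specifier, resolved through the import map:
import { assertEquals } from "asserts";

assertEquals(1 + 1, 2);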
Yeah, I agree, but that intermediate lookup table (a) can live in code and (b) can map local package names to URLs.
One-off scripts would do `from https://example.com/package import bar`, and bigger projects could define a translation table for the project (e.g., in __init__.py or similar).
Embedding this sort of metadata in the runtime environment has a lot of advantages too: it's a lot easier to write scripts that query and report on the metadata if you can just say something like `import deps; print(deps.getversions('https://example.com/foo'))`.
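That snippet is hypothetical Python; the same idea in the deps.ts style from above might look like this (the names, URLs, and versions are illustrative):

// deps.ts (sketch): alongside the re-exports, keep a queryable table of
// what is pinned where.
export const versions: Record<string, string> = {
  "https://example.com/foo": "1.4.2",
  "https://deno.land/std/testing/asserts.ts": "std@0.50.0",
};

// Any script can then report on the project's dependencies:
//   import { versions } from "./deps.ts";
//   console.log(versions["https://example.com/foo"]);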
One of the best parts about web development is that, for quick POC-type code, I can include a script tag that points at unpkg.com or similar and just start using any arbitrary library.