It's nice for anything with nested scopes -- an example that came to mind since a link to Crafting Interpreters was also on the front page: variable lookup in an interpreter. If each scope gets its own set of bindings, you can build a ChainMap from most specific to least specific and look up values in that instead of manually traversing scopes.
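A minimal sketch of that idea (the variable names are my own; the `new_child`/`parents` API is real ChainMap):

```python
from collections import ChainMap

# Each scope is its own dict; the chain runs from innermost to global.
globals_ = {"x": 1, "y": 2}
env = ChainMap(globals_)

# Entering a nested scope: new_child() pushes a fresh mapping in front.
inner = env.new_child({"x": 10})

print(inner["x"])  # 10 -- shadowed in the innermost scope
print(inner["y"])  # 2  -- falls through to the enclosing scope

# Leaving the scope: .parents drops the front mapping again.
print(inner.parents["x"])  # 1
```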
If you're implementing some kind of style sheet system, you often want to support a tower of overrides (say, theme → document → chapter → explicitly-marked-section).
This strikes me very much as the wrong default behaviour. If you want the (IMO expected) behaviour of updating maps later in the list, the docs advocate creating an entirely new class with overridden set and delete methods[0]. That's not the end of the world, but if they had made this the default behaviour then getting the current behaviour would just be
chain_map.maps[0][key] = value
and you wouldn't need to create a new class at all.
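For context, the new class the docs advocate is essentially their DeepChainMap recipe, which writes to the first mapping that already holds the key (falling back to the front map):

```python
from collections import ChainMap

class DeepChainMap(ChainMap):
    """Variant whose writes/deletes update the first mapping that
    already contains the key, instead of always using maps[0]."""

    def __setitem__(self, key, value):
        for mapping in self.maps:
            if key in mapping:
                mapping[key] = value
                return
        self.maps[0][key] = value

    def __delitem__(self, key):
        for mapping in self.maps:
            if key in mapping:
                del mapping[key]
                return
        raise KeyError(key)

defaults = {"theme": "light"}
overrides = {}
d = DeepChainMap(overrides, defaults)
d["theme"] = "dark"  # lands in `defaults`, where the key already lives
```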
I think it depends on your use case. If you're using it to unify several disparate mappings then it might make more sense for changes to affect the originating mapping. OTOH if you're using it to provide several levels of defaults (e.g. global config -> user config -> envars -> runtime config) then it makes more sense for changes to affect only the topmost mapping. This is also how union filesystems work.
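The defaults use case in one small sketch (config names are assumptions, not a real API):

```python
from collections import ChainMap

# Lower-priority layers sit later in the chain.
global_cfg = {"color": "blue", "verbose": False}
user_cfg = {"verbose": True}
runtime = {}  # highest priority; all writes land here

cfg = ChainMap(runtime, user_cfg, global_cfg)

print(cfg["color"])        # "blue", found in global_cfg
cfg["color"] = "red"       # mutates only `runtime`
print(global_cfg["color"]) # still "blue" -- the default is untouched
```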
FWIW I find the current behavior preferable. In fact in all cases I use ChainMap the first mapping is always an empty dictionary, because I specifically don't want to mutate the others.
hmm. That is not what I would have expected. I would have thought it would update the value of "iMac" in whichever map already contains the key, and otherwise add the key at the top level. The same would go for:
>>> del inv['IMac']
deleting the key in the first mapping, it appears. My guess is it's designed to ensure that updating a key won't accidentally update a default mapping. That protection can easily be bypassed when you do want it, though, by writing to one of the underlying mappings via the `.maps` attribute.
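A sketch of that workaround, reusing the `inv` example from above (the values are made up):

```python
from collections import ChainMap

defaults = {"iMac": 1000}
inv = ChainMap({}, defaults)

# An ordinary assignment only ever touches the first map...
inv["iMac"] = 1200

# ...but any underlying mapping is reachable through .maps:
inv.maps[1]["iMac"] = 900  # updates `defaults` directly

print(inv["iMac"])       # 1200 -- the first map still wins on lookup
print(defaults["iMac"])  # 900
```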
Your point is interesting, and it's how I've designed a lot of APIs when I'm not sure how I'll end up needing the class or function the most: slightly less ergonomic in the known use case, but higher "median" ergonomics across the board. A third alternative, of course, would be to add a parameter to the constructor specifying this behavior (e.g. write_depth=1).
However, it appears that the "locals, globals, builtins" lookup was the design constraint this was intended for, and core Python seems to prefer new classes over compact functionality. For example, look at OrderedDict being a separate class rather than an "ordered=" keyword-only parameter on dict.
That design would have made ordered dict incompatible with dict (an ordered dict couldn't be constructed with a key named "ordered", and neither could an unordered one), and at the time keyword args were themselves unordered, so an ordered dict would have had to be constructed from an iterable of tuples instead of keyword key/value pairs.
I disagree. There’s a strange precedent for builtins overloading like ‘type’. The same could be done for ‘dict’. Do you want me to share a satisficing call signature?
There are a few modules on PyPI (env, ezenv) that make the environment-lookup portion more convenient/robust by selecting a prefix for your application's config. (Or write it yourself.) A prefix function that returns a mapping with the keys sliced and lowercased works like this:
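A hand-rolled version might look like this (the helper name `env_config` and the `MYAPP_` prefix are illustrative, not from any of those packages):

```python
import os
from collections import ChainMap

def env_config(prefix):
    """Return a dict of environment variables starting with `prefix`,
    with the prefix stripped off and the remaining key lowercased."""
    return {
        key[len(prefix):].lower(): value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }

# e.g. MYAPP_TIMEOUT=30 in the environment becomes {"timeout": "30"},
# ready to slot into a ChainMap of config layers:
defaults = {"timeout": "10", "color": "auto"}
cfg = ChainMap(env_config("MYAPP_"), defaults)
```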
[1] https://docs.python.org/3.7/library/collections.html#collect...