This old dev has been around to witness the transition from in-depth algorithm development to algorithms as commodities. I've pounded out my share of B-tree search, polygon intersection, and numeric stuff. Try doing numeric stuff reliably and quickly on a tiny computer without a Floating Point Unit and you'll know why you might need to learn, and then program, Jack Bresenham's line-drawing and circle-drawing algorithms.
And there's Newton and Raphson's sqrt(). Hint: if you understand your data and your precision requirement, you can avoid iterating it all the way out to the end of your integer precision, and get a lot more work done.
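For the curious, the early-exit idea in a minimal Python sketch (Python for readability; back then this would have been integer assembler or C, and `max_iters` is just my illustrative knob):

```python
def isqrt_newton(n, max_iters=None):
    # Newton-Raphson integer square root. Run to convergence and you get
    # floor(sqrt(n)); cap max_iters and you trade accuracy for speed when
    # your data and precision requirement allow it.
    if n < 2:
        return n
    x = n
    steps = 0
    while True:
        y = (x + n // x) // 2
        if y >= x or (max_iters is not None and steps >= max_iters):
            return x
        x = y
        steps += 1
```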
Try transposing a 4-megabyte matrix before the week is out on a machine with 32KiB of RAM and two 5-megabyte hard drives. Yes, it can be done.
When I started doing this work it took deep understanding to get anything done. Now it takes the skills of a good reference librarian. We have to be able to find stuff (via npm, nuget, maven) quickly, assess its quality, and figure out how to use it. So the type of thinking we need has changed from "deep" to "broad".
I disagree somewhat. For example in my freshman year of college I took a numerical analysis class that covered Newton-Raphson, Brent’s method, advanced quadrature, Runge-Kutta, preconditioning, all sorts of matrix linsolve, LU, rotation matrix, inversion stuff, floating point byte structure, and various topics in parallel computing like Amdahl’s law.
It was intense and very detailed, with a lot of discussion of how these issues arise in practice and how to write these algorithms in C and also in Python (like, in C extension modules of CPython).
My first job out of college was in a defense research lab that had projects running on tons of simulation systems, radar systems, hardware and embedded systems for vehicles used in the military.
What I learned later, when I went back to grad school and worked more mainstream tech product jobs, is that in those “hardcore numerical computing” situations there is often a complete absence of architecture or design, no framing of the problem in terms of the customer or success criteria, too much willingness to reinvent wheels, and a constant belief that building it is always better than buying it.
From a technical point of view, there was also a gaping absence of statistics, especially statistical optimization techniques like Markov chain methods and simulated annealing.
There was a certain chip on people’s shoulders, an attitude that hardcore graph algorithms or hardcore data-structure-based search was “real programming”, when you could solve the same problem with simulated annealing in 1/100th the amount of code, as long as you didn’t parochially decide your numerics experience was superior and force a solution that way.
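For anyone who hasn't seen it, the whole technique really is about this much code. A bare-bones sketch (my own, with `cost` and `neighbor` left to whatever the problem defines):

```python
import math, random

def anneal(initial, cost, neighbor, t0=1.0, t_min=1e-4, alpha=0.95, steps=100):
    # Bare-bones simulated annealing: always accept improvements, accept
    # worse moves with Boltzmann probability, cool the temperature geometrically.
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        for _ in range(steps):
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha  # cool down
    return best, best_cost
```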
Even though my peers were really experienced, it felt like there was an inverse relationship between numerical algorithm thinking and creative problem solving thinking. Hard core algorithms just sucked all the air out of the room, without making projects more successful.
> Try transposing a 4-megabyte matrix before the week is out on a machine with 32KiB of RAM and two 5-megabyte hard drives.
Transposing is just copying bytes from one place to another, isn't it? There's no maths involved, is there?
To have that run in a week with the most naive approach possible you'd only need a machine that can copy seven words a second. Are there real relevant machines that can't do that? What algorithms could help you anyway? Something to do with cache and the order in which you read and write? And you've got plenty of space as you can trivially do it in-place.
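If there is an algorithm to it, I'd guess it's something like a tiled transpose: read a square block that fits in RAM, transpose it in memory, write it into its transposed position, so each region of the disk is touched in large sequential chunks instead of seeking per element. A rough Python sketch (element size, dimensions, and file layout are all made-up assumptions on my part):

```python
def transpose_on_disk(src_path, dst_path, rows, cols, block=64, elem=2):
    # Tiled out-of-core transpose of a row-major rows x cols matrix of
    # `elem`-byte elements, written out as a cols x rows matrix.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        for r0 in range(0, rows, block):
            for c0 in range(0, cols, block):
                h = min(block, rows - r0)
                w = min(block, cols - c0)
                # read an h x w tile, one source row at a time
                tile = [None] * h
                for r in range(h):
                    src.seek(((r0 + r) * cols + c0) * elem)
                    tile[r] = src.read(w * elem)
                # write the tile transposed, one destination row at a time
                for c in range(w):
                    out = b"".join(tile[r][c * elem:(c + 1) * elem] for r in range(h))
                    dst.seek(((c0 + c) * rows + r0) * elem)
                    dst.write(out)
```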
I suspect, since the poster mentioned having two hard drives, that really they're confusing transposition with inversion. These aren't the same things.
And yet we still do, and it's interesting why we do.
Most software, we optimize to make it more legible to the compiler.
But some software—software we don't trust ourselves to optimize without changing its meaning—we leave alone, and then instead we optimize the compiler specifically to compile that kind of software into better object code.
This is what happens with maths code (usually in FORTRAN) and the FORTRAN compilers that work on them. We don't improve the Fourier-transform implementation; we just teach the compiler how to take the "textbook" implementation we've got, and do ever-fancier things to it.
Of course, one could also improve the Fourier transform's design—but that requires a mathematician, not a software engineer. Nobody's going to come up with an algorithm with entirely-better time/space complexities just by noodling around in an editor with the existing one. They've got to start from scratch with new maths, derive new theorems, write a textbook implementation of that, and then contribute it to the codebase. And then we'll start a new cycle of getting the compiler to optimize that textbook code.
Back in 2008 (?), I was in aerospace. The big thing at that time was to become a "systems integrator" instead of a manufacturer. While sound on the surface, the OEMs are still manufacturers today, just with a higher degree of integration of third-party systems, e.g. radar and so on.
Today, the same logic can be applied to all the XaaS business models out there. SaaS, PaaS, algorithms: it seems there could be two extremes. On one end you have companies developing the services; on the other you have businesses excelling at combining these services.
It all depends what kind of work you do. I do things that touch embedded hardware and graphics and frequently find myself doing exactly the sort of things you described. If you miss it, it's certainly not gone :) (and I made a conscious decision to build my career around this, rather than fishing for npm modules).
Then there is teaching it in schools to new comp sci people, as part of a curriculum designed to help you understand basic principles.
Then there are implementations in various environments, and constant optimizations; witness the JS engine wars, for instance.
People specialize in their own industry on various niches — primitives, or combining primitives, etc. And algorithms are just one part of an industry!
How many of you want to invent your own SHA1 or BLS signatures LOL
You just use them and they have documented properties and guarantees.
These are building blocks people use. This thing about glorifying algorithms is sort of an “applied maths” mentality. And models like Black-Scholes therefore become taught everywhere and become self-fulfilling prophecies.
But really, math is just a very simplified version of computing and modeling. People operate with maybe 10-20 symbols throughout the entire thing. There are many things — like the Game of Life and A New Kind of Science — that you can’t describe in analytic form.
I know you meant this harmlessly and maybe even encouragingly, but this post comes off as pretty entitled. It comes off kind of like, "hey, I want access to your knowledge and wisdom, but I want you to do an enormous amount of work to put it into a format that is easy for me to consume at my leisure. plzkthxbye."
Usually when you want to learn from someone more experienced than you, you seek them out, and you do the work to understand their wisdom, not the other way around. Plato didn't ask Socrates to write everything down so he could read it and grow, Plato studied under Socrates and did the work of transcribing his words himself for the future.
> It comes off kind of like, "hey, I want access to your knowledge and wisdom, but I want you to do an enormous amount of work to put it into a format that is easy for me to consume at my leisure. plzkthxbye."
Kind of.
When guys like OJ write books, I buy them. When they blog / tweet, I re-tweet them. Mostly to share knowledge with my industry peers and other newbs, but also to support directly or indirectly.
I don't see this as one-sided, and I don't see anything wrong with expecting something in return for sharing knowledge.
I will tell you what disappoints me - when good knowledge is lost over time. Or worse, hoarded and lost anyway.
And I will tell you what else disappoints me - people who assume young newbs have an entitlement complex when seeking out knowledge, let alone encouraging others to share knowledge.
You strike me as a 'what's in it for me?' type of person. That's ok, I might read your blog or buy your book if you have them.
> why waste time implementing the ‘low’ level stuff when there were plenty of other problems waiting to be implemented.
Because:
- The package is licensed under the GPL
- The lib allocates memory in such a way that it messes up my program
- The last update to the lib was 3 years ago, and it will take at least another 3 years for the author of the package to get out of jail, which is the minimum punishment under some NSL
- I don't like how its interface is designed
- Because I get paid even if I write everything by myself
The list goes on.
Another way to look at this is: "Do I really want to just implement a linked list? Or a sizable & queueable list that allows me to sort items via LRU?" Sometimes you just have to write some "low level stuff" yourself to make it exactly as you want.
> There are companies where algorithms are not commodities
And there is an entire industry! Embedded (but not the Raspberry-pi kind of embedded that is similar to desktop).
Try to do a dead reckoning system, or anything related to real-time gesture/motion analysis. Try reading, processing, mixing and outputting audio in real time. Or any DSP related stuff. What about FPGA, ASICs, etc.?
Why oh why is the embedded industry so easily forgotten? It's not all cloud, food apps, SaaS and Electron, ya know?
Edit to add: all of the above with the limited resources of embedded, which force you to improve your algorithms.
> Why oh why is the embedded industry so easily forgotten? It's not all cloud, food apps, SaaS and Electron, ya know?
Because the other industry is inherently linked to the internet. It's like the media telling you that the media is very important, or your brain telling you that it's the most interesting organ in your body (I forget which comedian I stole that from).
Depends on the embedded ecosystem. STM has tons of free software packages for their products that remove the need to write drivers and a variety of algorithms from scratch.
Albeit not the same, Xilinx and Altera have a plethora of FPGA IP that can be licensed/bought, although I can see the merit in a custom implementation here to save money.
> I date the age of the Algorithm from roughly the 1960s to the late 1980s. During the age of the Algorithms, developers spent a lot of time figuring out the best algorithm to use and writing code to implement algorithms.
Today there are orders of magnitude more developers spending time figuring out algorithms and writing code to implement them than in the 80s. The difference, maybe, is that even more developers today do not write any algorithms at all. But if there is an age of the algorithm, it is today. At no point in human history have as many algorithms been written as today. And next week, many more will be written!
You're missing the point. There is a larger absolute number of people working on complex algorithms, but as a percentage it is much smaller. In the 60's, deep algorithmic knowledge was required for 90%+ of programming tasks; now it is probably closer to 1%. It is perfectly possible to make a living as a programmer nowadays without being able to invert a binary tree.
What's the obsession with inverting a binary tree? I have never found a use case for that particular algorithm, but on the other hand I would be very happy if more programmers were aware that a WHERE clause in an SQL statement (and maybe an index) is more efficient than selecting the entire database and looping over the results, even if the end result is the same.
I understand that in order to come up with the inversion on the fly you need to know about trees and recursion, but is it really that scary?
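To make the earlier WHERE-clause point concrete (the table and column names here are invented):

```python
import sqlite3

con = sqlite3.connect("example.db")  # hypothetical users(id, name, country) table

# Let the database (and its index, if any) do the filtering:
rows = con.execute(
    "SELECT id, name FROM users WHERE country = ?", ("NL",)
).fetchall()

# ...rather than dragging every row over and filtering in application code:
# rows = [(r[0], r[1]) for r in con.execute("SELECT id, name, country FROM users")
#         if r[2] == "NL"]
```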
Probably unrelated, but every time I update Homebrew, I get the feeling that there's something that should be O(log N) or O(N) but the implementation takes O(N), O(N^2) or worse. There's no way checking a small list of installed package versions against a large list of available versions should take that long.
Oh, yes, this! I feel like I'm going crazy that my team members cannot see how incredibly slow Homebrew is. I don't know the reason for this, but updating the list of packages is excruciatingly slow and it does it every time I want to install anything.
Every time I have to use it, I have time to ponder about my life choices (like why am I using a mac to do sysadmin/dev work?).
I doubt it's just this, since on my laptop just `brew list` from a warm cache with 102 packages takes about a second, and twice as long if the files must be read from the SSD. That's a lot of reading and computation for a list of 102 packages.
Apparently I've been living under a rock, because despite 20+ years of professional software development (and a degree in discrete mathematics) I wasn't familiar with, or at least didn't recall, this "invert a binary tree" problem. I had to look up what we mean by "invert" here.
My first thought was "flatten the tree depth first and then reverse that linear list" which I knew to be inefficient, but I'm guessing I'd eventually stumble upon the recursive version of that when I got down to the nitty-gritty.
But now that I think about it, can't you solve this problem with a single boolean flag that says "read right to left" or "read left to right"? I.e., if the children are labeled `A` and `B` and you interpret "normal" as A,B and "inverted" as B,A, isn't that sufficient? That's basically what the recursive version of this algorithm is doing, but why bother to actually move data around?
I get that this is an abstract problem, but I'm having trouble understanding any practical use case for this concept.
If you've got a real, in-memory, linked-list-style binary tree my `is_inverted` flag seems like it would be sufficient.
And if the real problem is moving blocks of data around on disk or in buffers, then isn't this all a question of the exact "flat" representation of this tree?
If my whiteboard answer was "keep a flag that says whether or not the tree is inverted" would I be hired?
Can someone reframe this question in a way that makes it clear that the trivial rtl-mode vs ltr-mode is insufficient? The only way this question makes sense to me is if you're really looking for someone to come up with the serialized representation of that data structure that makes this transform efficient, but none of the public answers to this question seem to cover that at all.
I honestly don't understand how this problem is meaningful for the in-memory, linked-list case.
Most people interpreted it as a horizontal flip like you did, though some cursed souls leapt to doing a vertical flip and converting it into a DAG. A horizontal flip is actually extremely simple: just check the base case, swap the pointers and recurse. Arguably easier than fizzbuzz, and imo a pretty solid interview question.

A good argument for physically flipping it instead of tagging it as flipped is that it localizes the operation, instead of leaking a (potentially recursively) tagged tree into the rest of the application.

Of course all the ink spilled about binary tree 'inversion' doesn't have much to do with the actual interview, which apparently was about converting from a min-heap to a max-heap. Max also self-described as 'rude' w/r/t the interview, so it's hard to know if the heap incident was even why he wasn't selected.
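For concreteness, the horizontal flip described above really is just a few lines (a Python sketch):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def invert(node):
    # Base case, then swap the children and recurse -- that's the whole trick.
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```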
Like many algorithm interview questions, I'm not sure it's meant to be a practical problem, but rather one that demonstrates your ability to think through problems.
(Out of principle, I prefer to ask interview questions that are based on real problems I've had to solve in my job, but sometimes they're harder to ask coherently than well-defined/concise problems like "invert a binary tree" so I can see the appeal)
> If my whiteboard answer was "keep a flag that says whether or not the tree is inverted" would I be hired?
Personally I'd give you bonus points for recognizing that could be a solution, then clarify that I wanted a solution that actually moved the data around and give you the opportunity to answer it in light of that clarification.
They're not missing it, that is their point exactly: that relative numbers don't matter, absolute numbers do. Of course you no longer need 90% of programmers to write hardcore algorithms - we've evolved beyond that (and yes it's evolution, not involution).
I remember reading an article a while back about how professional photography has become a commodity. You can find and license any imaginable photograph, for pennies, in minutes. From the best photographers in the world. So from a business perspective, photography as a pure art has been devalued to almost nothing. Nobody pays you to take the very best possible photo.
So why do people still hire photographers? To put themselves or their product into a photo. Most photographers are no longer paid to take the very best photos, but to make their photos specific to what the particular client cares about. That's something that can't be commoditized.
I think that's where software is now as an industry. Very few of us these days get to innovate at a "pure art" level. Instead, we get paid to pull together a bespoke solution that's sensitive to a given business and its needs. It isn't novel at the macro-level, only at the micro-level.
This can be kind of depressing, but it's also a little comforting that those micro-level solutions are so hard to fully commoditize. As long as the real world is messy and varied, the leaves in the software tree will have to be too, and there will be work that needs doing.
I don't see how that's depressing at all. Another way to phrase what you're saying is "Developers get paid to write software that solves problems people actually need solved."
For many of us, writing something pure and generic and beautiful is much more gratifying. And it's depressing to think that we'll probably never write something like that that's good enough to be picked over the version that was developed at Google and then open-sourced.
That's not unique to software, or the modern world at all, though.
Most people have never created the best-in-the-world implementation of anything. The math just doesn't work that way. Only one can be the best. We need to stop turning such unobtainable things into goals.
That's not what I'm saying at all. Of course very few people make "the best" of anything. What I'm saying is that the internet, and in our case open-source software, have caused "the best" to be all that matters, because it's accessible to everyone. There's no longer a practical reason for most people to engineer most (generic) things themselves because the better solution isn't hidden behind a license, or proprietary to an organization, or inaccessible due to physical media. People used to buy things like linkers. Today, the state-of-the-art is nearly always at your fingertips, for free, in seconds.
I'm not saying this is a bad thing. Progress always makes certain endeavors obsolete over time. You don't see job postings for telegraph operators any more because it's no longer useful. But if a person enjoys one of those endeavors for its own sake, it's bittersweet to see it relegated to a hobby.
I think software is unique in that there isn't a neat "research" vs. "engineering" side, at least in so far as a lot of boundary pushing in the field has happened by engineers actually building software.
I think there will always be a frontier where that is the case in software—there will always be hackers building open source projects that push some boundary, fostering their own community, and potentially changing broader paradigms—but that more and more, we're seeing a neater boundary between research (Universities, labs, projects that are essentially R&D teams at big companies) and engineering.
I mean, nobody's stopping you from writing your perfect, beautiful Haskell implementation of Dijkstra's Algorithm.
I guess it's simply a different approach, but I'd rather not waste time re-implementing the wheel when I could be making software that will help somebody fulfill a concrete need.
My point is that the ideal scenario, where you write something generic that also meets people's real needs (better than what's already out there), has become a very rare thing.
I concur:
Nowadays, the basic building blocks are done. Good sorting, database indexing, O(1) set membership, many variants of lists, hash tables, and other higher-level data structures are well covered by modern libraries. The foundations of modern system design are algorithms and data structures, and they are largely commoditized for general use. Certainly, working on general-purpose, clever-but-foundational algorithms is becoming increasingly niche.
I disagree with the final conclusion that somehow algorithms have faded into obscurity.
Anyone writing any program has to consistently track their expected program performance as it shuffles data between commoditized library calls or between network endpoints. This is algorithm design. Many "foundational" algorithms like max-flow actually make use of "foundational(er)" algorithms and shuffle data between them.
This is what the modern programmer does when he decomposes a data-intensive problem into sub-problems and uses commoditized libraries and re-composes those solutions.
The push to catalog and archive a wide array of well-implemented algorithms and data structures tracked the emergence of personal and commodity hardware. The emergence of new hardware will require new solutions. Mobile, WebAssembly (perhaps), ASIC-based data centers, and CUDA are all exciting new areas where our "foundational" algorithms and canonical solutions need to be revisited, redesigned, and re-tuned. Or at least re-composed.
I spent a lot of time reading Knuth's books, especially Seminumerical Algorithms when I was implementing multi precision arithmetic for various architectures. What amazing pieces of work they are. That was back in the time when you could look at assembler (or even C code) and get a good idea of how many cycles it would take.
I love the Art of the Algorithm, but I have to say I'm very happy to use off the shelf libraries now. I've implemented binary search dozens of times and each time spent ages fixing all the corner cases. Instructive, but a waste of time when you can import a library which is battle tested and faster than anything you'll ever write.
Algorithms are dead - long live the (commoditized) Algorithm!
Incidentally, I think there is at least one important area where Knuth is actually still ahead of off-the-shelf libraries as of 2020: "broadword computing". When skimming volume 4A for semi-recreational purposes a few years ago, I was surprised to find multiple "bit-hacks" which were better than anything I had found in other sources, and proceeded to use them in my own open-sourced work. Every time I've encountered a situation where bitvectors are useful, I have benefited from rolling my own implementation.
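To give a flavor of what "broadword" means here (a well-known example, not one of the more obscure 4A gems): tricks like the branch-free population count, which sums bit counts in progressively wider fields of a 64-bit word. Transliterated into Python; in practice you'd write it in C or just use a POPCNT instruction:

```python
def popcount64(x):
    # SWAR / "broadword" population count of a 64-bit word: pairwise sums,
    # then 4-bit fields, then a multiply to fold all byte counts into the top byte.
    x -= (x >> 1) & 0x5555555555555555
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F
    return ((x * 0x0101010101010101) & 0xFFFFFFFFFFFFFFFF) >> 56
```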
Came here to say this. It's rare that you want to consume these things through an abstraction layer: the layer is often heavier than the work underneath. Sucks for readability, though, so I usually spend as much time documenting as implementing.
>This was the age of the algorithm. [...] When it comes to algorithm implementation, developers are now spoilt for choice; why waste time implementing the ‘low’ level stuff when there were plenty of other problems waiting to be implemented. Algorithms are now like the bolts in a bridge: very important, but nobody talks about them. Today developers talk about story points, features, business logic, etc. [...] Today, we are in the age of the ecosystem.
The blog author doesn't make it explicit but his usage of "algorithm" is actually about "low-level algorithms" which is how it seems to support his claim that "algorithms are now commodities".
But "algorithms" in the general sense are not commodities. It's just that we're now free to concentrate on higher-level problems with new algorithms.
E.g., in the 1980s & 1990s, the Barnes & Noble section for computers would have books with algorithms showing how to write disk-based B-trees from scratch. But now with SQLite library in wide usage, you can just use their B-trees instead of reinventing your own. Some more examples of low-level algorithms I used to write by hand:
- manually uppercase chars by subtracting "32" from the ASCII code because there was no Upper() function in a standard library.
- manually titlecase/propercase/lowercase a person's name from "JOHN DOE" to "John Doe" by looping over the string char-by-char and adding "32" to the ASCII code unless the char is at the start or follows a space, because there was no TitleCase() function (roughly the loop sketched after this list)
- adding a drop shadow beneath a window because the operating system GUI didn't render shadows for you
- manually writing linked lists to create dynamically sized memory buffers because there was no C++ STL library yet, or built-in associative arrays in Python/C#
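That titlecase loop, roughly, in Python (the original would have been C with char arithmetic; this is just an illustration):

```python
def title_case(name):
    # Walk the string: uppercase a letter at the start or after a space,
    # lowercase everything else -- pure ASCII arithmetic, no library call.
    out = []
    prev_space = True
    for ch in name:
        code = ord(ch)
        if prev_space and ord('a') <= code <= ord('z'):
            code -= 32            # lowercase -> uppercase
        elif not prev_space and ord('A') <= code <= ord('Z'):
            code += 32            # uppercase -> lowercase
        out.append(chr(code))
        prev_space = (ch == ' ')
    return ''.join(out)

# title_case("JOHN DOE") -> "John Doe"
```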
I don't have to do any of that low-level tedious work anymore, but I still write new algorithms, because there is always some [missing functionality] where a standard library doesn't exist or a consensus implementation hasn't yet "won" in the marketplace of ideas. It could be a distributed cloud/desktop/mobile data-sync algorithm, or a collaborative recommendation algorithm, etc. Algorithms (in the general sense) will not be commoditized for decades -- if ever.
I wrote this because I wanted (normalizing) locale comparisons to be super fast, as they can be part of the key for cache lookups of text layout. In a language identifier such as "en-Latn-US" the script is in title case.
So this kind of thing still does happen, just at the lowest levels of libraries that provide nice clean abstractions for other people to use.
One of my favorite mini games at my job is rewriting classic algorithms to run in batched mode on gpu/tpu. The speed improvements often improve model training time by days, and it's always a lovely intellectual challenge. (The basic challenge is to rewrite the algorithms in terms of matrix operations which operate on many examples of the problem at once, while eliminating all branching.)
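A toy example of the style (mine, not anything from work): run Newton's square-root iteration over a whole batch of inputs at once, with a mask in place of the per-element branch.

```python
import numpy as np

def batched_sqrt(batch, iters=50):
    # Batched, branch-free Newton iteration: every lane takes the same step,
    # and np.where masks out the lanes we don't want instead of an `if`.
    batch = np.asarray(batch, dtype=np.float64)   # assumes non-negative inputs
    x = np.maximum(batch, 1.0)                    # safe initial guesses
    for _ in range(iters):
        x = 0.5 * (x + batch / x)                 # one Newton step, whole batch
    return np.where(batch > 0.0, x, 0.0)          # mask instead of branching
```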
I am somewhat confused about the definition of Algorithms under the article's context.
What is this Algorithm we are talking about? Is it about basic data structures? If that is what we're talking about, I feel that even 15 years ago people had already given up writing those things from scratch for new projects.
I agree encapsulation is growing. As developers we are in a reality that we lose more and more predictability over the behavior of our code.
But that seems to create more demand for nuanced solutions to rein in the complexity, since commoditized algorithms created this problem rather than solving it.
I don’t remember last time I implemented sorting, search or trees, these are available in all languages and runtimes.
Still, I implement a lot of my own algorithms, for the following reasons.
1. Performance. Custom algorithms and data structures are often a requirement for writing performant code, because SIMD, the memory hierarchy, and other modern hardware-related shenanigans are largely ignored by the authors of standard libraries.
2. GPGPU. Due to their massively parallel nature, GPUs need completely different algorithms; a naïve implementation is often an order of magnitude slower than a purpose-built one, due to things like memory access coalescing and group shared memory.
3. Licensing. In my area of work (CAD/CAM/CAE) some good quality libraries are only available as GPL or even AGPL. Apparently, the commercial licenses are targeted towards companies like GM or Airbus, way too expensive.
Yes commodities, but I still need an intuitive understanding of how the algorithms work to pick the right one. I might even need to read the code when I get deeper into solving my problem. Or implement a dummy version simply for learning.
Algorithms are commodities, but algorithm skill is still deeply needed
I wonder if the number of people working on algorithms is about the same proportion of the workforce, it's just that there's way way more developers now, and they need commoditized algorithms that solve 90% of the problem (but that 10% is still out there...).
For example, if you summed up all the 'core committers' of open source projects, people working on operating systems, embedded software, systems programming, and other specialized software developers that might think about unique algorithms, you wonder if that's about the same as the number of total software developers in the 80s
Personally I found it a bit disappointing moving from academia to industry and finding out that there is not much of a market for algorithm implementation or development/research. Sure, there are specialized products where they are important (DB, OS, languages/libraries, trading systems) and even places for algorithms in business apps, but it doesn’t seem you can really “specialize” in it as a career (like distributed systems, system programming, frontend) unless you’re very picky about which jobs you take.
All that's old is new again. The level of abstraction just keeps getting higher.
Someday, ecosystems like AWS and Azure will be as standardized as programming languages. You'll still be writing algorithms, but instead of telling one PC what to do, you'll be telling a distributed system what to do. The old design patterns will still be relevant; the implementation will just look different.
We're already there to some extent. It's just messy and fragmented right now.
And that's why HackerRank, Codility and all the other coding sites have zero relevance today with regard to the actual capability of the candidate. The fact that hiring committees, HR departments and software engineering managers rely on them goes to show how detached all those people are, and have been, from reality for a long time now.
Good software engineering skills on the other hand are pretty much more needed than ever.
sorry, but i don't agree with that assertion. algorithmic skills are never going to be out of date.
true, widespread availability of high quality implementations may obviate the need to implement such structures 'by hand'. however, the developer must understand not only the operations that a data-structure supports, but also their complexity. one needs to understand the fundamental properties of data-structures to use them properly, so that the application satisfies its own complexity requirements.
and complexity here is not just asymptotic complexity, but also the machine cycle count (of course benchmarking helps here. a lot). this also implies that one must understand the architecture of modern machines f.e. how the cache-hierarchy affects performance etc. etc.
every decade (or even less !) new architectures are sufficiently different from the previous generation that makes a mockery of our 'intuition' of their performance characteristics. data structures that might work wonderfully on old pdp machines would probably not be a good choice on modern machines with multiple layers of caches etc.
this constant tension between abstractness and efficiency is what makes, imho, programming such a joy :)
The thing is, in large tech companies like Google, you might be tasked with designing (or modifying a design of) something that works at Internet scale. For such cases, thinking in terms of algorithms is essential. That's why I think FAANGs still require algorithmic literacy. Unfortunately, other companies, which don't have FAANG-like problems, ape their recruitment process...
By definition, any code written to solve a problem is an algorithm (not just the standard stuff found in textbooks). Since a lot of people in IT write code to solve problems, then yes, they absolutely need to understand somewhat the performance characteristics and the correctness of what they wrote. Even if you don't need the performance characteristics, you certainly need the correctness part!
I don't think it's all or nothing. A component of being a good software engineer certainly involves being able to reason about performance characteristics, and the correctness of the algorithms you write (every piece of code written to solve a problem is an algorithm by definition).
I always regarded Codility as just a test whether one is a University of Warsaw computer science alumnus/na - and no wonder given who authored most of the tasks.