Makefiles. I always dismissed them as a C compiler thing, something that could never be useful for Python programming. But nowadays every project I create has a Makefile to bind together all the tasks involved in that project: bootstrapping the dev environment, running checks/tests, starting a devserver, building releases and container images. Makefiles are just such a nice place to put scripts for these common tasks compared to plain shell scripts. The functional approach and dependency resolution of Make let you express them with minimal boilerplate, and you get tab completion for free.
Sometimes I try more native solutions (e.g. Tox, Docker), but I always end up wrapping those tools in a Makefile somewhere along the road, since there are always missing links, and because Make is ubiquitous on nearly every Linux and macOS system it is often all you need to get a project started.
Example: https://gitlab.com/internet-cleanup-foundation/web-security-... Running the 'make' command in this repo will set up all requirements and then run code checks and tests. Running it again will skip the setup part unless one of the dependency files has changed (or the setup env is removed).
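A minimal sketch of that pattern (the target names, requirements.txt path and tools are illustrative, not taken from the linked repo; recipe lines need real tabs):

    .PHONY: all check test clean

    all: check test

    # The virtualenv doubles as the "setup" target: it is only rebuilt
    # when requirements.txt changes (or when .venv is removed).
    .venv: requirements.txt
        python3 -m venv .venv
        .venv/bin/pip install -r requirements.txt
        touch .venv

    check: .venv
        .venv/bin/flake8 src/

    test: .venv
        .venv/bin/pytest

    clean:
        rm -rf .venv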
In 2019 Makefiles are a useful tool for automating project-level things. Too often webapps will require you to install X in order to install Y in order to produce artifact Z. Since Make is old, battle-tested and everywhere, specifying "make Z" is a useful project-level development process. It's not tied to a language (e.g. Tox) nor a huge runtime (Docker). Make is small enough in scope to be easy, and large enough to be capable without a lot of incantations.
The big downside of Make, alas, is Windows compatibility.
> The big downside of Make, alas, is Windows compatibility.
GNU Make works fine on Windows. The sources come with a vcproj to build it natively, or you get it from ezwinports. At my dayjob, we have a pretty complicated build with GNU Make for cross-compiling our application to Arm and PowerPC, and it works on Windows, even with special Guile scripts to reduce the number of shell calls which are extremely slow on Windows.
The most popular folder on Windows is "My Documents", and it has a space, at least in some Windows versions. Make doesn't support such paths: http://savannah.gnu.org/bugs/?712
VBScript works better on Windows, IMO. Also works out of the box on all Windows versions since at least 2000 (on Win9x it was shipped with IE).
Not really... not since XP, anyway. Unless you have a space in your username (which is a terrible idea for many other reasons), your "Documents" path is C:\Users\JohnSmith\Documents. "Program Files" is pretty much the only important path which is likely to have spaces, and your makefiles (hopefully!) don't need to touch that.
On Windows, it's not up to me to decide where users will keep my stuff, and where it will work. Users decide.
For software to work well on Windows, it must support spaces in file names and paths. Also Unicode in file names and paths.
VBScript does, GNU Make doesn't.
> which is a terrible idea for many other reasons
If you use make to set up stuff, it's very possible you'll need to access "c:\Users\All Users", which contains a space. Also "c:\Program Files (x86)\Common Files", which contains more than one.
That is entirely correct and really the most glaring downside of Make. In my opinion: if you have spaces in your dependency names, just stay as far away from Make as possible.
Long paths were introduced in Windows NT in 1993, VC6 released in 1998.
I've just installed Visual C++ 6.0 Professional on a WinXP VMware machine. Took less than a minute, BTW; modern SSDs are awesome. The default installation path under Program Files also contains spaces: it's "C:\Program Files\Microsoft Visual Studio\VC98".
Eli is one of the few free software veterans who works exclusively on Windows. His ports are excellent and native Windows binaries wherever possible ("native" meaning: no MSYS at all). It's not that "native" is always better, but it is good to have the choice. Especially w.r.t. GNU Make, I found the MSYS version very hard to reason about, since the additional MSYS path conversion makes things even more complicated than they already are...
Do makefiles ever actually work that way? In my experience they always require you to install a bunch of packages for libraries, and usually only tell you the package names for Ubuntu, so you have to hunt down what the package is called on your distro, or whether that version of the package is even in the repos.
If you write them yourself they certainly can. For small projects, you can leave everything explicit and it works great.
Can you break down your problem into a bunch of rules of the form “To produce this file, I need to run these shell commands, which read those other files over there as input”? If so, Make can take care of figuring out which steps actually need to be run.
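A minimal sketch of that shape (the file names are made up):

    # "To produce report.csv, run this command, which reads raw.csv as input."
    report.csv: raw.csv transform.py
        python3 transform.py raw.csv > report.csv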
> The big downside of Make, alas, is Windows compatibility.
You'd have to give me a _very_ compelling reason to support developers who use Windows, when Windows lacks these essential tools. Besides, don't people who develop on Windows live in WSL?
Nope. I develop in Python, Java, and Kotlin on Windows and never touch WSL. Make is available natively through Chocolatey (a Windows package installer), but I prefer Gradle.
(I also write code to run on Linux, but still prefer Gradle.)
If you're already using a Vagrant or Docker-based development workflow, WSL doesn't really add much, and takes some things away. I/O performance, for example.
> If you're already using a Vagrant or Docker-based development workflow, WSL doesn't really add much, and takes some things away. I/O performance, for example.
I've been actively using WSL for over a year along with Docker and set up the Docker CLI in WSL to talk to the Docker for Windows daemon.
Performance in that scenario is no different than running the Docker CLI in PowerShell, or do you just mean I/O performance in general in WSL? In which case once you turn off Windows defender it's very usable. WSL v2 will also apparently make I/O performance 2-20x faster depending on what you're doing.
WSL adds a lot if you're using Docker IMO. Suddenly if you want, you can run tmux and terminal Vim along with ranger while your apps run nice and efficiently in Docker. Before you know it, you're spending almost all of your time on the command line but can still reach into the Windows cookie jar for gaming and other GUI apps that aren't available on Linux and can't be run in a Windows VM.
I find that it depends a lot on what you're doing. The real problem with WSL is I/O latency.
It's acceptable for relatively infrequent file access, but will eat you alive if you're doing anything that involves lots of random file access, or batch processing of large sets of small files, or stuff like that.
I just haven't seen that as a problem in my day to day as a developer working with Flask, Rails, Phoenix and Webpack.
That's dealing with 10k+ line projects spread across dozens of files quite often, and even transforming ~100 small JS / SCSS files through Webpack. It's all really fast even on 5 year old hardware (my source code isn't even on an SSD either).
Fast as in, Webpack CSS recompiles often take 250ms to about 1.5 seconds depending on how big the project is, and all of the web framework code reloads close to instantly on change. Hundreds of Phoenix controller tests run in 3 seconds, etc.
It isn't perfect. The IO performance is currently poor and it doesn't play well with Windows Defender (wastes a lot of CPU). Also, since your IDE would live in Windows, you can sometimes have issues with Windows and Linux both interacting with the same files.
More developers are coding in Windows than any other operating system -- almost more than Mac and Linux combined. The Hacker News filter bubble might lead us to believe otherwise.
> The big downside of Make, alas, is Windows compatibility.
Isn't the big problem that you have no idea what it's doing to your system? Also that you aren't expected to be able to undo it. You can read the makefiles, of course, but it seems simpler not to have to. (Just update the necessary packages yourself, to the latest version.)
> Isn't the big problem that you have no idea what it's doing to your system?
As opposed to what exactly? Any other alternative, e.g. separate shell scripts, "npm run" scripts in package.json, running a Docker image, hell even cmake or other make-like tools - does stuff you don't know about without reading the files either.
With Docker at least everything is contained in the container, which makes isolating and resetting environments a breeze. Something I worry about often is contaminating my system's 'state', which always leads to broken builds or incomplete build systems, because a missing dependency goes unnoticed on your system when it was installed by some other tool at some other time.
I tend to write my Makefiles to create as much of a local dev environment as possible for every project. Using Python virtualenv/Pipenv/Poetry, Ruby vendored dirs, custom Gopath per project (using direnv), etc. Most tools support some sort of isolation/localisation, but it's often just not on by default.
I wish more tools did this; I almost always want a local, self-contained environment for everything. The few times I don't actively want one, I don't see much pain in having one anyway. A couple of minutes of setup time, maybe?
I have seriously considered hiring someone to audit and prune all the random little libraries and tools I've installed over the years for that one-off time I had to process a weird file format or wanted to try something from HN.
(I was heavily downvoted). What I was thinking is: as opposed to just running your built-in package manager yourself, to upgrade your system to the latest version of all the packages it might require.
Makefiles are used for more than package management. In fact, it doesn't seem very common to use them for package management. Maybe I'm missing something?
I often find the non-linear way of working with Make an advantage, since it allows you to break a big piece of procedural shell code with lots of control flow (if X is installed, don't install it again, etc.) into small, self-contained functional pieces with clean input and output boundaries which can be run individually. It also greatly improves code reuse, as every target/recipe can be considered a function.
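For example, an "install it only if it's missing" check turns into an ordinary rule (the package and path here are just an illustration):

    # Instead of: if ! command -v shellcheck >/dev/null; then sudo apt-get install -y shellcheck; fi
    /usr/bin/shellcheck:
        sudo apt-get install -y shellcheck

    # 'make lint' can be run on its own; the install rule only fires if the binary is missing.
    lint: /usr/bin/shellcheck
        shellcheck scripts/*.sh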
> Since it allows you to break a big piece of procedural shell code with lots of control flow (if X is installed don't install again, etc) into small self contained functional pieces with clean input and output boundaries which can be run individually
At that point, I use a scripting (not shell) language, which is not as implicit.
I don't program in C anymore, so my major workflow is all in one language; I write my server in the same language that does the compilation, which is the same language that does utilities like creating network tunnels to my lab nodes.
That said, I wholeheartedly agree with the comment.
It's sad that something so central to a project and so useful and important to so many people seems like it hasn't advanced ... ever.
Developers generally do the minimum with Makefiles and get out. They are similar to 1040 forms in popularity.
I've always had a dream of a redesigned "make" system... with import statements, object-oriented rules, clear targets, rules and files clearly separated, structure and organization... sigh.
I can agree with your sentiment. I often sought alternatives to Make because some things are just missing. However, I always end up with a Makefile because Make is just so basic and ubiquitous. Any big change or alternative to Make and you will lose that. That's why I try to avoid newer Make features.
So for me Make not advancing is actually a feature. As it is one of the few things I can depend on to stay the same.
I find Nix to be a really nice alternative to Make, although it's also quite heavyweight and "invasive" (it's not just a self-contained binary like `make`), so it's more a case of "I'm already using Nix to install dependencies, why not use it for orchestration too?"
Bazel is great until you have to install TensorFlow from source into a container and find yourself wondering why you have to install and configure a JVM, temporarily, inside a container destined to be a static binary, just to get a Python program with no Java bindings installed.
As I'm not the most intelligent developer, I'm sure there's a better, more sophisticated way to do this but I got really frustrated and gave up.
I'm sure bazel is a good tool when it is used properly, but as a greenhorn in the tech field, the constant version mismatching between bazel and tensorflow can become quite the pain when you have to build tensorflow from source.
I will +1 that. I like Bazel a lot because it really forces you into a singular way of building a project which is clean and nice.
That said, mileage may vary. It was originally built by Google so it has quirks. I find it is best suited for projects with compiled dependencies and large repos.
Otherwise, I was going to add that Gradle as a build system is very advanced and improves upon make in many ways.
What I meant in this respect is that a lot of makefile rules have a lot of commonality - it would be great to inherit a workhorse rule and tweak an option instead of having to copy/paste a rule or twiddle makefile variables (which aren't normal programming language variables)
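The closest stock GNU Make gets to that today is define/call/eval templates, which are powerful but hardly elegant. A rough illustration (the names are made up):

    # A parameterised "rule template": $(1) = service name, $(2) = port
    define run-service
    .PHONY: run-$(1)
    run-$(1):
        docker run --rm -p $(2):$(2) myorg/$(1)
    endef

    # "Instantiate" the template twice instead of copy/pasting the rule
    $(eval $(call run-service,api,8000))
    $(eval $(call run-service,worker,9000))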
fabfile.py (Fabric) can be used as a Makefile in Python. If you don't ever need to ssh to other machines to run your tasks, you can use the pyinvoke library directly (tasks.py).
https://www.fabfile.org/
It is easy to add command line arguments to the tasks, configure them using files (json, yaml), environment variables, to split the task definitions into several modules/namespaces.
Having used Fabric in the past, I've always found it just as easy to use a shell script and makefiles.
There's always some level of bootstrapping a project (installing packages/software, compiling libraries and dependencies) where it's easier to just write a shell script than to write Python to do it. E.g. how do you get Fabric installed on the system in the first place?
There's also been this longevity of sorts that Make seems to have gotten right. People just keep going back to it because it's simple.
I've been moving away from using shell scripts in a tools/ directory to using Python Invoke (http://www.pyinvoke.org), which is the library underlying fabric.
I used bash scripts for years, but for a lot of reasons made the switch:
- It was always painful to create small libraries of functions used across multiple scripts in a project
- It's difficult to consistently configure them with per-user settings. I've written bash implementations of settings management; Invoke handles this for me.
- I'd still have to reach for Python whenever I needed to do anything with arrays or dicts, etc.
- Getting error handling correct can be a chore
Invoke has a lot of nice-to-haves too:
- Autogenerated help output
- Autocomplete out of the box
- Very easy to add tasks, just a Python function
- Easy to run shell code when needed
- Very powerful config management when needed
- Supports namespacing, task dependencies, running multiple tasks at once and deduplicating them
It's not perfect, but it's a lot better than my hand rolled scripts were.
Groxx's reply hits the point. I might work with a small number of platforms, but the "super-simple" qualifier is the point. The point at which you need dictionaries (associative arrays) in your install script, not to mention settings management beyond a make include, is the point at which you've outgrown make.
it's also far, far, far easier to make it work predictably on multiple platforms. and easier to understand and change later. that can get nightmarishly hard in make/bash, once you go outside the super-simple realm.
Wow, I've never seen such a bloated Python project before. It has 10 dependencies plus 2 additional optional dependencies, and the introduction/tutorial is absurdly overspecific.
The first example they use to describe the tool is:
> Cufflinks is a tool to assemble transcripts, calculate abundance and conduct a differential expression analysis on RNA-Seq data. This example shows how to create a typical Cufflinks workflow with Snakemake. It assumes that mapped RNA-Seq data for four samples 101-104 is given as bam files.
This is the epitome of non-programmer programming; colour me disappointed.
And yet it's still less crufty than the "by hackers for hackers" GNU Automake, and less over-engineered than the "made by real professional programmers at a real big tech company" Luigi. I would love to hear if you have any suggestions for actual alternatives for doing this type of automation beyond what make can neatly deal with, rather than just going "eww... it has dependencies"; "eww... it's made by bioinformaticists".
> One of the worst problems with using windows (in my opinion) is that there’s no native GNU make.
GNU Make even comes with a vcproj file for building a native binary with Visual Studio. Worked fine for me. Building it with Guile support though is difficult, but fortunately Eli Zaretskii provides native binaries through his ezwinports, and they worked pretty much flawlessly for me. Of course you will usually need a shell to execute recipes, but Make itself runs natively. For more information, see README.W32 in the sources.
WSL is currently horrendously (unusably, IMO) slow. WSL2 promises a 20x speed up, but it was already 100x slower than native Linux at some actually-realistic workloads that happen all the time when you're developing (e.g. `git grep`), so it's probably still too slow to be tolerable.
I had the opposite problem of wanting to develop some stuff for Windows from a Linux environment, and I settled on running a linux VM and copying binaries over by scping to WSL, which works reasonably well.
A nice thing with WSL is that you get working make and rsync. But I would like make for coding on native windows. Many FOSS projects use Makefile as the parent post described.
I develop on Windows, and I like make as a lazy default: you can just type it in, and as long as you maintain the makefile, it will build the thing.
It is also a nice document of what you can build and how.
I also like it because Netlify supports it, so you can get it to run make to deploy your site when you push a commit, giving you a lot of control about your CI, while keeping it simple.
I don't really know a good all-in-one manual; most things about Make I learned over years of using it from different sources. And I still sometimes discover new features (new ones are still added in recent releases, but I tend to avoid them to keep my Makefiles compatible with older systems).
Things I tend to google often, because I forget and some are used more often than others, are: automatic variables, implicit rules, conditionals and functions.
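For reference, roughly what those look like in one place (a C-flavoured toy example, just for illustration):

    # Pattern rule with automatic variables: $@ = target, $< = first prerequisite
    %.o: %.c
        $(CC) $(CFLAGS) -c $< -o $@

    # Conditional
    ifeq ($(DEBUG),1)
    CFLAGS += -g -O0
    endif

    # Functions: wildcard and substitution references
    SOURCES := $(wildcard src/*.c)
    OBJECTS := $(SOURCES:.c=.o)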
One trick that really helps make Make complete is creating your own pseudo state files and understanding the dependency system. One of the best features of Make is its dependency resolution. You generally write rules because you want a target (a file or directory) to be created based on prerequisites (dependencies) according to a recipe (shell commands). Make figures out that if the prerequisites didn't change, it doesn't need to run the recipe again and it can reuse the target, greatly saving on build time.
Because Make relies on file timestamps to do its dependency-resolving magic, if you don't have a file there is not much Make can do. So what you can do instead is create a pseudo target output yourself. For example: https://github.com/aequitas/macos-menubar-wireguard/blob/mas... Here a linter check is run which creates no output, so instead a hidden file .check is created as the target. Whenever the sources change, the target is invalidated and Make will run this recipe again, updating the timestamp of .check. Also note the prerequisites behind the pipe (order-only prerequisites): these don't count toward the timestamp checking, they only need to exist. Ideal for environment dependencies, like in this case the presence of the swiftlint executable.
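Roughly what that pattern looks like (simplified from memory; paths and source layout are made up, not copied from the linked file):

    SOURCES := $(wildcard Sources/*.swift)

    # .check is a hidden stamp file standing in for "the linter ran successfully".
    # Sources are normal prerequisites; the swiftlint binary sits behind the pipe
    # as an order-only prerequisite: it must exist, but its timestamp never
    # retriggers the rule.
    .check: $(SOURCES) | /usr/local/bin/swiftlint
        swiftlint lint --strict
        touch $@

    /usr/local/bin/swiftlint:
        brew install swiftlint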
We're doing this, and I mostly love it. I haven't found a great way to do code re-use across projects yet, and I'm not super happy with the Make function syntax (but, maybe if it needs a function, I should turn it into a shell script that itself is called by the Make command...).
All in all though, it's a fantastic place to write down long CLI commands (e.g. launching a dev Docker container with the right networking and volume configuration) that you use a lot when working on the project.
Our Jenkins pipeline also relies on the Makefiles, literally just invoking `make release`, which is also pretty awesome.
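Roughly the kind of thing I mean; none of this is from our actual Makefile, just the shape of it:

    .PHONY: dev release

    # One memorable name for a long docker invocation
    dev:
        docker run --rm -it \
            --network host \
            -v $(PWD):/app \
            -w /app \
            myorg/myapp:dev

    # The target the CI (Jenkins) calls: `make release`
    release:
        docker build -t myorg/myapp:$(VERSION) .
        docker push myorg/myapp:$(VERSION)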
When using it across multiple projects and CI you also tend to develop a kind of developer API with common commands/targets. No matter what kind of project it is, you always use the same target names to get started. No remembering which tool is used for this language: just clone it, run `make` and you're off, `make test` to test, etc.
Make does support includes (https://www.gnu.org/software/make/manual/html_node/Include.h...) which allow for some form of code reuse across projects. But then you encounter the balance between DRY and clarity. There are always exceptions, so you try to make stuff too universal, and then it's hard to grok the code.
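The basic mechanism looks like this (file names and variables are made up):

    # common.mk, shared between projects
    .PHONY: test lint

    test:
        $(TEST_CMD)

    lint:
        $(LINT_CMD)

    # Makefile in a specific project
    TEST_CMD = .venv/bin/pytest
    LINT_CMD = .venv/bin/flake8
    include common.mk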
And I feel that if I start to use functions I'm using Make wrong and that kind of logic better fits in shell scripts called from the Makefile.
Makefiles (the way I use them at least) should be simple to read and explain themselves. But it's often hard to balance this with the features Make provides, like implicit rules and automatic variables. And if I ever turn to generating Makefiles (other than for C projects, where it is kind of expected) I will probably retire.
Oh absolutely. It's fantastic for that. Our build pipeline actually relies on that; every project has a "release" target that is basically for the CI to use.
> Make includes
Yeah, I looked into that, and I think I had the same conclusion.
> scripts called from the Makefile
That's what I'm thinking is the way to level up this kind of system. Although then, why have `make init` instead of just `./bin/init` ?
The biggest reason I use Make is the dependency resolving.
In the `make init` example, it doesn't matter how many intermediate steps are involved; `init` is the end state I want to achieve. So in most of my Makefiles the `init` target will fan out into requirements as wide and deep as it needs, including running apt to install missing system dependencies. But then the good part: if a dependency is already fulfilled, Make won't have to run it again. Although sometimes it's hard or clunky to convert some dependencies into 'files' so Make can do its dependency-resolving work properly.
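For the clunky cases (like apt packages, which don't leave an obvious per-project file behind) one workaround is a stamp file; the file names here are just a convention I'm sketching, not a standard:

    .PHONY: init

    # `make init` fans out into whatever it needs; add more prerequisites as the project grows.
    init: .stamps/apt

    # Record the apt install in a stamp file so init can depend on it
    # and skip it on subsequent runs.
    .stamps/apt: apt-requirements.txt
        mkdir -p .stamps
        sudo apt-get install -y $$(cat apt-requirements.txt)
        touch $@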
I never wrote them myself but have encountered them sometimes, and I believe I had no major issues with them then. However, I would probably write a Makefile to manage the Ruby environment and install Rake, as they don't come installed by default.
Self-taught dev here too. I have never used makefiles, but I'm pretty sure I'm using Node.js in a similar role. I use it to automate all my "scripting", including deploying my SaaS product to the cloud and running unit tests.
PS: I do my primary development on Windows, but my production environment is Ubuntu. Node apps "just work" in both environments. Truly cross-platform.
I put Make in the same class as Vi. I hate using them but I have to learn them because they're the least of N evils, the most pragmatic way out of a hole.
I second this... a lot of times, broken make files are standing between you and victory, so it would be good to at least have some familiarity with them.
I love Make in concept and kind of hate it in practice. There is sooo much incidental complexity and so many warts to work around. I think it's a concept that is ripe for a new approach that thoughtfully keeps the good, ditches the bad, and maybe even adds some useful capabilities that aren't already there.
But of course I'm immediately skeptical of this idea a la https://xkcd.com/927/ (Standards). For instance, maybe this is what npm and all the rest thought they were doing. Certainly Rake in the Ruby world tried to do this, and I never really liked it, so clearly they missed the mark somehow, at least for me.

But then, when I feel discouraged about the ability to improve on things, I think about how I felt this way when I first heard about Git. Why would you implement a new source control system when we already have Subversion? Sure, svn has its frustrations and warts, but this new thing is just gonna have its own frustrations and warts, and now we'll just have another frustrating warty thing and we haven't really gained anything. And this is totally true! Git is super frustrating and warty. Except that it's also way better than Subversion, much faster and far more flexible. It was a revelation when I started using it.

So I think back to Linus when he was thinking about creating Git and figure that he probably didn't have this discouraged uncertainty about improving things; he just had ideas for a better way and he went out and did it. (And yes, I know it was influenced by BitKeeper and other DVCSs exist, so it's not like he invented the concept, but my point stands.)