As of today, there is no build system that is 1) simple and easily comprehensible, 2) reliable and rock solid, and 3) still able to build complex applications with all their (seemingly) quirky needs.
Redo solves some problems with reliability, which is good [1], but regarding complexity reduction and simplicity it seems to be no better than a plain Makefile. [2]
Also, I don't think it is a good idea to implement this in Go, because you are limiting your user base to those willing to install Go and to compile and build your code. From another perspective: redo is not a tough task, so why not use a ubiquitous language such as Perl or Python? That way, it would run out of the box on almost every machine. Heck, you could even implement it in a portable shell script with acceptable effort. If you ever want to establish a new build system, the entry barrier should be as low as possible.
[1] But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve. So the practical advantage is rather limited.
[2] Nothing wrong with plain Makefiles, though; I use that approach successfully for many small projects.
> regarding complexity reduction and simplicity it seems to be no better than a plain Makefile.
Makefiles work great most of the time, but become difficult when you need to do things that don't fit well with the make model. I do a lot of multi-level code generation, for instance, and make requires a lot of incantations to get right, whereas redo works exactly the same way regardless of the complexity. I used make for many, many years and got very good at using it before I decided to implement something new.
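To give a rough idea of what I mean, here is a generic redo-style sketch (not code from my tool; the VERSION file and the file names are made up for illustration). A generated source is just another target with its own do script, so a chain of generation steps needs no special treatment:

    # version.c.do -- version.c is itself generated, from a VERSION file
    redo-ifchange VERSION
    printf 'const char *version = "%s";\n' "$(cat VERSION)" > "$3"

    # hello.do -- builds hello from hello.c plus the generated version.c
    redo-ifchange hello.c version.c
    cc -o "$3" hello.c version.c

Make can express this too, of course, but each extra level of generation tends to need more pattern rules and bookkeeping, while the redo version keeps the same shape.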
> Also, I don't think it is a good idea to implement this in Go...
I chose Go because I would not have enjoyed it as much in C. I did not use Perl or Python because redo is invoked recursively and their startup times were too slow. I actually wrote a shell implementation that served me well for a while, but it was also too slow.
There are likely those who won't use it because it's written in Go, and that's fine. I've solved my problem and made the solution available to anyone else to whom it might be useful.
> But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve.
Fixing make's reliability issue with "make clean; make" is like rebooting Windows when it hangs.
Yeah, you can do that, but it doesn't actually solve the underlying problem. With redo, you don't need to do that.
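The reason you don't need it is that a do script declares its real inputs and redo records them, so when any of them changes, exactly the affected targets get rebuilt. A generic sketch, assuming the compiler flags live in an ordinary file named CFLAGS (a made-up convention for illustration, not something redo mandates):

    # hello.o.do -- the flags file is a tracked dependency like any other
    redo-ifchange hello.c CFLAGS
    cc $(cat CFLAGS) -c -o "$3" hello.c

Edit CFLAGS and hello.o is rebuilt on the next redo. With make, a changed flag is usually not a dependency at all, and that kind of missed dependency is exactly what "make clean; make" papers over.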
The redo-inspired build tool I wrote abstracts the task of composing a build system by replacing the idea of writing a build-description file with command-line primitives that customize production rules from a library. So cleanly compiling a C file into an executable looks something like this:
Find and delete the standard list of files that credo generates, along with derived objects that are targets of *.do scripts:
> cre/rm std
Customize a library template shell script to become the file hello.do, which defines what to do to make hello from hello.c:
> cre/libdo (c cc c '') hello
Run the current build graph to create hello:
> cre/do hello
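For the curious, in generic redo terms the hello.do that the template expands to boils down to something like this sketch; credo's actual script is written in Inferno's shell, so the real text reads differently:

    # hello.do -- generic sketch of the customized production rule
    redo-ifchange hello.c
    cc -o "$3" hello.c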
Obviously this particular translation is already baked into make, so it isn't anything new, but the approach of pulling templated transitions from a library by name scales well to very custom transitions created by one person or team and consumed at build-construction time by another.
I think this approach reduces the complexity of the build system by separating the definition of the file translations from the construction of a custom build system. The primitives abstract away constructing the dependency graph and production rules, so I think it's also simpler to use. Driving the build-system construction from the shell also enables all the variability you want in that build system without generating build-description files, which I think is new, and also simpler than current build-tool approaches. Whether all-DSL (e.g. make), document-driven (e.g. ant), or embedded-DSL (e.g. scons), build tools usually force you to write or generate complicated build-description files which do not scale well.
Credo is also inspired by redo, but runs in Inferno, which is even more infrequently used than Go (and was developed by some of the same people). I used Inferno because I work in it daily and wanted to take advantage of some features of the OS that Linux and bash don't have. Just today I ran into a potential user who was turned off by the Inferno requirement, so I'll probably have to port it to Linux/bash, and lose some of those features (e.g. /env), to validate its usability in a context other than my own.
EDIT: Replaced the old way, which called a script to find and delete standard derived objects, with the newer command.
There's not much of DJB's documentation beyond a conceptual sketch, so there's much room for interpretation.
There are many differences between the two implementations, some quite fundamental. redux uses sha1 checksums instead of timestamps; timestamps cause all sorts of problems, as we know from make.
apenwarr redo has all sorts of extra utilities to ameliorate the problems.
redux has the minimum functionality needed for the task (init, redo, redo-ifchange, redo-ifcreate), and I don't think it needs any others.
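To illustrate the checksum point: conceptually, a dependency is considered changed when its recorded content hash no longer matches, regardless of mtime. This is a sketch only; redux does this internally, in Go, and the metadata path below is made up:

    # Conceptual sketch, not redux code: content-based change detection.
    stored=$(cat .redo/hello.c.sha1 2>/dev/null)    # previously recorded hash (hypothetical location)
    current=$(sha1sum hello.c | awk '{print $1}')   # hash of the file as it is now
    if [ "$stored" != "$current" ]; then
        echo "hello.c changed: its dependents need rebuilding"
    fi

Touching a file without changing it, or checking it out with a fresh timestamp, doesn't trigger a rebuild; that's where timestamp-based make goes wrong in both directions.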
Redux passes most apenwarr tests. The few it does not pass are apenwarr-specific. I've not tried the converse; it might be interesting.
I actually have a task on my list to add an apenwarr compatibility mode to redux so users can switch easily. At that point, it should pass all the apenwarr tests.
See https://github.com/gyepisam/redux
It's an implementation of DJB's redo concept.
I wrote it because I needed a build tool with exactly the features you describe and none of the alternatives had them.
I encourage you to try it out and provide feedback if necessary.