Hacker News | gyepi's comments

> The radial arm saw doesn't get enough love, probably because it has space planning requirements that table saws can bypass on account of their portability.

No, the radial arm saw does not get enough love because it is dangerous when used for anything but cross cuts and straight angled cuts, both of which are easily handled by a miter saw, which is safer and more versatile.

I used one for years and was happy when I finally gave it away. A sliding miter saw and a table saw are a safer combination, for sure.

Wholly agree about the dust collection. You can actually get far with a shop vac if you add a cyclone-type pre-filter bin to it.


This is analogous to saying: "I can tell you Lisp is no good because I used it for a couple of years and was glad to stop." The radial arm saw doesn't get enough love for two reasons: poorly made department-store versions that don't keep their settings, and ill-informed people who never learned how to maintain and use one properly. The original DeWalt saws and the Northfield saws are amazing machine tools: beautifully made, safe when used properly, tremendously powerful and versatile. owwm.org and the Mr. Sawdust book are great resources.


When I was in college, both SICP and Knuth's books were on the recommended reading lists. I didn't buy the former until years later, but did buy Knuth's books (individually and at a time when I could ill afford them) and read them through. They are hard and it was very slow going. There's still a lot I don't understand. But I learned a huge amount, continue to do so and would absolutely recommend them to anyone with enough interest in the field. Similarly, when I finally bought and read SICP, I wondered why I hadn't read it sooner. I still read both books, along with many other "hard" books and enjoy them. I don't think I would be the programmer I am today if I had not read those books.

I disagree with the author. I certainly understand that reading "hard" books takes a lot of effort that may not seem worthwhile, but would not say they are overrated. However, like anything else, they aren't for everyone. Just those who are ready for them.


Some of these requirements should be built into any build tool. However, most can be added easily enough:

For instance, redux [https://github.com/gyepisam/redux] is written in Go (not compiled for binary distribution, but I could add that), is cross-platform, supports any mix of languages and tasks, and is very easy to learn.

It uses shell scripts to create targets so everything is scriptable. Stuff like recognizing standard folder hierarchies and auto-discovery can be added with small scripts or tools. It can be as simple as you want or as complex as you need.
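As a sketch of what that auto-discovery might look like (the `@all.do` name and flat layout here are just assumptions, and redo must be installed for the last line to run), a do script can simply glob for sources and hand the derived targets to redo-ifchange:

```shell
# @all.do -- hypothetical sketch: discover every .c file in the
# directory and ask redo to build the matching .o targets, with
# no hand-maintained file list
targets=$(ls *.c | sed 's/\.c$/.o/')
redo-ifchange $targets
```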


1. Since make has builtin suffix rules, the Makefile could be simplified to:

    CXX=g++

    hello: main.o factorial.o hello.o

    clean:
        rm -rf *o hello
2. Shameless plug: he didn't mention redo [1], which is simpler than make and more reliable. The redo scripts comparable to the Makefile would be:

    cat <<'EOF' > @all.do
    redo hello
    EOF

    cat <<'EOF' > hello.do
    o='main.o factorial.o hello.o'
    redo-ifchange $o
    g++ $o -o $3
    EOF

    cat <<'EOF' > default.o.do
    redo-ifchange $2.cpp
    g++ -c $2.cpp -o $3
    EOF

    cat <<'EOF' > @clean.do
    rm -rf *o hello
    EOF
[Edit: Note that these are heredoc examples showing how to create the do scripts.]

These are just shell scripts and can be extended as much as necessary. For instance, one can create a dependency on the compiler flags with these changes:

    cat <<EOF | install -m 0755 /dev/stdin cc
    #!/bin/sh
    g++ -c "\$@"
    EOF

    # sed -i 's/^\(redo-ifchange.\+\)/\1 cc/' *.do
    # sed -i 's}g++ -c}./cc}' *.do
sed calls could be combined; separated here for readability.

[1] https://github.com/gyepisam/redux


CXX=g++ isn't necessary either; make already knows about $(CXX) and how to link C++ programs. Also, I think you wanted .o, not o.

And compared to that Makefile, the redo scripts you list don't seem simpler at all. I've seen reasonably compelling arguments for redo, but that wasn't one.


> CXX=g++ isn't necessary either; make already knows about $(CXX) and how to link C++ programs.

You're right, of course.

> Also, I think you wanted .o, not o.

I would, yes, but I copied the Makefile ;)

Should have been clearer; I meant that redo is simpler (and more reliable) than make.

For simple projects, redo scripts are a bit longer. However, as the projects grow, the redo scripts reach an asymptote whereas Makefiles don't. The only way to reduce the growth in make is to add functions and implicit rules which get ugly real fast.


redo is pretty cool, but I ran into trouble with apenwarr's implementation (https://github.com/apenwarr/redo, see https://groups.google.com/d/msg/redo-list/GL5z8eEqT90/tk_vLZ...) with OS X Mavericks. I have no experience with the alternative implementation at https://github.com/gyepisam/redux, since it came out after I reimplemented the build system in question with CMake.

In general, I found CMake quite usable for my needs, and quite clean. It also required less build system code than redo. CMake fits quite nicely into a (C or C++) project which consists of many binaries and libraries which can depend on each other.


redo might be simpler and more reliable, but shell isn't. And redo encourages even more work to be done in shell. Additionally, the redo version is more verbose and harder to read. While fancier tasks will make make's version look horrible relatively quickly, they won't make redo's version look any better.


> redo might be simpler and more reliable, but shell isn't.

Not quite sure what you mean here. The scripts don't do anything complicated and redo catches errors that could occur.

As for readability, etc, I suppose it's relative. Simple makefiles do read very nicely. Unfortunately, they aren't always simple and hairy makefiles are just horrible to write, read and maintain. I've had no such problems with do scripts.


To this day I still don't understand redo (I'm just staring at it, and don't get anything) - haven't really read the internals.

With make it was easier for me to grasp the idea (or maybe I was simply 20 years younger then).


It's actually quite simple. You write a short shell script to produce the output you need and redo handles the dependencies.

For example, the shell script named "banana.x.do" is expected to produce the content for the file named "banana.x".

When you say

    # redo banana.x
redo invokes banana.x.do with the command:

    sh -x banana.x.do banana.x banana XXX > ZZZ
so banana.x.do is invoked with three arguments and its output is redirected to a file.

   $1 denotes the target file
   $2 denotes the target file without its extension
   $3 is a temp file: XXX, in this case.
banana.x.do is expected to either produce output in $3 or write to stdout, but not both. If there are no failures, redo will choose the correct one, rename the output to banana.x and update the dependency database.

If banana.x depends on grape.y, you add the line

    redo-ifchange grape.y
to banana.x.do, creating a dependency. redo will rebuild grape.y (recursively) when necessary.

The only other commands I haven't mentioned are init and redo-ifcreate, which are obvious and rarely used, respectively.

That's it.
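A complete do script in that style might look like this (grape.y's content and the sed transform are invented for illustration, and the redo-ifchange line assumes redo is installed):

```shell
# banana.x.do -- rebuild grape.y first if its own sources changed,
# then write the content of banana.x to stdout; redo captures the
# output and renames it to banana.x on success
redo-ifchange grape.y
sed 's/grape/banana/' grape.y
```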


I think the big difference between redo and make is that make requires knowledge of dependencies up front, and this is sometimes tricky to get right.

"as you can see in default.o.do, you can declare a dependency after building the program. In C, you get your best dependency information by trying to actually build, since that's how you find out which headers you need. redo is based on the following simple insight: you don't actually care what the dependencies are before you build the target; if the target doesn't exist, you obviously need to build it. Then, the build script itself can provide the dependency information however it wants; unlike in make, you don't need a special dependency syntax at all. You can even declare some of your dependencies after building, which makes C-style autodependencies much simpler."

https://github.com/apenwarr/redo
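That insight is what makes C-style autodependencies so short in redo. A sketch of a default.o.do along those lines (assuming gcc's -MD/-MF flags and a redo installation; the .d parsing is simplified to reading a single line):

```shell
# default.o.do -- compile $2.c into the temp file $3, asking gcc to
# record the headers it actually read into $2.d, then declare those
# discovered files as dependencies *after* the build
redo-ifchange "$2.c"
gcc -MD -MF "$2.d" -c "$2.c" -o "$3"
# $2.d looks like "foo.o: foo.c foo.h ..."; strip the target part
read DEPS < "$2.d"
redo-ifchange ${DEPS#*:}
```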


Sorry, but that doesn't appear simpler to me...


I switched to djbdns 14 years ago and have never looked back. I've installed bind a few times for clients who really wanted to stick with it and it seems to have improved but I'm still happy using a set of small, task specific tools that don't require any maintenance once installed.


Cache poisoning seems to be a problem.


As of today, there is no build system which is 1) simple and easily comprehensible, 2) reliable and rock stable and 3) still able to build complex applications with all their (seemingly) quirky needs.

See https://github.com/gyepisam/redux

It's an implementation of DJB's redo concept.

I wrote it because I needed a build tool with exactly the features you describe and none of the alternatives had them.

I encourage you to try it out and provide feedback if necessary.


Redo solves some problems with reliability, which is good [1], but regarding complexity reduction and simplicity it seems to be no better than a plain Makefile. [2]

Also, I don't think it is a good idea to implement this in Go, because you are limiting your userbase to those willing to install Go and willing to compile and build your stuff. From another perspective: Redo is not a tough task, so why not use a ubiquitous language such as Perl or Python? That way, it would run out of the box on almost every machine. Heck, you could even implement it in portable shell script with acceptable effort. If you ever want to establish a new build system, the entry barrier should be as low as possible.

[1] But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve. So the practical advantage is rather limited.

[2] Nothing wrong with plain Makefiles, though, I use that approach successfully for many small projects.


> regarding complexity reduction and simplicity it seems to be no better than a plain Makefile.

Makefiles work great most of the time, but become difficult when you need to do things that don't fit well with the make model. I do a lot of multi-level code generation, for instance, and make requires a lot of incantations to get right, whereas redo works exactly the same way regardless of the complexity. I used make for many, many years and got very good at it before I decided to implement something new.
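For instance, a generated source is just another target: a hypothetical parser.c.do (the awk script and token table are invented for illustration, and redo must be installed) keeps the same shape no matter how many generation levels sit above it:

```shell
# parser.c.do -- hypothetical sketch: regenerate a C source from a
# data table; redo treats the generated file exactly like any other
# target and rebuilds it when the table or generator changes
redo-ifchange tokens.txt gen.awk
awk -f gen.awk tokens.txt
```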

> Also, I don't think it is a good idea to implement this in Go...

I chose Go because I would not have enjoyed it as much in C. I did not use Perl or Python because redo is used recursively and their startup times were too slow. I actually wrote a shell implementation that served me well for a while, but it was too slow.

Likely, there are those who won't use it because it's Go and that's fine. I've solved my problem and made the solution available to anyone else to whom it might be useful.

>But honestly, redo doesn't address any issue that "make clean; make" wouldn't solve.

Fixing make's reliability issue with "make clean; make" is like rebooting Windows when it hangs. Yeah, you can do that, but it doesn't actually solve the underlying problem. With redo, you don't need to do that.


The redo-inspired build tool I wrote abstracts the task of composing a build system by replacing the idea of writing a build-description file with command-line primitives that customize production rules from a library. So cleanly compiling a C file into an executable looks something like this:

Find and delete standard list of files which credo generates, and derived objects which are targets of *.do scripts:

> cre/rm std

Customize a library template shell script to become the file hello.do, which defines what to do to make hello from hello.c:

> cre/libdo (c cc c '') hello

Run the current build graph to create hello:

> cre/do hello

Obviously this particular translation is already baked into make, so isn't anything new, but the approach of pulling templated transitions from a library by name scales well to very custom transitions created by one person or team and consumed at build-construction-time by another.

Also see other test cases at https://github.com/catenate/credo/tree/master/test/1/credo and a library of transitions at https://github.com/catenate/credo/tree/master/lib/do/sh-infe...

I think this approach reduces the complexity of the build system by separating the definition of the file translations from the construction of a custom build system. These primitives abstract constructing the dependency graph and production rules, so I think it's also simpler to use. Driving the build system construction from the shell also enables all the variability in that build system that you want without generating build-description files, which I think is new, and also simpler to use than current build-tool approaches. Whether all-DSL (eg make), document-driven (eg ant), or embedded DSL (eg scons), build tools usually force you to write or generate complicated build description files which do not scale well.

Credo is also inspired by redo, but runs in Inferno, which is even more infrequently used than Go (and developed by some of the same people). I used Inferno because I work in it daily, and wanted to take advantage of some features of the OS that Linux and bash don't have. Just today I ran into a potential user who was turned off by the Inferno requirement, so I'll probably have to port it to Linux/bash, and lose some of those features (eg, /env), to validate its usability in a context other than my own.

EDIT: Replaced old way, to call script to find and delete standard derived objects, with newer command.


apenwarr also did an implementation of redo.

https://github.com/apenwarr/redo

It might be interesting to see if the two of you interpreted DJB's documentation in the same ways.


There's not much in the way of DJB's documentation other than a conceptual sketch, so there's much room for interpretation.

There are many differences between the two implementations, some quite fundamental. redux uses SHA1 checksums instead of timestamps. Timestamps cause all sorts of problems, as we know from make, and apenwarr's redo has all sorts of extra utilities to ameliorate them. redux has the minimum functionality needed for the task (init, redo, redo-ifchange, redo-ifcreate) and I don't think it needs any others.
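The timestamp problem is easy to demonstrate: touch(1) updates a file's mtime without changing its content, so a timestamp-based tool rebuilds while a checksum-based one does not (a minimal sketch, assuming coreutils sha1sum):

```shell
# a touched file has a new mtime but an identical checksum, so a
# checksum-based tool sees no reason to rebuild its dependents
f=$(mktemp)
echo 'int main(void){return 0;}' > "$f"
before=$(sha1sum < "$f")
touch "$f"                      # mtime changes, content does not
after=$(sha1sum < "$f")
[ "$before" = "$after" ] && echo 'checksum unchanged'
```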

Redux passes most of the apenwarr tests. The few it does not pass are apenwarr-specific. I've not tried the converse; might be interesting.

I actually have a task list to add an apenwarr compatibility mode to redux so users can switch easily. At that point, it should pass all the apenwarr tests.

Of course, redux is quite a bit faster too.


Nerdsniping reminds me of the Walt Whitman poem:

   There was a child went forth every day;
   And the first object he look’d upon, that object he became;
   And that object became part of him for the day, or a certain part of the day, or for many years, or stretching cycles of years.
It's especially bad if you'd really rather be doing something other than what you're currently doing.


ouch!


Don't forget Thoreau's quote that he could learn from everyone. I think the problem is thinking of people in terms of rankings. Some are better at some things than others, but that should not translate into being better than others. It's a hard distinction to make because the latter is shorter and more convenient to grasp.


> I think the problem is thinking of people in terms of rankings.

Yes, that's exactly what I meant.


Not knowing you, I can only suggest questions to ask yourself.

What yardstick are you using to determine your level of suckage? Is it true? How do you know? If it is true, do you care? If you care, in what way do you care? Does it matter? If it matters, is it important? How so? Hopefully, the books you have read will help with the answers. You may not be able to answer all of these questions (and honestly, the answers don't matter too much), but they might serve to orient you.

I have read a fair number of the books on the list (and many more not on the list) and can truly say that I am a better human being for doing so, according to my criteria.

