Doing it more only produces better results when you're allowed to throw away the previous iterations and keep the latest one.
That's not the case in industry: we can't change everything in a product at each iteration. That's why at least a little up-front care and architecture are needed before doing it, and any throwing away must be done with care.
The list is a list of digits, which combine into a single number (that is: '(1 2 3) == #123). 123 × 4 = 492, so (times #123 #4) unsurprisingly evaluates to #492 (a.k.a. '(4 9 2)).
A bit funky if you're coming from a Lisp that actually does have numbers that evaluate to themselves, but when it hits you it hits you :)
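In a Lisp that does have native numbers, the encoding is easy to mimic; a sketch (DIGITS->NUMBER is my name, not the toy Lisp's):
(defun digits->number (digits)
  ;; '(1 2 3) => 123, matching the #123 notation above
  (reduce (lambda (acc d) (+ (* 10 acc) d)) digits :initial-value 0))
(digits->number '(1 2 3)) ; => 123
(digits->number '(4 9 2)) ; => 492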
Quicklisp takes a couple of commands to install, then it's easy. To install a library: `(ql:quickload :my-lib)`. Think of it as closer to apt than to pip/npm: you don't upgrade a single lib, you upgrade a whole QL distribution.
You don't really have to deal with ASDF directly. It's used for the project declaration, which can be generated for you (see "getting started", cl-project). ASDF also helps in creating executables; that's explained in the cookbook too.
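For reference, a system declaration is only a few lines. A sketch, with made-up names:
;; my-lib.asd -- hypothetical project declaration
(asdf:defsystem "my-lib"
  :version "0.1.0"
  :depends-on ("alexandria")          ; example dependency
  :components ((:file "package")
               (:file "main")))
Put the project under ~/quicklisp/local-projects/ and (ql:quickload :my-lib) will find, compile, and load it.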
Can you expand a little on the difficulties you faced? When I tried Quicklisp, the instructions on the homepage were easy to follow to install it, search for packages, install them, and load them. I haven't tried making a package or getting it included in the repo. Was that where the difficulty was?
Fine to set up.
But I don't understand how to reload a project after closing the REPL.
More generally, too many tools to master and understand...
And didn't find a simple tutorial to start with the ecosystem.
If you're talking about Quicklisp, to reload it after closing the REPL, you just need to run:
(load "~/quicklisp/setup.lisp")
and then you'll have the quicklisp functions available. Running:
(ql:add-to-init-file)
will add something like that to your REPL's init file, so you'll have quicklisp available from the get-go. After that, to load a package like "cl-opengl", whether it's installed or not, you just need to do:
(ql:quickload "cl-opengl")
This is in the homepage, under a section titled "Loading After Installation"[1].
> Perhaps you had problems with the concepts of Lisp packages (namespaces)?
Rather unlikely. The namespaces are pretty easy to understand conceptually, and their interface is pretty minimal. Same goes for Quicklisp, very easy to start with, practically one function call needed in most cases.
Now there's ASDF. I know, it's a thing of beauty, but it's not simple to use or to understand.
Well, it's actually that way for good reasons: the problem it solves is complex and hard. The only comparable system I know of is Erlang's release mechanism, which is also a solid tool, but definitely not the simplest one in the Erlang repertoire.
Anyway, if someone cites ASDF as a reason for struggling with Common Lisp, I find it very believable.
> > Perhaps you had problems with the concepts of Lisp packages (namespaces)?
> Rather unlikely. The namespaces are pretty easy to understand conceptually, and their interface is pretty minimal.
Packages can definitely give beginners trouble. I don't think it's that uncommon for newcomers to Lisp to have REPL sessions like this:
CL-USER> (ql:quickload :iterate)
(:ITERATE)
CL-USER> (iter (for i below 10) (collect i))
;; Error: 3 undefined functions and 2 undefined variables
Then they find out they need to use the package first:
CL-USER> (use-package :iterate)
;; Error: 3 NAME-CONFLICTs saying that the symbols ITER and friends
;; are already accessible
Then they throw their hands in the air about how frustrating Lisp tooling is. For a less obvious example, imagine they had just been using LOOP normally at some point in the past in that REPL session before trying to bring ITERATE in. Once you understand how symbols and packages work, it's easy to see why it works the way it does, but it's also easy to see how a beginner could get totally lost.
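For completeness, one way out of that particular trap, as a sketch: the failed ITER form interned symbols named ITER, FOR, COLLECT, ... into CL-USER, and those now clash with ITERATE's exported symbols, so you can unintern them first (extend the list to whatever the error names):
(dolist (name '("ITER" "FOR" "BELOW" "COLLECT"))
  (let ((sym (find-symbol name)))
    (when sym (unintern sym))))
(use-package :iterate) ; now succeeds
SHADOWING-IMPORT, or picking the right restart from the debugger, also works; the point is that none of this is discoverable until you already understand symbols and packages.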
Your processes made me think about an email-based interface instead of a bash script; this might allow people to interact easily with the database bot without knowing bash or Python.
Harder, since they don't have an open API and don't want people using non-standard clients. The clients are open source, though, so you could probably do it.
If you have -MD or -MMD in your CFLAGS, GCC (and Clang) will generate the .d files during compilation without you having to add anything else to the Makefile, and without requiring ugly sed magic.
I just tried this, and it seems wrong on two counts:
1) The sed magic is required for adding the dependencies of the .d file itself. For example, if the timestamp of your header changes, it may have acquired new dependencies, and a new .d file has to be generated BEFORE make determines if the .c file is out of date.
2) The .d file from the previous compilation can be stale: if the source has since gained new header dependencies, the old .d won't list them, so make may miss a needed rebuild.
(2) is not quite correct. The old .d file from the previous compilation is actually all you need to determine whether the .c file needs to be recompiled. It works in all cases. If the .c file is new (or you're doing a clean rebuild of the whole project), it will always be compiled, because there will be no corresponding .o. If the .c file or any of the .h files in the old .d gain new header dependencies, they must have been modified, so their timestamps will be newer than the .o file from the last build, hence the .c file will be recompiled and an up-to-date .d file will be generated (a new .d file is always produced when the .c file is compiled).
If (2) is not correct, then (1) is not needed either. The old .d files from the last compilation pass are sufficient to know which files need to be recompiled in the current compilation pass. Make does not need to know the dependencies of the .d files themselves; it just needs to load all the existing .d files at startup.
EDIT: Yep, I'm fairly confident this works :D. I don't know if whoever wrote that manual page knew about -MD, but I think it might be newer than -M, which would explain it.
The problematic case is with generated header files. Suppose foo.c includes foo.h, where foo.h is generated by a separate command. On a clean build, there's nothing telling Make that it needs to build foo.h before foo.c, so it may not happen (and worse, it may usually happen but sometimes not when doing parallel builds). A separate invocation of `gcc -MM` works for this, as when it generates the dependency information for foo.c it will see that it needs foo.h before you do the actual build.
Personally I've never found it too burdensome to just manually specify dependencies on generated files.
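Concretely, that usually amounts to one extra line per generated header. A sketch with made-up names (gen-header, foo.idl):
# foo.h comes from a code generator, which -MD can't know about on a
# clean build, so one hand-written rule closes the gap.
foo.h: foo.idl
	./gen-header foo.idl > foo.h
# Ensure foo.h exists before foo.c is compiled the first time; after
# that, the generated .d file lists it automatically.
foo.o: foo.h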
Hm yes I just figured that out the hard way -- <scratching head>.
This feels hacky, but yes, it seems to work. I'll think about it a bit more. (I might clone this feature for a build tool -- since the gcc/Clang support is already there, it seems like any serious build tool needs this. Although some have their own #include scanners, which is odd.)
I guess a simple way of explaining it is that if there are any new header dependencies, one of the files that make already knows about must have been modified to add the #include statement, so make will correctly rebuild the .c file and generate the new .d file, even though it's working on outdated dependency information.
Though, I guess I wasn't quite correct either. See plorkyeran's sibling comment re: generated header files.
> Although some have their own #include scanners which is odd.
I once worked in a place that had its own #include scanner (partially because it used so many different compilers and other tools... suffice it to say that Watcom C++ was in the mix). To make it work, you had to install a custom filesystem driver that intercepted disk reads during compilation and logged them. A rather... brute-force approach. But it had the advantage of working with everything.
This is what I do too, and it seems perfect to me for adding set-and-forget dependency awareness to an ordinary Makefile. The first build with a new .c file will work. Its .d file won't exist, but the -include directive silently ignores this because of the - prefix, and it will be built anyway, because its corresponding .o doesn't exist either. Subsequent builds will use the dependency information in the .d.
Also consider adding -MP to CFLAGS to prevent errors if you delete a .h file.
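Putting the whole thread's advice together, a minimal sketch of the pattern (file and variable names are mine; recipe lines must start with a tab):
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

# -MMD writes foo.d next to foo.o as a side effect of compiling;
# -MP adds phony targets so a deleted header doesn't break the build.
CFLAGS += -MMD -MP

prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

# The leading '-' silently skips .d files that don't exist yet
# (i.e. on the first build of each .c file).
-include $(DEPS)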
I actually thought about writing a tutorial on how to write a complete "classic" LISP implementation. I know there are already hundreds of toy LISP tutorials but having LISP 1.5 as a goal and implementing it in much the same way it was implemented originally isn't something I've seen yet.
Note: even without doing formal methods, one can benefit from such work by making one's interpreter feature-equivalent, running the same tests/apps through both, and checking that the results agree.
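A sketch of that equivalence check (RUN-REFERENCE and RUN-MINE are hypothetical entry points to the two interpreters):
(defun check-equivalence (test-forms)
  ;; Feed each form to both interpreters and report any disagreement.
  (dolist (form test-forms)
    (let ((a (run-reference form))
          (b (run-mine form)))
      (unless (equal a b)
        (format t "MISMATCH on ~S: ~S vs ~S~%" form a b)))))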
Add a dead boolean field; then each entity holding a link drops its pointer when possible by checking the dead field, and at the end the object is freed.
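If I follow, something like this (a rough sketch; all names are made up):
(defstruct entity
  (dead nil)   ; marked instead of being freed immediately
  (links '())) ; references to other entities
(defun prune-dead-links (entity)
  ;; Each holder drops its pointers to dead entities when it can;
  ;; once nothing links to the object any more, it can be freed.
  (setf (entity-links entity)
        (remove-if #'entity-dead (entity-links entity))))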
[1] https://git.zrythm.org/cgit/zrythm/tree/README.md#n22