

I like the minimalist design of the website :)


Doing more iterations only gives better results when you're allowed to throw away the previous iterations and keep the latest one.

That's not the case in industry; we can't change everything in a product at each iteration. That's why at least a little care and architecture are needed up front, and throwing work away must be done with care.


What does the times function do?


Multiply.


OK, but why does 2 * 4 produce a 9?


The list is a list of digits, which combine into a single number (that is: '(1 2 3) == #123). 123 × 4 = 492, so (times #123 #4) unsurprisingly evaluates to #492 (a.k.a. '(4 9 2)).
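
Roughly, in Common Lisp terms (just a sketch of the idea, not the actual implementation):

    (defun digits->number (ds)
      (reduce (lambda (acc d) (+ (* 10 acc) d)) ds :initial-value 0))

    (defun number->digits (n)
      (if (< n 10)
          (list n)
          (append (number->digits (floor n 10)) (list (mod n 10)))))

    (defun times (a b)
      (number->digits (* (digits->number a) (digits->number b))))

    ;; (times '(1 2 3) '(4)) => (4 9 2)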

A bit funky if you're coming from a Lisp that actually does have numbers that evaluate to themselves, but when it hits you it hits you :)


Interesting, but isn't the OpenGL setup unnecessary?


Why not?


The author is handling sprite rendering entirely on the CPU; he's only using OpenGL to put a bitmap on the window. A little bit unnecessary.


GLFW+GL provides a portable way to do this though. Otherwise you'd need to mess with platform-specific window systems.

(something like minifb would be a more minimal option though: https://github.com/emoon/minifb)


SDL is more than enough for something simple like this. You don't need to mess with OpenGL.


Yes, with SDL surfaces.


Textures would be better. Surfaces are raw pixel data in system memory, while textures are optimized for the particular hardware.
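
The usual pattern is a streaming texture that the CPU framebuffer gets uploaded into each frame. A minimal SDL2 sketch, with made-up window size and names:

    // Hypothetical minimal loop: software-rendered pixels streamed to the GPU.
    #include <SDL2/SDL.h>
    #include <stdint.h>

    enum { W = 320, H = 240 };
    static uint32_t pixels[W * H];   // the engine's CPU-side framebuffer

    int main(int argc, char **argv) {
        (void)argc; (void)argv;
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("sprites",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, W, H, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
        // A streaming texture lets the driver pick a layout suited to the GPU.
        SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, W, H);

        for (int frame = 0; frame < 600; ++frame) {
            // ... software sprite rendering into pixels[] goes here ...
            SDL_UpdateTexture(tex, NULL, pixels, W * (int)sizeof(uint32_t));
            SDL_RenderClear(ren);
            SDL_RenderCopy(ren, tex, NULL, NULL);
            SDL_RenderPresent(ren);
        }
        SDL_Quit();
        return 0;
    }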


SDL2 is pretty popular these days


One of my goals is to reuse parts of the engine for future projects where hardware acceleration might be used.


Exactly!


I started to learn Lisp, and struggled after trying to use:

- Quicklisp (the Lisp package manager)

- ASDF (the build system)

They are very complicated to use and understand.

Then I abandoned Common Lisp...


Then you have Portacle!

Emacs isn't the only editor, either: https://lispcookbook.github.io/cl-cookbook/editor-support.ht... Lem is also a ready-to-use editor (ncurses or Electron), though Emacs+SLIME is still the best experience (maybe vim?).

You can follow this guide to get started too: https://lispcookbook.github.io/cl-cookbook/getting-started.h...

And hopefully this Cookbook will help you along the way: https://lispcookbook.github.io/cl-cookbook/

Quicklisp takes a couple of commands to install, and then it's easy. To install a library: `(ql:quickload :my-lib)`. Think of it as closer to apt than to pip/npm: you don't upgrade a single library, you upgrade a whole Quicklisp distribution.
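
The whole installation, going by the Quicklisp homepage, is roughly:

    ;; after downloading quicklisp.lisp from quicklisp.org:
    (load "quicklisp.lisp")
    (quicklisp-quickstart:install)
    (ql:add-to-init-file)   ; load Quicklisp in every future REPL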

You don't really have to deal with ASDF directly. It is used to write a project declaration, which can be generated for you (see "getting started" and cl-project). ASDF also helps in creating executables; that's explained in the cookbook as well.
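
For reference, a system definition is only a few lines; a hypothetical minimal one:

    ;; my-app.asd
    (asdf:defsystem "my-app"
      :depends-on ("alexandria")      ; fetched via (ql:quickload :my-app)
      :components ((:file "main")))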


Can you expand a little on the difficulties you faced? When I tried Quicklisp, the instructions on the homepage were easy to follow for installing it, searching for packages, installing them, and loading them. I haven't tried making a package or getting it included in the repo. Was that where your difficulty was?


It was fine to set up, but I didn't understand how to reload a project after closing the REPL. More generally, there are too many tools to master and understand... And I didn't find a simple tutorial for getting started with the ecosystem.


If you're talking about Quicklisp, to reload it after closing the REPL, you just need to run:

    (load "~/quicklisp/setup.lisp")
and then you'll have the quicklisp functions available. Running:

    (ql:add-to-init-file)
will add something like that to your REPL's init file, so you'll have quicklisp available from the get-go. After that, to load a package like "cl-opengl", whether it's installed or not, you just need to do:

    (ql:quickload "cl-opengl")
This is on the homepage, under a section titled "Loading After Installation"[1].

[1] https://www.quicklisp.org/beta/#loading


>Quicklisp

Um... basically the only command you need to learn is ql:quickload followed by the name of the system you want to load.

Perhaps you had problems with the concepts of Lisp packages (namespaces)?


> Perhaps you had problems with the concepts of Lisp packages (namespaces)?

Rather unlikely. The namespaces are pretty easy to understand conceptually, and their interface is pretty minimal. Same goes for Quicklisp, very easy to start with, practically one function call needed in most cases.

Now there's ASDF. I know, it's a thing of beauty, but it's not simple to use or to understand.

Well, it's actually that way for good reasons: the problem it solves is complex and hard. The only comparable system I know of is Erlang's release mechanism, which is also a solid tool, but definitely not the simplest one in Erlang's repertoire.

Anyway, if someone cites ASDF as a reason for struggling with Common Lisp, I find it very believable.


> > Perhaps you had problems with the concepts of Lisp packages (namespaces)?

> Rather unlikely. The namespaces are pretty easy to understand conceptually, and their interface is pretty minimal.

Packages can definitely give beginners trouble. I don't think it's that uncommon for newcomers to Lisp to have REPL sessions like this:

    CL-USER> (ql:quickload :iterate)
    (:ITERATE)
    CL-USER> (iter (for i below 10) (collect i))
    ;; Error: 3 undefined functions and 2 undefined variables
Then they find out they need to use the package first:

    CL-USER> (use-package :iterate)
    ;; Error: 3 NAME-CONFLICTs saying that the symbols ITER and friends
    ;; are already accessible
Then they throw their hands in the air about how frustrating Lisp tooling is. For a less obvious example, imagine they had just been using LOOP normally at some point in the past in that REPL session before trying to bring ITERATE in. Once you understand how symbols and packages work, it's easy to see why it works the way it does, but it's also easy to see how a beginner could get totally lost.
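
Once it clicks, the fixes are simple enough, e.g. (a sketch; the exact conflicting symbols come from the error message):

    ;; Option 1: skip USE-PACKAGE and qualify the entry point
    ;; (ITERATE matches clause keywords like FOR/COLLECT by name):
    CL-USER> (iterate:iter (for i below 10) (collect i))
    (0 1 2 3 4 5 6 7 8 9)

    ;; Option 2: shadowing-import the conflicting symbols, then use the package:
    CL-USER> (shadowing-import '(iterate:iter iterate:for iterate:collect))
    T
    CL-USER> (use-package :iterate)
    T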


Same, except I went to Clojure (and some toy Lisps for fun, like Hy) instead.


Your process made me think of an email-based interface instead of a bash script; that might make it easy to interact with the database bot without knowing bash or Python.


We plan to use a Telegram interface for many things (status checks, new invoices, etc.). It's easier and faster than email and available everywhere!


Telegram (Messenger)?


Yes. Communicating with Telegram from bash is simple. Check out https://www.curry-software.com/en/blog/telegram_unit_fail/ for an example.
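
The core of it is a single curl call against the Bot API; a sketch (token and chat id are placeholders):

    #!/bin/sh
    TOKEN="123456:ABC-your-bot-token"   # issued by @BotFather
    CHAT_ID="987654321"

    curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
         -d chat_id="${CHAT_ID}" \
         -d text="unit failed on $(hostname)"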


I'm wondering if it is also possible with Signal (signal.org).


Harder, since they don't have an open API and don't want people using non-standard clients. The clients are open source, though, so you could probably do it.


Simply add:

  -include $(OBJS:%.o=%.d)
to your Makefile (with -MMD in CFLAGS).


Don't you need a pattern rule for %.d? GNU Make doesn't seem to come with this rule.

There is a profoundly ugly example here:

https://www.gnu.org/software/make/manual/html_node/Automatic...

    %.d: %.c
        @set -e; rm -f $@; \
         $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \
         sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \
         rm -f $@.$$$$
I copied it into a running example here for anyone curious:

https://github.com/oilshell/blog-code/tree/master/dotd


If you have -MD or -MMD in your CFLAGS, GCC (and Clang) will generate the .d files during compilation without you having to add anything else to the Makefile, and without requiring ugly sed magic.


I just tried this, and it seems wrong on two counts:

1) The sed magic is required for adding the dependencies of the .d file itself. For example, if the timestamp of your header changes, it may have acquired new dependencies, and a new .d file has to be generated BEFORE make determines if the .c file is out of date.

See:

https://www.gnu.org/software/make/manual/html_node/Automatic...

The purpose of the sed command is to translate (for example):

    main.o : main.c defs.h
into:

    main.o main.d : main.c defs.h
The first line isn't correct because it leaves the .d file itself with no dependencies, so it would never be regenerated.

2) By the time you are compiling, it's too late to generate .d. The .d files are there to determine IF you need to compile.

EDIT: I am trying to generate a test case that shows this fails, but it seems to actually work.

Hm yes I'm convinced it works, but I have to think about why. I guess one way of saying it is that the previous .d file is always correct. Hm.


(2) is not quite correct. The old .d file from the previous compilation is actually all you need to determine whether the .c file needs to be recompiled. It works in all cases. If the .c file is new (or you're doing a clean rebuild of the whole project), it will always be compiled, because there will be no corresponding .o. If the .c file, or any of the .h files in the old .d, gain new header dependencies, they must have been modified, so their timestamps will be newer than the .o file from the last build; hence the .c file will be recompiled and a new up-to-date .d file will be generated (because a new .d file is always generated whenever the .c file is compiled).

If (2) is not correct, then (1) is not needed either. The old .d files from the last compilation pass are sufficient to know which files need to be recompiled in the current compilation pass. Make does not need to know the dependencies of the .d files themselves, it just needs to load all the existing .d files at startup.

EDIT: Yep, I'm fairly confident this works :D. I don't know if whoever wrote that manual page knew about -MD, but I think it might be newer than -M, which would explain it.


The problematic case is with generated header files. Suppose foo.c includes foo.h, where foo.h is generated by a separate command. On a clean build, there's nothing telling Make that it needs to build foo.h before foo.c, so it may not happen (and worse, it may usually happen but sometimes not when doing parallel builds). A separate invocation of `gcc -MM` works for this, as when it generates the dependency information for foo.c it will see that it needs foo.h before you do the actual build.

Personally I've never found it too burdensome to just manually specify dependencies on generated files.
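
That is, something like (hypothetical names):

    # make must be told about the generated header explicitly, since on
    # a clean build no .d file mentions it yet
    foo.h: foo.idl
    	./gen-header foo.idl > foo.h

    foo.o: foo.h   # the manually specified dependency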


Wouldn't the header need to be explicitly listed as a dependency to prompt its generation anyway?


Hm yes I just figured that out the hard way -- <scratching head>.

This feels hacky, but yes, it seems to work. I'll think about it a bit more. (I might clone this feature for a build tool -- since the gcc/Clang support is already there, it seems like any serious build tool needs this. Although some have their own #include scanners, which is odd.)

Thanks for the information!


I guess a simple way of explaining it is that if there are any new header dependencies, one of the files that make already knows about must have been modified to add the #include statement, so make will correctly rebuild the .c file and generate the new .d file, even though it's working on outdated dependency information.

Though, I guess I wasn't quite correct either. See plorkyeran's sibling comment re: generated header files.


> Although some have their own #include scanners which is odd.

I once worked in a place that had its own #include scanner (partially because it used so many different compilers and other tools... suffice it to say that Watcom C++ was in the mix). To make it work, you had to install a custom filesystem driver that intercepted disk reads during compilation and logged them. A rather... bruteforce approach. But it had the advantage of working with everything.


There's a -MT flag which does what the sed line does. From the gcc man page:

  Change the target of the rule emitted by
  dependency generation... An -MT option will
  set the target to be exactly the string you
  specify...
So in your example one might do something like

  -MT "$@ $(basename $@).d"
which would output

  main.o main.d : main.c defs.h
for the main.o target.


This is what I do too, and it seems perfect to me for adding set-and-forget dependency awareness to an ordinary Makefile. The first build with a new .c file will work. Its .d file won't exist, but the -include directive silently ignores this because of the - prefix, and it will be built anyway, because its corresponding .o doesn't exist either. Subsequent builds will use the dependency information in the .d.

Also consider adding -MP to CFLAGS to prevent errors if you delete a .h file.
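
Putting the whole pattern together, a minimal sketch (file names are hypothetical):

    SRCS   := main.c util.c
    OBJS   := $(SRCS:%.c=%.o)
    CFLAGS += -MMD -MP        # .d files appear as a side effect of compiling

    app: $(OBJS)
    	$(CC) $(LDFLAGS) -o $@ $^    # recipe line starts with a tab

    -include $(OBJS:%.o=%.d)  # silently skipped while the .d files don't exist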


'include' being one of those features that isn't in basic POSIX make, though.


The language is very small; it might be interesting to write an interpreter.


The LISP 1.5 manual would help in such a task: http://www.softwarepreservation.org/projects/LISP/book/LISP%...


It's fun to write an interpreter for a Lisp in Common Lisp. The book Lisp in Small Pieces goes through this.


I actually thought about writing a tutorial on how to write a complete "classic" LISP implementation. I know there are already hundreds of toy LISP tutorials but having LISP 1.5 as a goal and implementing it in much the same way it was implemented originally isn't something I've seen yet.


Magnus Myreen leveraged that property when mathematically verifying one down to machine code. Used LISP 1.5. Built a bigger language (CakeML) on that.

http://www.cl.cam.ac.uk/~mom22/tphols09-lisp.pdf

Note: Even if not doing formal methods, one can benefit from such work by making their interpreter equivalent in features, running the same tests/apps through both, and checking for their equivalence.


Doing this (for Scheme, but same concept) was the start of my programming languages class.


Or even a very dumb compiler.


Smart thinking. See my other comment. ;)


Thanks, already printed it out.


Oh, nice move. I'd do that, but I'd take out a rainforest with my collection. ;)


Are C++ shared pointers the solution?

Add a dead boolean flag; then every entity holding a link drops its pointer when it notices the dead flag, and once the last reference is gone the object is freed.
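
Something like this sketch, using std::weak_ptr so links don't keep the object alive (a made-up minimal example):

    #include <iostream>
    #include <memory>

    struct Entity {
        bool dead = false;
        std::weak_ptr<Entity> target;   // non-owning link to another entity
    };

    int main() {
        auto a = std::make_shared<Entity>();
        auto b = std::make_shared<Entity>();
        a->target = b;

        b->dead = true;   // mark it...
        b.reset();        // ...and the owner releases it: freed right here

        if (auto t = a->target.lock())
            std::cout << "target alive\n";
        else
            std::cout << "target gone\n";   // this prints; no dangling pointer
        return 0;
    }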


GC is the no-sweat solution :)

But it's considered too slow (or too unpredictable) for games.


I think it's quite easy to build a GC-like system for a particular subsystem of the engine.

The only nuisance is the lack of reflection in C++, as usual.
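
If the subsystem manages a single concrete type, though, no reflection is needed; a toy mark-and-sweep sketch:

    #include <vector>

    struct Node {
        std::vector<Node*> refs;   // outgoing links, known statically
        bool marked = false;
    };

    struct NodeHeap {
        std::vector<Node*> all;

        Node* alloc() { all.push_back(new Node); return all.back(); }

        void mark(Node* n) {
            if (!n || n->marked) return;
            n->marked = true;
            for (Node* r : n->refs) mark(r);
        }

        // Frees everything not reachable from the roots.
        void collect(const std::vector<Node*>& roots) {
            for (Node* n : all) n->marked = false;
            for (Node* r : roots) mark(r);
            std::vector<Node*> live;
            for (Node* n : all) {
                if (n->marked) live.push_back(n);
                else delete n;
            }
            all = std::move(live);
        }
    };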


Yet most games are written in GC'd languages (Flash, Java on Android, ObjC+ARC on iOS, JavaScript, C# on Unity).


Do you mean the games that push your hardware to the edge, like DOOM and Quake I-III at the time?

