When I see amazingly compact language implementations like this, the effect on me is not 'I want to use this language' but rather 'I should also write my own LISP'.
I wrote a Lisp a while back, and later added an LSP server for it.
Since Lisp is so simple in terms of syntax, what I really did was tab-completion and info-on-hover for the built-in functions like "car" and "cdr", and for the primitives I added as part of a more complex standard library.
In my case I was writing in Go, and I found an LSP-server package that was trivial to use, so the integration with emacs, vim, etc., came essentially for free:
Unless you want half your C logic to be error propagation, it’ll have to be <setjmp.h>, all its flaws notwithstanding. Manual unwinding can be done tastefully (see Tcl), but in the single-digit-KLOC weight class it’s not really a choice.
Mostly a dig at the title. Jokes aside, I love how concise this is. Not sure how I feel about readability, too many inline if {} blocks for my liking, and a bit arcane. But I'm never not gonna upvote novel languages. Something about language creation delights me.
If this project interests you but is a bit more minimal than you need, the Janet language is a slightly-less but still pretty lightweight embeddable Lisp with a strong library and community: https://janet-lang.org/
Has anyone here switched from Common Lisp or Scheme/Racket to Janet? If so, what was good/bad about the switch? Or was it just different?
I recently started learning Common Lisp, and it’s been mostly great, but my two complaints are 1) memory usage and 2) deployment (copy source or distribute a relatively huge world image—and I haven’t found any way to get a fully static build without patches, even with ECL).
Janet could solve both of those, although I’d pay a performance penalty due to it being interpreted, and I assume there are a lot fewer libraries/tools. I really like the idea of a Lua-sized Lisp, but I’m not sure if I should jump ship from a traditional Lisp yet.
You’re right, it is a feature, but I’d like the option to distribute a smaller binary without everything built in. Roswell supposedly helps with this, but I haven’t gotten it to work yet. Some commercial implementations support this, I think, but they’re priced well beyond my hobby project budget.
Janet is intriguing to me because it has a small footprint (even smaller than most Schemes). I’ve heard it’s handy for building little utilities, especially if you want someone else to be able to run them (a situation I find myself in regularly).
To provide some extra context for other readers, and mostly to respond to the GP:
Janet does not do any stripping for distribution -- Janet images compiled to native executables still contain the full compiler/stdlib/runtime/everything in under 1 MB.
Absolutely, but it’s not even embedding that I had in mind. I want to be able to produce fully statically linked binaries, because in my experience that is the only truly portable binary solution on Linux. It’s a trade off for sure, but it’s something I’ve found useful for running on e.g. Raspberry Pi.
For SBCL, the two main approaches I've seen are: a) use a patched version of SBCL that is statically linked, or b) build against an old glibc and hope for the best (which sadly doesn't work on my musl libc-based distribution). I guess the third approach is to distribute source and compile on the device; this is what I'm actually doing, but it seems a little silly to recompile every time, even if it's the same architecture.
And keep in mind this is just my wishlist of functionality. It’s not necessary and I don’t expect it, but it would be really convenient.
The easiest solution I found was building SBCL targeting an old glibc, using zig as the C compiler:
CC="zig cc -target x86_64-linux-gnu.2.28" LD="zig cc -target x86_64-linux-gnu.2.28"
zig bundles in all the glibc headers, so I think it should build on a musl libc-based distro.
For more complicated programs with native dependencies there is deploy (https://github.com/Shinmera/deploy). Alternatively, Guix has a lot of CL libraries already, and it's fairly easy to write packages for others; you can specify all dependencies with Guix and use guix pack to get a .tar with all dependencies that you can unpack and run on any other Linux box.
For a Lisp not to have threading macros, it would need not to have macros, period. Writing these is so trivially simple you can ask ChatGPT to do it and it would get it right on the first try.
On a related note: it would be an interesting read if someone tried tracing the threading macros to some origin point(s) in the past. There seem to be two main variations: Lisp's ->/->> and the |> found in many functional languages (Elixir, OCaml, Raku). The construct seems to have been popularized by Clojure and F#, but I don't think it was invented in either (maybe rediscovered).
It had to start at some point, somewhere, but I wasn't able to locate that point.
> - Portable ANSI C (Windows, Linux, DOS — 32 and 64bit)
I skimmed through the source, and aside from reading a file from STDIN to a `static char buf[64000];`, nothing in this seems to use the POSIX API. With that buffer trimmed to an appropriate length, it appears it could run on a microcontroller, which is always a useful thing to have.
I once wrote a tiny DSL. I used LISP syntax (S-expressions with parentheses) because it semantically fit and made the parser much easier to write. I was laughed at and ridiculed by my peers. Everyone hated it so much that I had to attempt two other language styles just to compensate.
I had a chuckle seeing fe syntax is LISP like and I was like “See? I’m not the only one.” Many just can’t give LISP and S-Expressions their due regard. Same for Reverse Polish Notation. I was born on the wrong planet.
John McCarthy had the opposite experience: he originally planned to replace Lisp's S-expressions with a more ALGOL-like syntax called M-expressions, but other people liked S-expressions and he never got around to implementing M-expressions.
I love RPN for calculating - I still regularly use a HP11C (Swiss Micros replica).
But I hate coding or reading prefix (Lisp) or postfix (RPN, Forth, etc.) languages. Once wrote a substantial project in PostScript - works fine but not easy on the brain.
Guess it comes from just too much familiarity with infix notation!
I personally find prefix and RP notations pretty easy to read, but writing is harder for me. I've introduced bugs into FORTH programs because I wrote a + b instead of a b + out of habit. I think both notations have their place, and not everyone will be able to make the switch.
I don't think it's worth making a FORTH unless you plan to build an optimizing compiler better than what iForth, gForth, or VFX have done, open source and maintain it, or exclusively target microcontrollers. FORTH is a great "language" but the ecosystem is tiny and the tooling is pretty minimal.
I thought part of the reason people use forth is because they don't need or want the dependencies and complexity of a big ecosystem or lots of tooling.
Maybe a little bit offtopic, but rxi also made a few Lua / LÖVE 2D libraries that I've been able to put to good use, namely shash [0], lume [1], json.lua [2] and classic [3].
When you have methods named `cons` `car` and `cdr` that appear to work in the same way as lisp's methods of the same name it's pretty clear that you're not pretending or attempting to be "novel".
While it _should_ say it's a Lisp/Scheme, given those names it doesn't really _need_ to, because it's pretty forehead-slappingly obvious to anyone who knows what Lisp or Scheme are.
Even if you don't mention Lisp, people will assume you were trying to make it a Lisp and complain that you didn't follow Lisp precedent. Examples can be found in this very comment section.
This is larger, because there are functions for accessing peripherals; the core is more standard-lispy, with 'caadr' et al., and it has a compacting GC, so images can be saved as a compact blob.
I read (= a b) as unification and slowly pieced together that it was probably assignment. Not equality, since the results of all the expressions starting with = were discarded.
Because it's a 50-year old convention, and there doesn't seem to be a compelling reason for ignoring it. The language then uses `is` for equality, including numbers.
It's just weird, especially given they're using old-fashioned car/cdr instead of head/tail.
> Slavery was widely confirmed for even longer. Tradition, right?
This is an egregious use of the noncentral fallacy.[1]
Conforming to this 50-year tradition makes the language more familiar and easier to learn, so it's reasonable to question why the language chose differently.
And yet, Clojure, the most used Lisp today, decided to use brackets instead of parentheses for parameter lists, breaking with 50 years of Lisp tradition. How dare they??
When Lisp first appeared, square brackets were not even standard characters. Maybe McCarthy would use them if he could.
The word "set" is much older, and is universally supported.
But most glaringly, (= a b) has a well-understood and entirely different meaning in most Lisps, also going back to the 1950s: it's comparison, not assignment. Breaking a well-established convention is a very different thing from making a new, slightly improved one.
My original post includes "there doesn't seem to be a compelling reason for ignoring it". I'm not arguing for tradition for tradition's sake.
I don't think using = instead of define/defun/defn has a compelling reason. Clojure's use of [], {}, #(), etc., is compelling to me. I'm not a Lisp purist who thinks everything should be parentheses. Adding that syntax is helpful for the reader.
Innovation is welcome when it's beneficial. = for assignment isn't.
Using [] to style function formal parameters is completely pointless. It literally serves no purpose. It just makes it gratuitously different from other languages (not only Lisps). In most mainstream languages, both formal parameters and actual parameters use parentheses.
There would be a sense to it if square brackets shifted into some alternative semantics: (fn (...) ...) versus (fn [...] ...) doing something usefully different.
If you must use square brackets there, they are just a syntactic quirk that doesn't enable any new semantics.
Everyone who uses Clojure, and even people who don't, but know Lisp, understand the point: since parentheses are used in Lisp to delineate everything, it makes sense to use a different pair of symbols to make parameter lists stand out more.
Maybe saying it's "completely pointless. It literally serves no purpose" is a bit overly dramatic, don't you think?
You could have limited yourself to saying you don't personally like it because it breaks with tradition.
This kind of gatekeeping from ANSI Common Lisp purists is tiring. Who cares about convention when one just wants to build their own language, use S-exprs and car/cdr for the sake of it.
Here is as good of a place as any to ask: does anyone know of an embeddable scripting language pretty much exactly like Lua but with more C-like syntax?
EDIT: the best alternatives I'd previously found were rhai (still niche and weak library) and sandboxed JS (heavier than I would like and the standard library is still poor).
Maybe Wren? It has classes and method syntax too but it's mostly C like: curly-braced blocks, operators are pretty much the same, slash and slash-asterisk comments, if/while/for with parens, ... Statements are not terminated by semicolons, though.
Could macros be used to add structs? How much code would this require?
I don't feel like a language is real until you can associate data. It's literally the first thing I do when I start any software project that's not a five-line script.
Lisp is great; I'm currently learning Janet and playing with Fennel and Clojure.
But a new 'lispish' should not replicate ancient historical 'accidents', IMHO, and in fact lisp itself should deprecate these names (and caar, caadr, caaaddr ... OMG).
They are a pair rather than a linked list; you could have a list in the first slot and a bare item in the tail. People just usually see them as lists. But they are binary trees.
Left and right might make sense. But then you get into endianness.
John McCarthy knew that not every computer is an IBM 704, and that users want higher-level languages developed on one system to be usable on another.
He stuck with those names because there is no problem with them. They are hard to improve upon: proposed replacements tend to be only subjectively better while being objectively worse. For instance, first and rest are subjectively better for some uses and for some people, while being objectively worse (wastefully longer for something commonly needed, and falsely suggesting lists when the objects aren't used as linear lists). Thus at best we can adopt them as synonyms (and we have).
They were inspired by FLPL ("Fortran List Processing Language"): a Fortran library for list manipulation. FLPL, for mysterious reasons, used pretty horrible names like XCARF and XCDRF. McCarthy must have been thinking about good naming, because he thankfully dropped the X...F.
I think if they were called something else, I'd still want them to be three-letter identifiers (not just one or two, to avoid clashes) which differ only in the middle letter, so a pattern similar to cadr, cddr ... is possible.
In quantum mechanics there is a <a|b> notation where a is the "bra" and b is the "ket". That doesn't have the right property for cadr, and would be the target of bra and fig-leaf jokes.
In TAOCP, I seem to recall, Knuth calls the fields of binary list cells alink and blink. Not sure if he wants us to pronounce that as blink (of an eye) or bee link. The names are a bit long, but aaabalink is possible. If shortened to alk and blk, that would work.
If both middle letters were vowels, that would be more conducive to the words being pronounceable. That restricts us to a, i, u, e, o, and y. The car using a high vowel is nice, so let's go with i.
How about: dit and dot? The dot is in the ... dot position!
One problem is that cdar becomes doit, which looks like "do it".
"FLPL, for mysterious reasons, used pretty horrible names like XCARF and XCDRF"
It's not mysterious at all; these were also named after the 704 register scheme.
Your argument that McCarthy et al. couldn't think of accurate names may be valid (naming is one of the hardest things, after all), but there are certainly 'better' names, so more likely these just stuck after the initial implementations.
"so a pattern similar to cadr, cddr ... is possible."
Honestly, these are abominations. Remember the point of high level languages etc is not just so you can write code as fast as possible, but that someone else (maybe your future self) can read and understand it easily.
By that remark, I'm referring to the X...F dirt in XCARF, not to the CAR infix that we all understand. Why the authors of FLPL saddled the identifiers with that is puzzling. (Extract the content of the address part of the register function? Why?)
Regarding that "R" in car: though it stands for "register", it refers to a register in memory; Content of the Address part of the Register cell.
A piece of memory can be called a register today (e.g. virtual machines have registers that are actually in memory, and some processors have had memory mapped registers like zero page on the 6502). "Content" continues to apply. If someone doesn't like "register", they can pretend it stands for "record". CAR: content of the A part of the record; CDR: content of the D part of the record. The only weird thing are the choices of A and D; the "address" and "data" interpretations don't make sense. But they are just one letter out of three.
Why does it have to be English?
A = antérieur
D = derrière
In English we use Latin-derived prefixes where A and D form opposite pairs, like: Ascend/Descend; Approve/Disapprove; Adduce/Deduce .. Or cases where we have different roots: Arrive/Depart. Maybe some interpretation can be found.
In Japanese, car and cdr as the verbs "karu" and "kudaru" work. There are several verbs spelled "karu": 刈る (cut, mow, clip, ...), 借る (borrow) and some others like 狩る (hunt: animals, but also refers to picking and gathering berries and such). Kudaru is 下る, to descend. We can borrow/clip/pick the item at this cell (karu) or descend down (kudaru) to the next cell.
Not everyone who codes is a native speaker, yet most languages use English-derived words. Someone to whom "if" and "while" are foreign words may not care about some English speakers nitpicking over car and cdr.
> can read and understand it easily.
If you accept (car (cdr (cdr x))), then (caddr x) is a nice abbreviation for it, related by a transparent naming scheme. The source code is smaller, and if the compiler doesn't automatically reduce (car (cdr (cdr x))) to caddr, you get better code density.
The name doesn't work in a filter pipeline (-> x caddr ...) because the letters are now backwards w.r.t. pipe direction, since they indicate right to left application.
A good way for your future self to understand everything is not to keep changing names every year. The future self of some person coding in 1968 understands (if still alive) code I just wrote in a different Lisp dialect in 2023, because I used (cadr x). It may not be familiar to Lisp outsiders, but it's an utterly unmoving target for insiders.
Imagine you were tasked with implementing a linked-list in, let's say 'C', but anything really.
So you choose 'cell' as your name for a node. OK, that's .. fine.
Now you choose 'car' and 'cdr' for each cell's 'data pointer' and 'next-cell-in-list pointer'. How many 'WTF?s'* do you think you'd get in code review? (except from lispers, I guess)
* obviously it's a rhetorical question, no-one writes or says WTF (out loud) in actual code reviews I hope.
> Now you choose 'car' and 'cdr' for each cell's 'data pointer' and 'next-cell-in-list pointer'. How many 'WTF?s'* do you think you'd get in code review?
You are tasked with writing a binary tree, and you pick car and cdr ... A key-value tuple-space... Complex numbers...
The point is they are 'wrong' in every context. For LISt Processing, they should probably default to something to do with lists.
And caaar et al. are a fine example of why I completely skipped Common Lisp in favour of newer Lisp-likes. Perhaps it happens to all languages eventually: people add a convenience here, a shortcut there, and a doodad there, and before long you have an impenetrable thicket.
And these things seem convenient, but they are a huge extra cognitive load, and impose a further learning cost even to practicing experts who have to keep up with the latest doodads.
Who cares what HN is written in? It could be written in anything. No, I do not buy the Blub programmer argument, either. Yes, maybe literal geniuses are attracted by exotic languages, but genius alone makes for poor business decisions.
The problem is with not being upfront about it: the implication that this is something new, that may solve some new problem in some new way.
Now, it MAY do something other lisp clones don't, but if it does there's zero indication of it in the readme.
"new" when talking about a repository with 15 commits, the last one being 3 years ago.
It is obviously a toy experiment, and it took me 5 seconds to realize that.
While it does not say "this is a lisp", it also does not say "this is a new shiny thing that will change the world". In fact, it says nothing except "look at what i did".