The article doesn't discuss macros, which is one of the answers to "Why Lisp?"
I didn't "get" macros until I read a footnote in the (freely available) book Practical Common Lisp. In chapter 7, it introduces the `dolist` macro.
DOLIST loops across the items of a list, executing the loop
body with a variable holding the successive items of the list.
This is the basic skeleton (leaving out some of the more
esoteric options):
(dolist (var list-form)
  body-form*)
...For instance:
CL-USER> (dolist (x '(1 2 3))
           (print x))
1
2
3
NIL
Buried in a footnote is this:
"DOLIST is similar to Perl's `foreach` or Python's `for`. Java added a similar kind of loop construct with the 'enhanced' for loop in Java 1.5, as part of JSR-201.
Notice what a difference macros make. A Lisp programmer who notices a common pattern in their code can write a macro to give themselves a source-level abstraction of that pattern. A Java programmer who notices the same pattern has to convince Sun that this particular abstraction is worth adding to the language. Then Sun has to publish a JSR and convene an industry-wide "expert group" to hash everything out. That process--according to Sun--takes an average of 18 months. After that, the compiler writers all have to go upgrade their compilers to support the new feature. And even once the Java programmer's favorite compiler supports the new version of Java, they probably still can't use the new feature until they're allowed to break source compatibility with older versions of Java.
So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years."
Could someone please explain the difference between Lisp macros and, say, languages that have first-class functions? I get that a Lisp macro will be expanded into the respective code, while a function's execution is different. However, at the practical (i.e., developer's) level, are there any additional benefits?
Can, say, a Lisp macro be 'partially formed', in the sense that it can expand into some boilerplate that represents an incomplete syntax tree? (Whereas a higher-order function is necessarily complete.) I can see that being useful, but not as much as it's made out to be.
My favorite example for this is the lame idiom you see in Java code:
if (log.isDebugEnabled()) {
    log.debug("expensive" + debug + message);
}
This is "better" than just log.debug(...) because with the latter, your expensive log message argument needs to be evaluated even if debug is disabled.
However, in a language w/ macros, you just say:
(debug (str "expensive" debug message))
and these considerations are already taken care of for you:
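A minimal sketch of how such a macro could be written in Common Lisp, assuming a hypothetical debug-enabled-p predicate and log-debug function:

    ;; The message form is spliced in unevaluated, so the expensive
    ;; string construction only runs when the check passes.
    (defmacro debug (message-form)
      `(when (debug-enabled-p)
         (log-debug ,message-form)))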
The point was that the "debug" and "message" parts in the example were arbitrarily complex expressions which were computationally expensive to evaluate. Your wrapping of log.debug would still require evaluating them to get the message string unconditionally, even when debug logging is not enabled.
Of course, with support for first-class functions, you could do something like:
func debug(produceMessage func() string) {
    if log.isDebugEnabled() {
        log.debug(produceMessage())
    }
}
The downside there is you can still end up allocating a closure. Whereas an expanded macro shouldn't cost anything. (A sufficiently smart compiler might be able to optimize the closure allocation.)
A sufficiently modern language (like, saaaaay, D2) could also give you a type like "closure you don't intend to escape" (let's call this a "scoped closure", or if you will, "scope string delegate()"), and eschew allocation entirely without requiring optimization.
For completeness of the argument, this particular problem is solved in the Java world with string formats. With the slf4j interface, that would be:
log.debug("expensive {} {}", debug, message)
The message is not actually formatted into one string unless the DEBUG trace level is enabled. Of course, you are still passing the arguments around, but with object references that's a negligible difference.
I still appreciate the solid example of a problem macros are good at solving, though. Two ways around one problem.
It's not solved, because method arguments are evaluated eagerly. That means the message argument must not be an expensive expression, because it will be evaluated regardless of whether debug is enabled or not. With macros, the arguments of this method could be evaluated lazily.
Calling log.debug() with an argument of `false` is a no-op? That sounds like someone bending the language to fit an idiom, because it doesn't sound like a sane API except that it enables this use case.
It is insane in an eager-evaluated calling context. You're not wrong. But in Lisp it's not insane at all to let a macro consume a parameter and yield a no-op. Think of how this plays with a JIT and having code that can dynamically switch between dev/QA/production behavior and performance profiles, just for one example.
That does something similar, sure, but what it in fact is, is a macro. As snikeris was saying, you couldn't write anything quite like that #define in a macroless language like Java.
One difference is flow control. When you call a function, all the arguments are evaluated before being passed into the function. If you want to delay evaluation, you have to wrap the argument values in a function. When you call a macro, the text forms get passed with no evaluation.
Say Clojure forgot to ship with the boolean "or". "or" should evaluate its arguments one at a time (to allow for short circuiting) and return the first non-false value. You could do this with functions, but you have the source-level overhead of manually wrapping everything, and performance overhead of defining and passing an anonymous function for each argument. With a macro, at compile time you just translate the "or" macro into a simpler form that uses "if". This is how Clojure actually implements "or":
(defmacro or
  "Evaluates exprs one at a time, from left to right. If a form
  returns a logical true value, or returns that value and doesn't
  evaluate any of the other expressions, otherwise it returns the
  value of the last expression. (or) returns nil."
  {:added "1.0"}
  ([] nil)
  ([x] x)
  ([x & next]
   `(let [or# ~x]
      (if or# or# (or ~@next)))))
You should have a look at Kernel, vau-calculus and F-expressions. Kernel merges the notions of first-class functions and macros by having lexically scoped definitions of fexprs, which take their arguments by name (unevaluated) and also receive the call-site environment, so that you can eval an argument in it if needed.
I find it a very elegant way of having everything clean, "lambda" does not even have to be a primitive anymore.
Look at the uses for Template Haskell, for example.
That's basically Haskell's macro system. And it's used quite a lot, though it's kind of arcane.
If you want to create an abstraction that defines one or several data types, you'll think hard about whether you can use some kind of type-level programming instead -- but if that's not possible, or not convenient, you can use TH macros.
For example, the `lens` package defines TH macros for creating special kinds of accessors that are tedious to write by hand.
Which things cannot be expressed without macros? Macros run at compile time, and just output normal code in the language.
I think being able to generate types is a very useful and important use of macros. In fact, (depending on whether or not your macros can have side effects) you could use macros to implement something like F#'s type providers.
The majority of macros I write could be represented with HOF and lexically-closed lambdas. That adds significant extra syntax when you use them though.
A minor advantage is that a macro will be expanded in-line; a Sufficiently Smart Compiler could transform the HOF version into something equivalent, so it's not strictly an advantage (except to compiler implementers I suppose).
I think that e.g. generalized references (the Common Lisp setf macro, for instance) are not possible with HOF, but HOF is more common in pure languages where assignment is eschewed anyway.
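For reference, setf already works on "places" rather than just variables:

    (defvar *v* (vector 1 2 3))
    (setf (aref *v* 0) 99)          ; assign to an array element
    (defvar *h* (make-hash-table))
    (setf (gethash :key *h*) 42)    ; assign to a hash-table entry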
Mark Jason Dominus (author of Higher Order Perl) had some good comments on lisp macros as well:
One thing to note is that you're using the macro or lambda to delay evaluation. In a lazy-by-default language, that's unnecessary (which is a part of why macros are less useful in Haskell).
There's a class of things that don't require macros in Haskell, but I don't think that means macros are less useful in Haskell. There are plenty of things you might want them for, like generating new definitions.
Macros can provide a lot of syntactic convenience over those first-class functions, especially with heavily nested structures. For example, I can replace this monadic parser definition...
This requirement arose in Haskell with Parsec, before the notation became a part of the syntax, and again later with the Arrow library, whose notation was also eventually added to the syntax. Whenever new abstractions are discovered/created, a macroing facility, whether for Lisp-like syntax or some other, helps make all the nested functions more readable.
A higher order function doesn't serve the same purpose as a macro. A higher order function is meant to be applied, called, composed etc. A Lisp macro is a different type of abstraction. For example, many people think that macros are just hiding the lambdas of higher order functions. This is wrong. A macro abstracts over implementation details of a construct to make it read naturally. For example, you can write a function that opens and then closes a file like so...
with-open-file(filename, lambda file: do stuff with file)
but with a lisp macro, you only have to write
with-open-file (filename):
do stuff with file
The point of an abstraction is so you don't have to think about the implementation. Written the latter way, with-open-file is simply more natural. You only have to think, "oh, it's a construct that opens a file then closes it after the body is done", rather than "oh, it's a higher order function that I have to pass another function into, which takes the file as an argument..." etc.
When you write with-open-file as a macro, it could be implemented as a higher order function, or it could be implemented as a low level set of GOTO statements. It doesn't matter. The macro abstracts away the low level detail, just providing the most natural way for you to use the construct.
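For instance, here is a minimal sketch of such a macro in Common Lisp (the real with-open-file is more elaborate):

    (defmacro my-with-open-file ((var filename &rest open-args) &body body)
      `(let ((,var (open ,filename ,@open-args)))
         (unwind-protect
              (progn ,@body)
           (close ,var))))

Whether the expansion uses unwind-protect, GOTOs, or something else entirely is invisible to the caller.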
It might not seem like the macro is doing much in that particular example, but a construct that defines a class (like defclass) is something you can write as a macro, which can expand into functions that do the actual defining. You could write a class defining construct as a higher order function, but then you'd have to constantly worry about how your construct was implemented as a function, instead of just writing something natural like
defclass tiger (animal):
  age init-value: 0
  name type: string
which you could write if you implemented defclass as a macro.
Otherwise, you'd have to do something crazy stupid like...

How could a higher order function possibly implement that without making the people using it tear their hair out? They have to bend to the implementation, instead of making the abstraction bend to what's natural.
Your point seems to be that macros allow for a slightly more natural syntax for certain things, but I can do pretty much the same thing in a language with a natural HOF syntax (Ruby):
with_open_file filename do |f|
  # do stuff with file
end
And for your second example:
# our 'macro' function
def defclass(name, parent, &blk)
  k = Class.new(parent)
  k.instance_eval(&blk)
  Kernel.const_set(name, k)
end

defclass :Tiger, Animal do
  attr_accessor :age, :name
end
Yes, macros allow for a superset of what you can reasonably accomplish with higher order functions, but I haven't yet seen a simple practical example of where the added power is useful. I'm sure it's nice to have, but there's something to be said about a language which generally gets you 95% of the way there using a simple set of built-in operations.
Sure, Ruby has nice syntax for lambdas, but the entire point is not even having to worry about what goes into blocks and whatnot. Syntax is a bad excuse for abstraction: a deceptively short (or "natural") syntax can easily sweep ugly semantics under the rug. I know no one would ever write your defclass HOF anyway, because of the extreme runtime overhead it causes. And if you wanted to reimplement your defclass macro more efficiently, I don't see how you could do so without breaking all of the user code relying on that function. The point of a macro is that the implementation detail is hidden, and the interface doesn't have to change when you change the implementation.

Lisp has higher order functions too; hell, that's where Ruby got them from. Higher order functions have their uses, but they are not for abstracting patterns in code like macros are. They are for abstracting procedures over arguments received at runtime. The difference is that macro expansion happens lexically, at compile time, while functions are part of the program at runtime.
If you haven't seen a practical example of what it's useful for, that's akin to the attitude of a C programmer not understanding the usefulness of higher order functions: "I get 95 percent of the way there with good old functions and function pointers." We would find that absurd, just as I find absurd the claim that there's no simple practical example of where the added power of a macro is useful.
For example, see 'A unit testing framework' in Practical Common Lisp by Peter Seibel, available online. I can't possibly imagine how you'd be able to create, with higher order functions in Ruby, a unit testing framework abstraction as nice or efficient as the one presented there in 26 lines of code. But it'd be cool if anyone could prove otherwise.
Notice that in your Ruby code you have to use quoted symbols like :Tiger and :age and :name, because you cannot extend Ruby's syntax with your own. Ruby has good metaprogramming facilities, but it's no substitute for a real macro system.
Semantically, I'm writing in Common Lisp. If you mean the syntax, that's an ad hoc thing that comes up when writing in an HTML text box without Emacs at hand. But the syntax is shallow and unimportant compared to the abstraction presented. (It's just normal Lisp syntax with implied parentheses anyway, so it's a bit ambiguous.)
Well, typing parens in an HTML textbox can be pretty tedious, so I understand the desire for pseudocode.
Dylan has an interesting hygienic macro system that's similar to Scheme's. I don't think Dylan allows for arbitrary code-generating macro procedures, but it's possible in principle; see [1].
It's true, as others have explained, that for many of the most common uses of macros, you can get the same effect with a higher-order function.
But since macros operate at the syntax level, they can do things functions can't do. For example, they can generate and manipulate declarations. Say you're working with abstract syntax trees (assembly trees in a CAD app might be another example). These trees are built from nodes, where each node is of some class corresponding to a syntactic construct: if-statement, addition-expression, etc. etc. There's some functionality you want to have on every node class; a common example is a "children" method that gathers up all the node's child slots into a set and returns it. It is very convenient to have a 'define-node-class' macro that automatically generates the 'children' method, so that when you add a child slot, the method is updated automatically; there's no need for manual effort to keep them in sync.
In this case, the macro is expanding to multiple top-level declarations: the class declaration along with the method declaration (probably, in practice, several methods). Higher-order functions don't begin to let you do stuff like this.
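A hedged sketch of what such a macro could look like in Common Lisp (slot options simplified; the names define-node-class and children come from the description above):

    (defmacro define-node-class (name supers child-slots)
      `(progn
         (defclass ,name ,supers
           ,(mapcar (lambda (slot)
                      `(,slot :initarg ,(intern (symbol-name slot) :keyword)
                              :accessor ,slot))
                    child-slots))
         (defmethod children ((node ,name))
           (list ,@(mapcar (lambda (slot) `(,slot node)) child-slots)))))

    ;; Adding a slot to the list updates children automatically:
    (define-node-class addition () (left right))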
Because most node classes have their own slots. Consider this example:
class Node { ... }

class Expression extends Node { ... }

class Addition extends Expression {
    Expression left;
    Expression right;

    Set<Node> children() {
        return Set.of(left, right);   // a set containing 'left' and 'right'
    }
}
There's no way to write a single method 'children' on Node that will work for all its subclasses, because the method on Node can't access the subclasses' slots -- unless you use reflection, which is ugly and slow.
The sheer number of answers you've got should tell you that indeed there is something about those macros. But you need to 'get' them yourself to appreciate them.
Each answer exposes one or more facets of macros. Indeed, there is not just 'one' thing that make them worthwhile.
Ah, but that is not the case in some languages in which arguments are evaluated lazily. Usually the iron-man version of this question is: "if I have a non-strictly evaluated language with higher order functions, do I still need macros?"
A part of the answer is: you probably don't need the kinds of macros which cover up machine-generated lambdas, which simulate non-strict evaluation in strictly evaluated Lisp programs! The compiler for your language already has these "macros" in its compiler.
The argument why "you always need macros no matter what else you have" is that your language has "hard-coded macros": the grammar rules in a compiler, which match patterns and transform bits and pieces to produce AST fragments. If you don't have macros, then that set of "hard-coded macros" is all you have.
So, even in a functional language with nonstrict evaluation, you're using macros. It's hard to make a convincing argument that they provide all the expressivity you would conceivably ever need. (And their existence and use defeats any argument that you don't need macros at all).
Which definition of "macro" are you using, if you equate lazy evaluation with a kind of "hardcoded macro"?
More importantly, "it's hard to make a convincing argument that [these hardcoded macros] provide all the expressivity you would conceivably ever need" doesn't convince me. A better argument would be to produce a compelling example where these "macros" are not enough. And by compelling, I mean something that cannot be elegantly produced in a non-Lisp language.
> Which definition of "macro" are you using, if you equate lazy evaluation with a kind of "hardcoded macro"?
The definition of "macro" that is equivalent to "phrase structure rule in your functional language's compiler": something which takes the input structure and generates whatever code brings about the lazy semantics (which are not inherent in the x86 instruction set or what have you).
> example where these "macros" are not enough
An example is any instance of language extension where, say, the maintainers of the compiler for a functional language have to ship a new compiler to the users to get them to use the new feature.
Functional languages with lazy evaluation are not finished, right? They are developed actively.
> An example is any instance of language extension where, say, the maintainers of the compiler for a functional language have to ship a new compiler to the users to get them to use the new feature.
> Functional languages with lazy evaluation are not finished, right? They are developed actively.
Agreed, they are actively developed.
Correct me if I'm wrong, but you seem to be saying "many interesting features can be implemented in a Lisp language with a macro, therefore Lisp programmers don't need to wait for a new release of their programming language when they want these features".
I'm not convinced this is the case, or rather, that this is such a relevant case. Aren't Lisps actively developed too? Why is there such a multitude of Lisp implementations? Is any relevant real-world feature truly implementable with Lisp macros? Why is that more convenient than implementing them with functions in languages with lazy evaluation?
Maybe I'm falling prey to the Blub paradox. I'm only passingly familiar with Racket, thanks to a course on Coursera, where they introduce macros and why they are so powerful in Lisp. But I still don't see the compelling "killer example"...
> Correct me if I'm wrong, but you seem to be saying "many interesting features can be implemented in a Lisp language with a macro, therefore Lisp programmers don't need to wait for a new release of their programming language when they want these features".
That is correct. However, macros have to translate to something. From time to time you need an upgraded something for some macros to be feasible.
E.g. it's hard to "macro your way" into having first-class continuations, if they aren't in the substrate.
That is to say, without the macro treating all of its arguments as a self-contained language.
You usually want the code which is in the macro forms to smoothly interoperate with outside code, such as make lexical references to surrounding bindings.
This is the fuzzy limit. Macros have to write stuff in some language, which is no longer macro-expandable. That language has to be reasonably expressive in its semantics for what the macros want to do.
So you think of a feature you'd like in your language. Let's consider the process you'd have to go through to use that feature.
In most languages, you have to write the maintainers about the feature. You then have to convince them that the idea is good -- and this is by no means guaranteed; if they think your idea is bad, you're out of luck, and can never use the feature you'd like. Then someone has to implement it. Then you have to wait for a release containing the feature. Then you have to test your code with the new version. Then you have to upgrade all your deploys, development environments, and testing machines to the new version. Then you can use the new feature.
In a Lisp with macros, you have to implement the feature. Then you can use it.
This is why macros are useful. You get to modify the language you're using, while cutting the language's maintainers out of the loop entirely.
Yes, I understand that argument, but I still find it unconvincing.
Some features you just can't implement with macros, you need a change in the "substrate" (see kazinator's answer below). For the rest, I simply don't see how they are language-level features. They are just things you need for your project, in which case, why can't you simply implement them as a library?
Even if you ignore the above, there's probably a good reason why the language designers don't want to approve your language-level feature. Yes, sometimes it's simply red tape or politics, but it can also be that you -- the applications programmer -- simply aren't well-versed in language design and can't think past your particular use case :) This wouldn't mean the feature is worthless (after all, you need it!) but maybe it's not meant to be a language-level feature, but instead... a library function, which you can write in most general purpose languages.
But this was only added in 2004!^2 So for almost 10 years, you had to manually iterate over stuff. How would you implement this as a library? Well, you could write a function that lets you write:
Would this work in a lambda? Well, if the language you're using has real closures, yes -- but does it? Do you know offhand? With a macro, your code will work.
> Even if you ignore the above, there's probably a good reason why the language designers don't want to approve your language-level feature. ... it can also be that you -- the applications programmer -- simply aren't well-versed in language design and can't think past your particular use case :) This wouldn't mean the feature is worthless (after all, you need it!)
That seems like evidence for my point -- macros let you build the language up for your own use case, not anyone else's. Without macros, your choices are "either everyone can use it, or no one can use it". Macros let you have a choice of "well, I can use it, even if no one else wants it, if I find it useful."
Macros can be viewed as libraries that act on the language itself. There isn't a difference between language-level features and "library functions" in a language with macros.
[1] Without getting into a Turing Tarpit. We're talking about using things in easy ways, not what is technically possible but ugly and kludgy.
Agreed about your Java example. However, for the purpose of this discussion, let's assume we're talking about modern, well-designed languages without ugly kludges and with access to nice features such as lazy evaluation and real closures.
> Macros can be viewed as libraries that act on the language itself. There isn't a difference between language-level features and "library functions" in a language with macros.
I simply don't see why this is such a big deal. I need to see a real-world example (which, understandably, might be difficult to explain in a HN thread) of something that can be achieved with Lisp that is not reasonably achievable in elegant ways in other, non-Lisp modern languages. Again, let's assume we both understand the Turing Tarpit.
By the way, I don't want to sound dense. I understand some features you only "get" when you use them. What little I've seen of Lisp (Racket, actually) seemed very interesting! It's just that I can't get that enlightened moment where I see why Lisp macros are that important in the real world. This is important to me because macros are one of the key features Lispers use to try to convince other programmers Lisp is awesome. And I can see they are interesting and useful; I just fail to see why they are such a big deal that they set Lisp apart and that, for example, Paul Graham would call Lisp his "secret sauce".
Many have said that the real benefit is that you wind up turning Lisp into the language that is perfect for your domain.
When you start your project, you don't know enough to design the perfect language for your domain. You start coding in Lisp, and you begin to uncover patterns that express your domain. Eventually, you find your way towards building a small set of macros that beautifully, expressively capture your domain.
Look for where Paul Graham talks about "bottom-up" programming versus "top-down" programming, and you'll find what he has to say about this. He says you do both in Lisp. Bottom-up is "changing the language to suit your problem."
One that helped some Java friends understand is passing blocks of code, but still having it look like just writing code. Imagine instead of try/catch/finally, a transaction/commit/rollback in Java:
transaction {
    // everything in here is in one transaction
} commit {
    // do stuff if the commit is successful
} rollback {
    // do stuff if we rollback
}
All the try's and catch's can be stuffed into the macro. It can be made to nest transactions within transactions.
For all practical purposes, you can't add that to Java. You'll always have to wrap up your transactions in boilerplate.
In the Lisp equivalent of what I want, the resulting code would look like:
There are languages where I could define three blocks of code and pass them:
transaction(stuff, commit-stuff, rollback-stuff)
But that separates their definitions from their implementations.
How could you write the Lisp transaction expression in another language so that it looks like it's part of the language? (serious question, CL is the only language I use capable of being that close) Maybe I could torture Ruby to come close, but it would be far more difficult than the macro I had to write for Lisp.
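For the curious, a minimal sketch of such a macro in Common Lisp, assuming hypothetical begin-transaction, commit-transaction and rollback-transaction functions from some database layer:

    (defmacro transaction (body &key commit rollback)
      `(progn
         (begin-transaction)
         (handler-case
             (prog1 ,body
               (commit-transaction)
               ,commit)
           (error ()
             (rollback-transaction)
             ,rollback))))

    ;; Usage reads almost like the block syntax above:
    (transaction (do-stuff)
      :commit (log-success)
      :rollback (log-failure))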
>For all practical purposes, you can't add that to Java. You'll always have to wrap up your transactions in boilerplate.
Aren't you wrapping the lisp code in boilerplate when doing the macro too? This appears to be the same as your other example. Java has lambda expressions (since Java 8) that could do this.
If you are wrapping it in a macro, how is it different than wrapping it in a method? In Java 8, with lambda expressions, you can write code to wrap your transaction example to get something exactly equivalent to
>transaction(stuff, commit-stuff, rollback-stuff)
and you would be keeping the definition in the implementation. I also seem to be missing why it's important to be keeping the definition and implementation together.
So there's some boilerplate that needs to happen. With macros, you write the macro to insert the boilerplate, and then you never think about it again. You don't write it, you don't read it, it's not in the way. Without macros, you have to write the boilerplate every time you write the code. You have to read the boilerplate every time you read the code.
Which do you find more readable? Notice how there's no boilerplate in the macro version.
> This appears to be the same as your other example. Java has lambda expressions (since Java 8) that could do this.
What are the odds that Java 8 has provided everything you could want out of Java? Macros let you add things in a better way than you could otherwise get. Look back to my prior example of Java 5's enhanced for loop. You see how useful that was? If Java had had macros, people wouldn't have had to suffer through nine years without the improved for loop.
> What are the odds that Java 8 has provided everything you could want out of Java? Macros let you add things in a better way than you could otherwise get. Look back to my prior example of Java 5's enhanced for loop.
Again, please understand we non-Lispers find this argument utterly unconvincing :) Java in particular is a terrible example: it's a language that for many years remained in the dark ages, and now it's finally getting modern features retrofitted into it, while at the same time attempting to keep some sort of backwards compatibility, and the whole process is very painful.
Let's all agree to stop talking about Java. Let's assume we all agree working in a language that until very recently didn't have lambdas is painful. And that requires a horrific amount of boilerplate.
Let me re-throw your question back at you: what exactly can Lisp do, which has practical implications, that a modern language with lambdas, closures and lazy evaluation cannot accomplish in elegant ways? If you mention Java again, you lose :P
You seem to be concerned by the use of "Java". It's a fine example, and I think I've explained why very well. It also seems petulant to declare Java off-limits.
But let me try again. For _any given language_, there are things you may want that the language doesn't provide. Macros let you do that in an elegant way. The example I gave above -- based off LanceH's -- of transactions is a way where the macro-based solution is more elegant than non-macro solutions.
Here's another example. Arc, like any language, has a built-in way of setting variables. It's called `assign`, and can be used as follows:
arc> (assign a 3)
3
arc> a
3
But no one uses it. Instead, people use =. = is a macro that's provided with the language. Because it's a macro, that means that if it wasn't provided, you could write it yourself.^1
What's the benefit of = ? It lets you set values of more than just variables:
arc> (= my-table (table))
#hash()
arc> (my-table 'key) ;;look up the value
nil
arc> (= (my-table 'key) 'value)
value
arc> (my-table 'key)
value
Note that in the second prompt, we're attempting to look up the value of 'key in the hashtable my-table, and we see that there isn't one there. We then set it in the third prompt, and look it up again in the final.
This works on the concept of "places". The "place" we try to set in `(= (my-table 'key) 'value)` is the association of 'key inside my-table.
Why is this beneficial? If you know how to get a value out of a data structure, you can now set it. This code is extremely clean and understandable compared to a non-macro version. It exhibits the principle of least surprise, and it's obvious how to set other data structures in an elegant way.
You can't do this without macros.
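To make that concrete, here is a minimal sketch of a place-aware assignment macro in Common Lisp; it dispatches at compile time on the shape of the unevaluated place form, which a function, receiving only values, cannot do (hash-put is a hypothetical low-level setter):

    (defmacro my= (place value)
      (if (and (consp place) (eq (first place) 'gethash))
          `(hash-put ,(second place) ,(third place) ,value)
          `(setq ,place ,value)))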
[1] If your objection here is "but it comes with the language", you're missing the point.
I disagree your example of Java is fine. It's not petulant to declare it off-limits, just as it's not petulant to declare COBOL off-limits. We are discussing features and extensibility of finer languages than Java.
> The example I gave above -- based off LanceH's -- of transactions is a way where the macro-based solution is more elegant than non-macro solutions.
Here we disagree. In your example of transactions, you didn't show that macro-based solutions are more elegant than non-macro based solutions; you merely showed that Java before version 8 wasn't very good (and when someone replied "but we can do better with Java 8!" you basically replied "ok, but how do you know Java 8 is enough for some other unspecified problem?"). Your answer is unconvincing, especially since non-macro-based solutions to your example in other languages, such as Scala, are equally elegant to Lisp's, because Scala has support for first-class functions and closures. Let me preempt a "but how do you know that's enough for Scala?"... I don't know. Show me why it's not enough!
Re: your second example with assign and the = macro. I admit I don't understand it yet; I'll have to think some more about it.
edit: let me go back to this assertion:
> For _any given language_, there are things you may want that the language doesn't provide. Macros let you do that in an elegant way.
I find this problematic, for two reasons. First, we've acknowledged macros cannot solve everything; after all, there are new releases and multiple implementations of Lisp languages. What someone else said: "if it's not in the 'substrate', macros can't do it". Second, that macros let you do some (admittedly cool!) things doesn't automatically show that these same things cannot be accomplished in reasonably elegant ways in other languages. One thing doesn't imply the other!
>In your example of transactions, you didn't show that macro-based solutions are more elegant than non-macro based solutions; you merely showed that Java before version 8 wasn't very good...
I think it wasn't clear what I was referring to. I was talking not about Java, but about a Lisp-style solution. Here it is with macros:
The macro-based solution -- which doesn't mention Java -- is more elegant. You don't need the lambdas if you use macros (why? Well, you don't always want to rollback, right?)
>...when someone replied "but we can do better with Java 8!" you basically replied "ok, but how do you know Java 8 is enough for some other unspecified problem?"
That's the point -- a language with macros is extensible in a way that languages without macros aren't. So unless you believe that your language happens to be perfect, having macros would make the language more powerful.
>Re: your second example with assign and the = macro. I admit I don't understand it yet; I'll have to think some more about it.
Feel free to contact me if there's anything else I can explain -- I'm probably going to forget to check this thread soon.
>...we've acknowledged macros cannot solve everything; after all, there are new releases and multiple implementations of Lisp languages.
I'm not sure why new Lisp releases show that macros are not useful. You could write anything in assembly, but other languages are still released. And design is important -- which sets of functions and macros should be provided with a language? Does having a release of Scala that includes functions mean that there's no need for user-defined functions?
>Second, that macros let you do some (admittedly cool!) things doesn't automatically show that these same things cannot be accomplished in reasonably elegant ways in other languages. One thing doesn't imply the other!
It doesn't mean that, no. However, I don't see elegant ways to do this kind of thing in other ways. If you can show me some, I'd be interested.
The problem that I have with Lisp macros is that the elegance you gain at the syntax level is effectively a tradeoff against pragmatism when other people read and use the code. Java was developed with parts of C++/C as inspiration, and parts of those languages were left out. In particular, operator overloading was dropped (which can be viewed as a very restricted form of modifying the language), presumably because, when reading the code, it's not immediately obvious that an operator isn't doing what you expect.

While it's undeniable that macros make the syntax nice to look at, the same argument that Lisp becomes a new language as you write your program means that every separate codebase requires a lot more reading to understand, because you have to go through all the macros. New releases in most languages introduce new features (and standardize functions, fix bugs, etc.); as far as I can see, new releases in Lisp enforce a standard (common) base set to decrease the amount of work required to learn new codebases.
Yes, macros can make code very hard to read, if designed badly. Of course, so can functions, variable names, and program flow.
But Lisp with macros is very different from C++ with operator overloading. With C++ operator overloading, you only know if a given line has something you don't understand (that is, an overloaded operator) by looking at every other file in your project. With Lisp macros, you know that you're dealing with something new because you don't recognize the first token in the s-expression. You might not know it's a _macro_ rather than just a _function_, but you know it's something you need to investigate.
Basically, in Rumsfeld's terminology, an overloaded operator is an unknown unknown, but a Lisp macro is a known unknown. A macro's behavior may be confusing, but its existence isn't. And that's a very big difference.
Agreed, I'm still unconvinced for the same reasons. In Scala or Haskell (or any language which supports first-class functions and lazy evaluation, I guess) the "transaction" example is easily done.
One could write a macro that allows infix notation for arithmetics:
(arithmetics 1 + 2 - 3) = (- (+ 1 2) 3)
These kinds of syntactic transformations are what macros enable.
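A minimal left-to-right sketch of such a macro in Common Lisp (no operator precedence, just to show the shape of the transformation):

    (defmacro arithmetics (lhs &rest rest)
      (let ((result lhs))
        (loop for (op operand) on rest by #'cddr
              do (setq result (list op result operand)))
        result))

    ;; (arithmetics 1 + 2 - 3) expands to (- (+ 1 2) 3) => 0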
TXR Lisp is completely strictly evaluated, like many other Lisp dialects. Function argument expressions are reduced to their values, in left to right order. Then the application of the resulting values to the function takes place.
The special mlet ("magic let" or "mutual let") construct has allowed the expression which initializes s, (lcons 1 x), to refer to x.
This works because both mlet and lcons are macros. The lcons macro (rather, the code generated by the macro!) returns a lazy cons cell, without immediately evaluating its arguments 1 and x. In the case of x, this is a damn good thing because x is not yet initialized! When the lazy cons is accessed (when the list object is printed), the evaluation of x takes place. By that time, x holds the lazy cons cell, and since the variable x is in scope of the argument x in (lcons 1 x), the lazy cons is able to force, setting its CDR field back to itself, creating not a lazy list, but a circular list.
With lcons, we can make a Fibonacci function quite similarly to how you might do it in Haskell:

    (defun fib2 (a b)
      (lcons a (fib2 b (+ a b))))

We call this as (fib2 1 1) and it gives us a lazy list.
Here, we have a marriage between a lazy data structure and a macro. Without lazy conses, the lcons macro would have no target language to expand into. Without the lcons macro, lazy conses can't be used in the above convenient way; we would have to write fib2 in terms of the macro expansion:
$ ./txr -p "(sys:expand '(defun fib2 (a b)
                           (lcons a (fib2 b (+ a b)))))"
(defun fib2 (a b)
  (make-lazy-cons (lambda (#:lcons-0001)
                    (rplaca #:lcons-0001 a)
                    (rplacd #:lcons-0001 (fib2 b (+ a b))))))
The circular list mlet, when fully expanded looks like this, by the way:
It's a gritty oatmeal of delays, lambdas, forces, and cons manipulation. One thing that is conspicuously absent amid the toenail clippings: what happened to the x variable? Haha!
A lisp macro is a code transformer run by the compiler using the full features of the language to generate code.
Macros allow for arbitrary evaluation of their arguments (rather than the standard left-to-right order before a function call), allowing you to do syntactic extensions without the added boilerplate that functional languages can require.
Essentially, each macro allows you to define a mini-language that is parsed by the compiler and returns code that is then compiled. It takes a while to grok, but once you do you can never really go back.
This is only ~90% likely to be correct, since it's second-hand information and I don't use LISP actively.
A LISP macro is a syntax transformation. It lets you write code in the way you want to, instead of whatever level of abstraction you used to have.
I'm not sure about 'partially formed' code output, but 'partially formed' input is definitely possible. The way to invoke a LISP macro need not be valid LISP. In short, LISP macros excel at creating domain-specific languages.
In my Lisp-esque language I use a temporary macro to automate the creation of some similar standard library procedures. This happens at runtime in the stdlib source file that is loaded:
# Define procedures named int? float? etc that test the type of a value.
(def def-type-predicate (mac (type-name)
  `(def ,(string-to-symbol (join "" $type-name "?"))
     (proc (x) (eq? (type x) ',type-name)))))
(def-type-predicate int)
(def-type-predicate float)
(def-type-predicate bool)
(def-type-predicate string)
(def-type-predicate symbol)
(def-type-predicate file)
(def-type-predicate nil)
(def-type-predicate pair)
(def-type-predicate procedure)
(def-type-predicate macro)
(zap def-type-predicate)
The crucial thing about what happens there is that the (def foo? ...) value produced by each macro invocation then gets evaluated in the root/top-level environment, and so results in a "global" procedure definition. Using them:
But it's nice to have single-parameter ones for FP list stuff. I suppose the 2-parameter versions could have their argument order swapped and do partial application on top of it.
Anyway, was just sharing something I had fun making. No language wars intended.
The function version is evaluated at runtime, every time the program is executed. Macros are expanded at compile time, so the expansion is computed just once. Thus the syntactic sugar added by the macro doesn't incur a runtime penalty.
This can provide an important speed improvement for complex macros or code used often, in tight loops or frequently called functions (it's like using inline methods in .h files in C++).
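A tiny illustration of that shift of work to compile time (squares-upto is a made-up example; the list is built once, during macroexpansion):

    ;; n must be a literal integer, since the work happens at expansion time.
    (defmacro squares-upto (n)
      `(quote ,(loop for i from 0 to n collect (* i i))))

    (squares-upto 5)   ; => (0 1 4 9 16 25), with no loop at runtime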
You can support new paradigms, just as a library. For example, core.async was written as a macro. So Clojure basically got Go channels in a library, not a language upgrade.
Macros can fundamentally change code. Codewalkers. Back when OOP was a big thing (probably still is), Lisp programmers would probably amuse themselves by making object-oriented extensions to the language via macros, and sending them to each other.
This means that you are empowered, not just a language implementer. (First-class functions are important, and typically a better idea than "Guess I'll write my own macro!" but they only go so far.)
The difference is subtle. With higher-order functions you can define (abstract out) a new idiom, while with proper macros you can define a new special form.

When you use a higher-order procedure, all of its arguments are evaluated in order before the procedure is applied, while with a macro you can explicitly define the transformations (evaluation rules) for each argument. This is why it is called a special form: it has its own evaluation rules.

Short-circuiting if, or and and are the canonical examples.

So, with macros you are extending Lisp with new special forms.
Your question doesn't make sense. Higher order functions and macros are orthogonal concepts.

Lisp supports first-class, higher order functions. In fact, it was probably the first language to do so.
Macros are different. And they're not just "expanded." Think of macros as full blown Lisp functions that run at compile time and generate new Lisp code, that is itself compiled at compile time. Since Lisp code is represented as Lisp's list data structure, it's super easy. The macro system does allow expansion/replacement (like C's preprocessor), but that's just scratching the surface.
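A quick way to see this at the REPL is macroexpand-1, which calls just the macro function and returns the code it produced (a trivial example):

    (defmacro swap (a b)
      `(rotatef ,a ,b))

    (macroexpand-1 '(swap x y))
    ;; => (ROTATEF X Y), T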
I'm sorry for asking a dumb question; I'm a sysadmin not a software developer:
Is this similar to needing a dozen nearly-identical lines of code, writing one, then using Excel to manipulate the other 11 lines into what you want, then copying it back (and cleaning tabs in the process)?
Essentially programmatically writing the program - is that what Lisp's macros are?
    Essentially programmatically writing the program - is that what Lisp's macros are?
Macros are often described that way. "Lisp macros let you write code that writes code!" While that's a true statement, it's kind of useless for coming to an initial understanding of what macros let you really do.
Think of it this way. Programming languages have built-in control flow operators like if-then, do, while, and for. Macros let you write your own operators - that work just like the ones built into the language itself.
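For example, Common Lisp has no built-in while loop, but a user can define one in a couple of lines (a minimal sketch):

    (defmacro while (test &body body)
      `(loop (unless ,test (return))
             ,@body))

    ;; Now it reads like a native operator:
    (let ((i 0))
      (while (< i 3)
        (print i)
        (incf i)))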
Here's an example. Let's say you want to print a list of names.
In a language without an operator specifically for going over lists - and without macros - you may have to write:
If you have a language that supports macros, you can create a macro called forEachItemInList. And then it'll work just like it came with the language all along:
forEachItemInList(string currentName in namesList)
{
    print(currentName);
}
If you count how many words the non-macro solution took, it's 13. The macro solution takes 7. Having to solve a problem with more code means more time to understand, teach, write, test, debug, and document - so savings like this can really add up.
And there's also the subjective part. Having to write code like in the first example just isn't satisfying. You have to repeat yourself constantly. You may think, "Look, I loop over lists all the time. And it's always the same pattern: I call GetListIterator(), I do a 'while' loop while it HasMoreItems(), then I call GetNextItem(), etc. Why can't I just inform the compiler of the general pattern - and then tell it how to fill in the blanks when I need to actually loop over a list?"
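With macros, you can do exactly that. A hedged sketch in Common Lisp, assuming the hypothetical iterator protocol named above (get-list-iterator, has-more-items, get-next-item):

    (defmacro for-each-item ((var list-expr) &body body)
      (let ((iter (gensym "ITER")))
        `(let ((,iter (get-list-iterator ,list-expr)))
           (loop (unless (has-more-items ,iter) (return))
                 (let ((,var (get-next-item ,iter)))
                   ,@body)))))

    ;; The boilerplate is now written once, inside the macro:
    (for-each-item (name names-list)
      (print name))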
The fundamental unit of Lisp is expressions of the form (FOO ...). There are three basic cases.
1. FOO is one of a finite list of special forms. Those are the irreducible base of that language.
2. FOO is a function. In which case the ... is evaluated as Lisp (possibly including doing things like expand out other functions) before being passed to that function.
3. FOO is a macro. In which case the ... bit is turned into a list of things, but NOT evaluated, passed to the macro to be rewritten, and then the result is turned into an expression that is then evaluated by the same rule.
This is not completely true. There is a preprocessing step called reader macros that can do things like turn '(this expression) into (QUOTE (this expression)) to save typing. But those are very sparingly used.
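You can see the reader's handiwork at the REPL:

    (read-from-string "'(this expression)")
    ;; => (QUOTE (THIS EXPRESSION)), 18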
Not a dumb question at all, and yes, that's more-or-less what Lisp macros are, albeit more elegant because you're using the same language to write your code as you use to write the macros that manipulate the code.
More or less. It allows you to write generic code you can reuse. In this case, you are extending the language. Feel free to post these "dumb" questions, or send them my way.
There are some idiosyncrasies here: 'nil' is shorthand for the identity function (in this case, the two-argument, two-value identity function), and the use of keyword symbols is weird and potentially problematic, though I've never run into a problem with it in practice.
This is the map-reduce model of iteration -- obviously not with support for distributed computation, but nonetheless useful for programming in the small. And far more elegant, to my eyes, than that LOOP monstrosity :-)
It's kind of cool how in lisp you're supposed to do macros to change major aspects of how the language behaves, but in C if you try having a little bit of fun with #define and the pre-processor everybody starts getting just extremely rude at you.
To be fair, C #define macros are extremely dangerous and error prone in ways that Lisp macros aren't.
I haven't bought into this "Lisp macros are the best thing since sliced bread" idea, but without a doubt they are infinitely more awesome than C macros.
>So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years
I get the sentiment, but this is seriously advocating for unmaintainable code that likely breaks in strange corner cases and requires 10 times as long for a new person to the code base to understand.
First contributor, "I know my code should be able to handle out of memory errors in this corner case, but I know I won't use it that way so it shouldn't be an issue."
Second contributor, "Hey this macro from First contributor looks nice, I'm going to use it."
A pile of shit emerging whenever programmers are lazy and try to cut corners or when they try to do things they don't fully understand is not unique to macro writing.
Right, the point I was making is that not everyone is a 'rockstar' and macros just give you quite a bit more rope to hang yourself with. Awful Java/Python/whatever is obvious because you can't redefine control flow so easily. With macros you can have something as fundamental as loops be broken.
Note that this is a double-edged sword. Everyone ends up having their own little DSL world. That's something Lispers often enjoy, and it's an art, a beautiful one, and a beautiful freedom too, but most people want a Python with easy-to-access libraries more than that.
Lisp aims itself at niche problems where nothing good enough has been done yet. For the rest, people won't care about the ability to fit the language to their needs; they want 'productivity', and if a pattern becomes useful, back-pressure will force the language designer/BDFL to change the language.
I agree. Macros are what made lisp, not parentheses.
However, if we broaden the concept a little, we can do this type of macro for any language with a meta-layer such as MyDef. E.g. if you observe a certain foreach pattern in Java, you can make a macro for that pattern and have the meta-layer watch for it and translate it for you (like having an automatic translator between you and javac). All you need is a scope-type macro. Conventional macro packages like M4 do not provide such a thing, but MyDef does. Example:
&call each_member, AList
    # java code that works on $(member)
where the macro may be defined as
subcode: each_member(list)
    Enumeration e = $(list).elements();
    while (e.hasMoreElements()) {
        String name = (String) e.nextElement();
        $(set:member=name)
        BLOCK
    }
Where it is understood that the definition can be any literal block pattern.
The parentheses are huge, though. It enables "Code is data" and "Data is code" in a powerful way.
About Lisp, people always say "Macros are great" and "Code is data" and "Data is code", but it's hard to see what they mean without good examples. I mean, you can write code that writes code in any language that has a `print` statement. And obviously code is data, so what is that aside from some pseudo-philosophic BS?
There's a lot of discussion in this thread about macros - so here's an example of Code and Data Being One, for those that are unconvinced. I hope it'll shed some light.
I have a slang dictionary website. It's backed by a database now, but it used to be statically-generated HTML. I represented the data as XML. It looked something like this:
So then I needed an XML parser to parse the data. Maybe it parsed the XML into an object model that the code would navigate and output the appropriate HTML. Or maybe the code got callbacks according to node type and would output the appropriate HTML then.
XML is a pretty verbose format, so - this being a Lisp example - we could probably save some typing if we represented the data as an S-expression. The above XML would become something like this:
Now we need to write the Lisp code to convert that S-expression data into the appropriate HTML output.
We'll need some Lisp functions to handle the nodes and their attributes (such as the definition text and part of speech) and write the HTML for them. We could use an S-expression parser library to load the data, and then walk through it and call those functions. But that's not necessarily the best way. We can simplify it by creating exactly 3 functions that take some arguments and output HTML: term, part-of-speech, and definition.
Since Lisp code is - like the data - also represented as S-expressions, once we've written those 3 functions, the data is literally executable Lisp code.
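A hedged sketch of what those three functions might look like (the actual site's markup and signatures are assumptions); each returns an HTML string, so the data file, read back in as Lisp, simply evaluates to the page:

    (defun part-of-speech (pos)
      (format nil "<i>~(~a~)</i>" pos))

    (defun definition (pos text)
      (format nil "<p>~a ~a</p>" (part-of-speech pos) text))

    (defun term (name &rest definitions)
      (format nil "<h2>~a</h2>~{~a~}" name definitions))

    ;; The 'data' is now directly executable:
    (term "grok" (definition :verb "to understand profoundly"))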
Notice, the end result is just data, as terse and minimal as possible. But to act on (aka "interpret") that data, he wrote functions called "term", "part-of-speech" and "definition".
So now, that data is code. Code is data and data is code.
Agreed - that example is amazing. However, the devil's advocate in me can't help but ask: what if I want more than one transformation? What if I want to generate both HTML and, say, JSON for returning that information from a web service?
Hmm... it might work if I use some sort of global parameter: I first "execute the data" (damn!) with the output type set to HTML and then do it again with an output type of JSON.
There's an answer there too. In Lisp, it is possible to have local functions. In other words, you're defining a function, and inside it you have a couple of functions that are local to the containing function.
That's one way for "term" and the other items to have different definitions.
There are probably other even better ways to do it.
It is possible with macropy as well. But both hylang and macropy depend on the module-loading import hook to run the transformation, IIRC, so the transformation doesn't work in the __main__ module.
Soviet-era authority figure: "Western-style freedom has its good points---if applied judiciously".
If you're a proper Lisper, you use macros like it's going out of style, and other proper Lispers love your code for it.
All programs have their own dictionary of whatever it is they define, whether it be macros or variables. You can no more understand a function call just by looking at it than a macro call. (Should functional decomposition be introduced "judiciously" into large, monolithic blocks of code that have everything "at a glance" in one place?)
I guess the "problem" with understanding this perspective for us non-Lisp people is that we may not be able to imagine all the places where we could have used macros (or something equivalent) when we haven't even learned them. You don't know what you don't know, or in this case, you don't know how to use something you haven't used. But this seems to be the same for a boatload of programming features that many people aren't taught with their first language, but rather later - higher order functions, lazy evaluation/generators/etc., type-level functions... We might at first think "this is a special tool only to be used in certain circumstances/only to be used by wizards", and then it might turn out that they are useful all the damn time.
The net result, at least for me, is that I get spoiled and then it is hard for me to go back 'more primitive ways'. :)
Macros are easy to understand if you are shown that whatever language you are using already has them. The difference is that they are locked in and wrapped behind at least two layers.
Firstly, there is a rigid surface syntax which customizes the look of every macro at the character level. For instance, in C, the do ... while(); loop must have a trailing semicolon. (No Lisp macro has such requirements.)
Secondly, that surface syntax translates to a limited set of abstract syntax tree forms which is not extensible.
Lisp macros open that up. The outer layer of varnish is stripped away, so any conceivable abstract syntax tree form has a notation; you do not have to invent new character-level surface syntax in order to work with a new form. And then, custom recognizers for arbitrary forms can be written by the language users, which do tree to tree transformations.
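To make that concrete: C's do ... while is frozen surface syntax, but the equivalent construct in Common Lisp is an ordinary user-level macro. A hedged sketch (do-while is my hypothetical name, not a standard operator):

;; Run BODY, then repeat while TEST is true; no trailing-semicolon
;; rule, no special character-level syntax, just another form.
(defmacro do-while (test &body body)
  `(loop
     ,@body
     (unless ,test (return))))

;; Usage: prints 0, 1, 2.
(let ((i 0))
  (do-while (< i 3)
    (print i)
    (incf i)))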
So then, how you use these things once you have them is the same way that you use the features of programming languages that you know.
E.g. loop is a macro in Lisp. We use it like this:
[1]> (loop for x from 1 to 10
for a = 2 then (* 2 a)
      collect (list x a))
((1 2) (2 4) (3 8) (4 16) (5 32) (6 64)
(7 128) (8 256) (9 512) (10 1024))
Someone wrote the loop macro, so in this situation I'm just a user. I don't care whether this is a macro, or built-in to the language like "foreach x in list do".
When I use loop, I'm benefiting from macro-writing.
The benefits are far-reaching. For instance, language experimentation takes place in the user base, not behind the closed doors of an ivory tower (ISO committee or whatever). Language ideas are packaged as code and shared around.
Technical problems turn into social problems, as someone noted.
This doesn't really address the issue of readability or understanding though. You still have to go through someone else's uncommented code and figure out what their weirdo macros actually do versus having standard, documented language features that you already understand.
Because there's not a ton of standardization (and for other reasons) you end up with a million different dialects of lisp and a fragmented community, in comparison to other language families.
And you can do open, community-driven standardization and language enhancement. Python is a decent example of this.
If someone writes "weirdo macros" and you take them away, that same someone will write weirdo code using something other than macros. Either way, you will have to understand what they are doing. In the worst case, that person will expand, by hand, the code their macros would have written. Now you can't fix a bug in the expander and cheaply re-expand.
>When reading other people's code you now effectively have to learn what "language" they use too
And? This is true of code in languages without macros, too. You have to learn all the vocabulary, all the types they use, all the functions, all the structure of the program— macros are just a tool for taking these domain-specific things and packing them into a denser syntactic abstraction. A well-written macro improves readability in both the short and long terms.
I mean, say I write a couple of macros to deal with a database and I write code like
(with-sql-connection ("localhost" ...)
  (select *
    from "customers"
    where (= "lastname" "Smith")))
Do you really think that's less readable than the nonsense you'd have to write with a typical "framework", or worse, building the command string by hand?
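And the macro behind such an example needn't be exotic. A hedged sketch of one way with-sql-connection could be written, where connect, disconnect and *connection* are hypothetical stand-ins for a real database layer:

(defmacro with-sql-connection ((host &rest options) &body body)
  ;; Bind a connection around BODY and guarantee it is closed,
  ;; even on a non-local exit. Assumes *connection* is special.
  `(let ((*connection* (connect ,host ,@options)))
     (unwind-protect
          (progn ,@body)
       (disconnect *connection*))))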
> When reading other people's code you now effectively have to learn what "language" they use too.
That's well said, and in fact it's not uncommon to talk about Lisp's capacity for crafting the language to solve the problem at hand.
Paul Graham's essay "Programming Bottom-Up" explains it really really well:
"Experienced Lisp programmers divide up their programs differently. As well as top-down design, they follow a principle which could be called bottom-up design-- changing the language to suit the problem.
In Lisp, you don't just write your program down toward the language, you also build the language up toward your program. As you're writing a program you may think 'I wish Lisp had such-and-such an operator.' So you go and write it. Afterward you realize that using the new operator would simplify the design of another part of the program, and so on. Language and program evolve together. Like the border between two warring states, the boundary between language and program is drawn and redrawn, until eventually it comes to rest along the mountains and rivers, the natural frontiers of your problem.
In the end your program will look as if the language had been designed for it. And when language and program fit one another well, you end up with code which is clear, small, and efficient."
As others have pointed out, no more than unfamiliar classes and so forth.
In any language, you can think of programming as building mini languages to solve problems. You have nouns and verbs - types/classes/instances and methods/functions/operators. Lisp gives you those parts of speech and also lets you manipulate the fundamental grammar as well!
My experience with heavily metaprogrammed Ruby is that this story has a dark side. Yes, when language and program fit one another well, you end up with code which is clear, small, and efficient. That is true... until you need to add a new feature that was not accounted for in the language design. Now you have to wade into the code of the "compiler".
Abstraction generally gives up flexibility to obtain conciseness. Certainly a balance must be struck, but "not flexible enough" has caused me vastly more pain than "not concise enough". As a consultant who frequently wades into other people's code, I generally consider metaprogramming to be a scourge.
After programming in CL for several years, I can honestly say that when encountering new syntax that people have introduced in an app, it's no harder to follow along with it by reading the macro definition than encountering an unknown function. And when in doubt, you can just macroexpand the syntax and see what it's doing under the hood.
>When reading other people's code you now effectively have to learn what "language" they use too.
Paul Graham addressed this in On Lisp (page 59) [0]:
>So yes, reading a bottom-up program requires one to understand all the new operators defined by the author. But this will nearly always be less work than having to understand all the code that would have been required without them. If people complain that using utilities makes your code hard to read, they probably don’t realize what the code would look like if you hadn’t used them. Bottom-up programming makes what would otherwise be a large program look like a small, simple one. This can give the impression that the program doesn’t do much, and should therefore be easy to read. When inexperienced readers look closer and find that this isn’t so, they react with dismay.
>We find the same phenomenon in other fields: a well-designed machine may have fewer parts, and yet look more complicated, because it is packed into a smaller space. Bottom-up programs are conceptually denser. It may take an effort to read them, but not as much as it would take if they hadn’t been written that way.
> I'd say it's probably worth that cost, if used judiciously.
Can be said of anything so it really doesn't say anything at all.
This could be a good criticism in theory; in practice you can observe how it works out. Fukamachi uses @ everywhere, attila-lendvai uses his def-star library, e.g. (def (class foo) ...) instead of (defclass foo ...), some people use iterate, etc., and all is good.
People can write shit code sans macros, no problem. I'd rather have them than not. After all, macros embody the whole point of programming[0].
> The act of describing what you want the machine to do is interleaved with the machine actually doing what you have described, observing the results, and then changing the description of what you want the machine to do based on those observations. There is no bright line where a program is finished and becomes an artifact unto itself.
Author doesn't mention macros explicitly, but I think the above quote is referring to them implicitly.
The book you linked has examples of macros being used for writing a music database, parsing binary files, a unit testing framework, and several other things I haven't reached yet. Those are much better examples of macros than basic control structures. The 'dolist' form is a toy example.
All Turing-complete languages are equivalent and can solve the same set of problems. The issue is not whether you can or can't do it, but how much work it will take and how painful the process will be.
Also, when you learn some Lisp, you start to realize how often people are greenspunning[0] things in their projects.
I don't think there are very many "real world" problems that you CAN solve in Lisp, and CANNOT solve in Python. Anything [0] you can write in Python, you can likely solve in Lisp, probably in similar line count (without macros).
One thing I __really__ miss from working in lisp is the idea that I can reload things in the repl. In Python, once I've imported something, I can't really redefine it without pasting in the definition, which makes iterating on a class definition much harder. In Lisp, I can hit a key in my editor and the running REPL gets the new definition, and I can start working with it (or rewriting tests, etc).
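Concretely, "reloading" in Lisp is nothing more than re-evaluating a defun against the live image:

;; First definition, evaluated into the running Lisp:
(defun greet (name)
  (format t "Hello, ~a!~%" name))

;; Later, without restarting anything, send the edited version from
;; your editor; every existing caller sees the new behavior at once:
(defun greet (name)
  (format t "Hi there, ~a!~%" name))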
The power of the REPL in lisp is __amazing__. I love me some IPython (it's awesome), but there just isn't the same tight integration between that and a running system. The "default" Python likely has all the tools you need to do that, but it just isn't presented as the Way you Do It.
Django's auto-reload when code changes is an example of this. The trouble is, I can't get to a REPL easily within that, without invoking ipdb. I don't know how to integrate my editor with the Python process in a similar way, etc.
All that said, I still love coding in Python. Hy makes me excited, but then I just started writing python-with-parens and wasn't sure what I had gained. :)
0: I'm sure there is someone who can give a counterexample, but I cannot think of any.
Thanks a TON! I am Frequently Slightly Annoyed by pressing control-. and having to re-import things, declare things, etc. I'm looking forward to using this.
I reload things in ipython all the time, just do an execfile (and maybe make sure to set __name__ to something besides '__main__' so it doesn't trigger the usual checks).
In the Sage version (and maybe this has been backported to ipython by now, I don't know) you can do an %attach on a file, and it gets reloaded into the workspace every time it changes.
The best you'll get are examples of something solvable in Python being "beautiful" in Lisp. Then some real world Lisp examples will be references to a 20 year old storefront generator and the initial release of reddit.
Lisp(s) are certainly better than Python in every way, except when it comes to successful projects completed.
I think you need a bigger reference frame of Lisp's usage over its 50 year history that extends even to 2015. But even then, quoting Kent Pitman: "Please don't assume Lisp is only useful for Animation and Graphics, AI, Bioinformatics, B2B and E-Commerce, Data Mining, EDA/Semiconductor applications, Expert Systems, Finance, Intelligent Agents, Knowledge Management, Mechanical CAD, Modeling and Simulation, Natural Language, Optimization, Research, Risk Analysis, Scheduling, Telecom, and Web Authoring just because these are the only things they happened to list." (My additions are video games and Mars rovers.)
Part of the visibility issue is that there are very few big-name open source Common Lisp success stories which aren't CL implementations, or CL ecosystem tools. Most of the things in that list are big-ticket enterprisey applications.
Lisp's Eclipse is called GNU Emacs and it's free software. It comes with more than a million lines of Lisp code supporting all kinds of development tasks.
It's also not Common Lisp. I get that emacs lisp is a pretty good example of a lisp, but I do find it interesting that the Lisp designed to be the widely-applicable industry standard hasn't seen something on the same level.
> So what's stopped these from getting to Emacs' level of popularity?
Emacs is not an editor. Emacs is a family of editors. You are probably talking about GNU Emacs.
They never tried and it would not make sense. GNU Emacs exists already and supports Lisp development very well. The other tools have concentrated on other things: GUI-based IDEs for Lisp.
> Emacs is not an editor. Emacs is a family of editors. You are probably talking about GNU Emacs.
You tell me, you brought it up!
> They never tried and it would not make sense. GNU Emacs exists already and supports Lisp development very well. The other tools have concentrated on other things: GUI-based IDEs for Lisp.
Emacs is for more than Lisp development, though. It's not popular because you can do Lisp in it, it's popular because you can do everything in it. So we're back to my earlier question, which is why we haven't seen major, broad-based wins for Common Lisp, on the scale that we have for other languages.
> Emacs is for more than Lisp development, though.
Not Emacs, GNU Emacs. That's what I wrote.
> we haven't seen major, broad-based wins for Common Lisp, on the scale that we have for other languages.
Common Lisp tends to be used in very specialized areas. It's a complex language.
Though sometimes it has been used where you don't see it, but you may be affected. American Express runs a Lisp-based system checking credit card transactions; it should have been running for two decades or longer. Amazon was using Lisp to compute some stuff on their shopping pages. The CIA and NSA use it to spy on us. Lots of aircraft (Airbus & Boeing) and cars (Jaguar, Ford, ...) were designed with Lisp-based CAD systems. NASA uses it for checking software correctness. Chip makers like AMD have used it to check processor designs for correct operation. There are many of those applications. Google's flight search engine has its core written in Lisp. D-Wave wrote the software for their quantum processor in Lisp. There is a satellite broadband internet company running Lisp on its antennas. Parts of the precursor software of Apple's Siri were written in Lisp. That's the stuff it was originally designed for...
Any real world problem solved in Lisp, relative to being solved in Python, eliminates the superfluous problem of Python being involved: a syntactically Fortran-like cumbersome scripting language with somewhat Lispy semantics.
Note that you can program the CL system by expressing yourself in Python:
Common Lisp is not only a language, but also a platform (analogous to Mono or JVMs). It has a model of computation: programs expand fully down to a set of special forms, which then compile. You can build languages on top of this.
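You can watch that expansion model from the REPL. The exact output is implementation-dependent (and ready-p/launch here are hypothetical names), but it typically looks like this:

(macroexpand-1 '(when (ready-p) (launch)))
;; => (IF (READY-P) (PROGN (LAUNCH)))
;; WHEN bottoms out in the special operator IF.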
I'm not quite sure how being able to extend syntax is a theoretical problem. It was just a basic example for brevity.
There are many more examples of macros that solve real world problems, such as writing a compiler for an embedded DSL that is tuned to solving your real problem that allows expressivity and brevity that you would never see without the use of macros.
>Aren't macros breaking the homoiconicity of the lisps?
No. How would macros break homoiconicity? They expand into atoms and lists (and other datatypes), the same stuff of which macro-free programs are made.
>And making maintenance more difficult?
No. Unless you intentionally write unmaintainable macros. It's the same as if you write unmaintainable functions, classes, etc. They're just abstracting a different thing— syntax.
>aren't you re-inventing a new language with macros?
No, you're expanding the language; you're just expanding the grammar instead of the vocabulary.
>Where does the "First rule of the macro club" come from? When should you break it?
I have heard the “first rule of the macro club” to be “don’t write macros”. The idea is before writing a macro, you should try writing it as a function instead. If that is possible, that is usually better, because functions, unlike macros, can be passed around as first-class values, and I think they are easier to debug.
You should break that rule only when the behavior can’t be written as a function, such as these cases:
• The call needs to avoid evaluating its arguments. For example, the `if` built into the language is sometimes defined as a macro.
If `if` were a function, a call like (if test (print "yes") (print "no")) would evaluate, and therefore print, both branches before `if` ever ran, and then return the value of whichever `print` was selected. By making `if` a macro, it can avoid evaluating the branch that is inapplicable; see the sketch just after this list.
• The call relies on information about the environment only available at compile-time. Perhaps the macro reads a configuration file on the developer’s computer to decide how to set something up.
• You have profiled the program and determined that it is better to run the function at compile-time. For example, you might want to make `(regex "[a-z][a-z0-9]+")` compile the string to a regular expression at compile-time instead of run-time.
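Here is a minimal sketch of the first case; my-if is a hypothetical name, and the expansion into cond is just for illustration:

;; Because MY-IF is a macro, THEN and ELSE arrive as unevaluated
;; code, and the expansion runs only one of them.
(defmacro my-if (test then else)
  `(cond (,test ,then)
         (t ,else)))

;; (my-if (> x 0) (print "positive") (print "negative"))
;; prints exactly one of the two strings; a MY-IF *function* would
;; already have printed both before it was called.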
>You have profiled the program and determined that it is better to run the function at compile-time. For example, you might want to make `(regex "[a-z][a-z0-9]+")` compile the string to a regular expression at compile-time instead of run-time
I think it would be more appropriate to use a compiler macro here.
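A hedged sketch of that suggestion, where compile-pattern is a hypothetical helper: the compiler macro leaves regex an ordinary function but lets the compiler precompute constant patterns.

(defun regex (pattern)
  (compile-pattern pattern))            ; normal run-time path

(define-compiler-macro regex (&whole form pattern)
  ;; For a literal string, hoist the compilation out of run time;
  ;; otherwise decline by returning the original form unchanged.
  (if (stringp pattern)
      `(load-time-value (compile-pattern ,pattern) t)
      form))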
If used right, macros make things much more maintainable. If you have a hundred nearly-identical codeblocks, where only (say) a string constant is varying, and suddenly you need to change what those blocks do, bam, you've got a hundred blocks to change. If those blocks had been refactored with a macro, all you have to do is change the macro once. It also prevents the new guy from coming and doing a bandaid fix to line #72 making it slightly different from the 99 lines around it and making your life a living hell.
The code block in question might be something that doesn't really deserve a whole function on its own, and could also involve local control-flow statements (return/continue/break/etc.) that can't really be outsourced to a function nicely. For example, you're parsing a line from a flat file and you want to assign a value to a certain variable based on the first word on the line. So you have a few dozen/a hundred lines of
if ( key == "hitpoints" ) {player.hitpoints = value; return;}
if ( key == "mana" ) {player.mana = value; return;}
Those "return"s make it rather awkward to use a function. At best, said function would have to return a bool and you'd end up with something like
if ( maybe_assign( key, "hitpoints", &player.hitpoints ) ) return;
which hardly gains you anything and in fact reduces readability a fair amount.
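In a Lisp, that whole table of near-identical lines can collapse into data handed to a macro. A hedged sketch (assign-field and the slot layout are hypothetical):

(defmacro assign-field (key value object &rest field-names)
  ;; Expands into the chain of string comparisons and assignments;
  ;; adding a field means adding one symbol, not pasting a line.
  ;; (A production version would bind KEY/VALUE/OBJECT once via
  ;; gensyms to avoid multiple evaluation.)
  `(cond
     ,@(loop for field in field-names
             collect `((string= ,key ,(string-downcase field))
                       (setf (slot-value ,object ',field) ,value)))))

;; Usage: (assign-field key value player hitpoints mana stamina)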
Similar, but only if you don't look too closely: what about break and continue? They are usually supported by a native foreach construct, but are difficult to build with macros.
> what about break and continue? They are usually supported by a native foreach construct, but are difficult to build with macros.
For break, DOLIST uses return. I haven't worked with Lisp in some time, so I don't remember if it has a dedicated continue. But a continue is just a break in a loop that doesn't loop. Or a GOTO by another name. Or a jump to a particular case in a switch.
But if your looping macro uses a native looping construct or another macro that supports break and continue, then your looping macro will inherit that support, provided that you're careful when writing the macro to do nothing that will break it.
If your looping macro doesn't use a construct that supports break and continue, then you're still in luck. Continue I described above. Like continue, break is also GOTO by another name. You've got TAGBODY, GO, CALL/CC, etc.
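For example, in Common Lisp DOLIST expands into a BLOCK named NIL, so a plain RETURN acts as break, and any macro layered on DOLIST inherits that:

(dolist (x '(1 2 3 4 5))
  (when (> x 3)
    (return))   ; "break": exits the implicit (BLOCK NIL ...)
  (print x))    ; prints 1, 2, 3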
Why do you think that supporting break and continue in macros is difficult?
> Why do you think that supporting break and continue in macros is difficult?
In many languages, emulations of control loops do not support break and continue, so I thought that was the case here as well; apparently I'm wrong. Sorry for my 'too quick' post, and thanks for the correction.
Probably. One evil thing is that you have to deal with everyone else's abstractions, which will frustrate you when there is a large codebase and a deadline.
Which would I rather inherit:
* Another programmer's 15-function API, and a document on how to use it properly?
* Or another programmer's 15-function API, along with three macros which use the API properly and capture all the scenarios I need, based on a couple of examples?
The interactive model is insanely cool. When building a toy game engine a while back (https://github.com/orthecreedence/ghostie) I saved probably half the development time by being able to redefine functions/values while the game was running.
The old way of lisping is to prototype in lisp, then build in a "real" language (c/java). However, nowadays the lisp implementations (CCL/SBCL specifically) are fast/advanced enough that you can prototype in lisp, then just add some type specifiers and boom, there's your app. Hell, with ECL you can even embed your lisp program into another one, while still achieving compiled speeds.
Even better you can attach the repl to a remote instance. I had a problem a little while ago that could only be reproduced on the server. I could connect to the repl over ssh and evaluate and modify code directly.
Compare that to a similar problem I had with a C# app we had. For that I had to stick in a load of logging code, check it in then wait half an hour for the CI server to deploy before running and checking the logs.
Do you know if the attaching to a remote instance is available in Racket? I spent some time learning racket a year or two ago, but was under the impression they took out some of the really cool features (or I never discovered them)
Not a racket user, but I'm almost certain that you can do this with Racket. Your editor doesn't care whether your REPL session is local/remote...it just connects to an address/port. You can set this to be 127.0.0.1 or whatever remote server you want to connect to.
Most REPLs will only accept connections from localhost by default (a sensible idea). In this case you just need to setup an ssh tunnel to the machine and connect through that.
You can get this in other languages too. For example, Flask (a Python web framework) has a fantastic debug-mode error page that totally changed the way that I think about web development. Any time an exception is thrown in a view function (and this includes the exceptions that you idiomatically throw for HTTP 4xx and 5xx errors) the debug-mode error page would have a stack trace (obviously), but also an interactive REPL that could be opened at any stack frame in that trace. It wasn't necessary all that often, but when it was, boy was it a fantastic way to work.
Not quite as automatic, but XDebug with the Codebug client gives you this exact functionality for PHP. Saved me many a headache over the past few years, and makes tracing data flow in a program I don't know as simple as it can be.
What sort of tools are required for this? I'd like to start taking Lisp seriously, but workflow stories like this tend to hinge on using Emacs/SLIME. Is that always the case?
EDIT: I suppose what I'm asking is whether you could elaborate more on what this looks like, in practice.
Basically I have a horizontally split 'screen' session, code in the top and repl in the bottom, and I just ctrl+[c+c] to send paragraphs from the top to the bottom. I don't remember if vim-slime comes with it or if I augmented it, but I also do ctrl+[c+f] to send the current Lisp/Clojure form, ctrl+[c+l] to send the current line... Or just ctrl+a+tab to switch screen windows and type in the REPL directly.
Actually no, I use vim/slimv almost exclusively. I think Sublime2 might also have features that let you "hook into" a remote REPL, but I'm not very familiar. There may be other editors with lisp integration as well, but I'm not sure what they are.
If you do like vim, slimv is a really great option for lisping.
> The reason that code represented as XML or JSON looks horrible is not because representing code as data is a bad idea, but because XML and JSON are badly designed serialization formats.
By that same token, a Volkswagen Beetle is a badly-designed boat.
XML was never designed as a data serialization format. It's a markup language. It was designed to sprinkle structure and metadata into large human-readable plaintext documents.
Likewise, JSON is a subset of a general-purpose programming language's literal notation that happened to be very fast to parse in a browser by virtue of the browser implementing that language.
Personally, I don't think s-exprs are a particularly great serialization format either. The problem is that there's no one-size-fits-all for serialization. What we value is brevity, but basic information theory tells us we can only make expressing some things more terse by making others more verbose.
When you say some format is badly-designed, all you're really saying is that it isn't optimized for the kinds of data you happen to want to serialize.
> XML was never designed as a data serialization format. It's a markup language.
Those two things are not mutually exclusive.
> Likewise, JSON is a subset of a general-purpose programming language's literal notation that happened to be very fast to parse in a browser by virtue of the browser implementing that language.
That's true. That is not in conflict with anything I said.
> The problem is that there's no one-sized-fits-all for serialization.
No, that's not true. S-exprs really are a global optimum in the space of serialization designs. All the alternatives are logically equivalent to S-exprs but with extra punctuation that makes them arguably harder to read, but inarguably harder to write. That is why S-exprs are the ONLY syntax ever designed (some would say "discovered") by humans that has been successfully used to represent both code and data.
The comment you were responding to got deleted, which makes it a little hard to figure out what's going on there.
But I am completely nonplussed at your assertion that markup and serialization are mutually exclusive. There is a 1-to-1 correspondence (actually multiple 1-to-1 mappings) between XML and S-expressions, so whatever you can do with sexprs you can do with XML modulo some trivial transformation. The ONLY difference is in the amount of punctuation and redundancy.
> the distinction between strings and symbols is important
Yeah, that's a good point.
> neither XML nor JSON has it
That's not quite true. It's not that JSON doesn't have symbols, it's that Javascript doesn't have symbols. And XML doesn't have symbols natively, but you can easily gin them up yourself, e.g. <symbol>foo</symbol> or <symbol name="foo"/>.
Obviously you can encode S-expressions in XML (including symbols). But you have to add additional structure to do it. The point is that XML, following the markup metaphor, doesn't work this way out of the box. And, in fact, I've never seen anyone (except, I guess, you) go to the trouble of making all the distinctions in XML, such as the string/number distinction, that S-expressions make -- and I have seen people get into trouble for failure to do this.
It's a psychological/sociological point rather than a technical one, but metaphors matter in design.
Is there any lossless binding of XML to S-expressions? I've never seen one.
Usually the examples where I see people rewrite XML as S-expressions (like in this thread) are very lossy -- it's easy to be pretty by throwing away most of the information!
One downside to S-exprs compared to, say, JSON: they do not have direct support for unordered mappings (hash tables, dictionaries, whatever you want to call them). You can represent them as trees, but basically every language these days (including, of course, Lisps) has a mapping type as a core concept; requiring the user to figure out what parts of the input data should be converted to that type is annoying, and makes the format less self-documenting (i.e. it may not be immediately apparent whether there can be duplicate keys or not).
Here I have chosen a "hash capital H" syntax for hash tables. The first part () has the hash attributes (there are none, so it's an eql-equality-based hash table, with strong keys and values). Then, entries consisting of two element list pairs give the keys and values.
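(The listing itself didn't survive here; reconstructed from that description, it would look something like the following, with an empty attribute list and then (key value) pairs.)

#H(() (a 1) (b 2))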
This hash prefix notation builds on the existing Lisp concept of using simple prefixes to distinguish various kinds of objects. In Common Lisp we have:
#(1 2 3) ;; this is a vector
#C(3.0 4.9) ;; this is the complex number 3.0 + 4.9i
The scanning is very simple: you just recognize the prefix like #C( or #( and then recurse into the scanner for list elements that stops at a closing parenthesis.
No whitespace is allowed: it cannot be # (1 2 3) or # C (3.0 4.9).
TXR Lisp above not only reads back the hash notation, so it can be used as a literal, but allows backquoting over it. We can splice keys and values into the syntax to produce a hash table:
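(That example is also missing from this copy. A hypothetical reconstruction, using the ^ backquote that appears in the code later in this thread, splicing in a computed key and value:)

(let ((k 'color) (v 'red))
  ^#H(() (,k ,v)))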
> Among other things, it makes writing interpreters and compilers really easy, and so inventing new languages and writing interpreters and compilers for them becomes [...] a part of day-to-day Lisp programming.
This matches my experience. The barrier between application and language/compiler development disappears, and instead you get a rich new feedback loop between the two. Overall complexity decreases (<-- a big deal), and many hard things become easy.
The effects of this compound over time, so what starts as a mere notational difference turns into a deeply different programming style, one I find so enjoyable that doing without it feels like going back into a straitjacket after escaping.
> The act of describing what you want the machine to do is interleaved with the machine actually doing what you have described, observing the results, and then changing the description of what you want the machine to do based on those observations.
That sounds really powerful and interesting, but wondering how often is that used in practice? I can imagine maintaining and understanding a large self-modifying program like that could get a bit complicated for a team.
I think he's describing interactive programming, not necessarily self-modifying programs. With that said, yes interactive programming is incredibly useful and I do it all the time.
It really shines on apps with a lot of state (such as a game) where normally you'd have to quit, change the code, recompile, run the app, and reproduce the original state as closely as possible. With lisp you can just replace the function you are debugging while the app is running.
That said, lisp can be used for self-modifying programs pretty easily. The idea that the default data structures it manipulates are the same as the code itself makes it easy to build data structures with the purpose of being evaled (not saying this is ever a good practice).
And essentially, macros are self-modifying programs that run before the actual program runs.
> I think he's describing interactive programming, not necessarily self-modifying programs. With that said, yes interactive programming is incredibly useful and I do it all the time
I do it too in python via ipython. Maybe what the author meant is "Why exploratory programming using a REPL is great".
Like any powerful technology, Lisp's interactivity and dynamism can be used for good or it can be used for evil. Yes, it takes a little discipline to keep things from spinning wildly out of control. But it's well worth the effort.
The powerful idea is the ability to modify the program as it is running rather than writing a self-modifying program which indeed would be much more complicated.
A good example would be the Emacs text editor. You can use it to modify the source of Emacs itself while it is running. You never need to reboot it to test new code because you can interactively evaluate new functions and replace existing ones.
As for working in teams, usually each developer runs their own REPL instance isolating them from the changes made by other developers. You can still pull a coworker's commit and evaluate it into your running instance without even restarting the program under development.
I think Ron was referring to the development process, rather than the finished program. The way Lisp languages and environments handle REPL-based interactive development is significantly more powerful and flexible than e.g., Ruby, Python, or even Haskell.
Modifying a running program to make a change as you develop it is an incredible speedup, relative to (recompiling and) restarting all the time, and even relative to reloading whole changed files.
One issue with many coders not getting Lisp is that open source environments fall short of classical Lisp environments like LispWorks and Allegro CL 9.0.
Having used the old Smalltalk and Oberon environments, sadly not Lisp ones, I really think many still don't get it.
(I have a subtle optimization for S-expression syntax)(I am surprised nobody ever thought of it)(When S-expressions are in a sequence use an extra (special) delimiter plus the regular token separator to separate expressions)(Maybe use dot? (period I think some call it))
Like so. I think it could catch on. And you get rid of so many round bracket block delimiters (at least for S-expressions on the same level. for nesting you obviously need them) that it makes reading a lot easier. Also make the language modal and have the default be the indicative mood. Maybe replace "." with "?" for interrogative mood? Maybe elide ".)" to ")" as another optimization?
(define-record-type omap-entry
make-omap-entry key item next prev.
omap-entry?,
key omap-entry-key set-omap-entry-key!.
item omap-entry-item set-omap-entry-item!.
next omap-entry-next set-omap-entry-next!.
prev omap-entry-prev set-omap-entry-prev!)
(To mark `omap-entry?` as being a value, not the S-expression `(omap-entry?)`, I decided to place a comma after it instead of a period. The alternative is requiring no punctuation and making newlines significant.)
Well, I suppose it does look better.
However, the idea is of limited applicability. While looking through the example project (https://github.com/axch/test-manager), I had trouble finding some code where this would actually be useful – most code has too much nesting.
That’s why I prefer another solution for removing excess parens from Lisp syntax: making whitespace significant. It improves the syntax in cases where your periods would help, and it applies in additional cases as well.
“Sweet-expressions” (http://readable.sourceforge.net/) is an implementation of that. Here is the above code run through the `sweeten` tool to convert it to sweet-expressions:
define-record-type
omap-entry
make-omap-entry key item next prev
omap-entry?
key omap-entry-key set-omap-entry-key!
item omap-entry-item set-omap-entry-item!
next omap-entry-next set-omap-entry-next!
prev omap-entry-prev set-omap-entry-prev!
Absolutely no parentheses necessary, while still preserving homoiconicity. You don’t even have to remember the `)` at the end of the last nested line. And I chose this example code to look good with your idea – when the code has more nesting, sweet-expressions look even better.
About the interrogative mood `?` you describe: it might simplify `if` statements. But that would require removing the convention where boolean-returning functions have a name ending in `?`, such as `omap-entry?` in the example. It’s a tradeoff.
Sweet-expressions is a cool idea. Was it Python that started the whitespace-is-significant trend? I'm wondering why someone doesn't create a Scheme or a Lisp with this as valid syntax out of the box. I think you'd still need to support regular S-expressions though, right?
> S-expression syntax is a well-designed serialization format
I must press, well-designed according to what measure? OP presents the well-designed serialization format as a strict positive instead of the truth: a tradeoff compared to other unnamed qualities.
I haven't been here in a while now but when Ron shows up in the mainstream that's reason enough.
Why Lisp ?
Definitely the reasons he points out.
I worked at a large e-commerce retailer for a few years, one that you might have heard of or even purchased something from. I had the experience of building a few systems in Lisp and also in a few mainstream imperative languages. It was tough and really lonely even though I had real success.
With Lisp I could do things that whole teams could only dream of in Blub and in much shorter periods of time.
The problems weren't really technical but more psychological and social when it came to Lisp. Big problems. I wasn't very successful in resolving many of them.
I still use Lisp here and there but have been drawn to the ML family in the past few years. Strong static type systems got a hold of me for better or worse.
All languages suck, it's just the degree that differs.
For me, I think the most important reason for working with lisp and its variants is that it allows you to (and you should) code from the middle.
When I first started programming, and for a long while, I'd think top down. What do I got? Then, how can I iterate through that to do the operations I want and then build up what I need?
With lisp, the ability to reuse code in closures is so easy, you can afford to think a different way. What are the most basic, clean and simple operations I need to do? (not necessarily in the order I need to do them) This includes conversion, summing results, iterating through a structure, etc.
After writing functions for those, you can quite easily plug them all together without much thought.
You end up with code that is non-redundant, clear, reusable, easy to debug, and flexible.
There are more things wrong with XML than the redundant multi-character closing delimiters. The big problem is the "M" in "XML", which stands for "Markup". The driving metaphor is that you start with a bunch of text, then add tags to mark it up, meaning to indicate what the internal structure is.
The problem with this is that the stuff between the tags is always and only text. There's no notion of a token. In Lisp, 2 and "2" are different things: the first a number, the second a string. In XML, that distinction is not in the file; it's only in the schema.
A version of this problem shows up even in your example. Shouldn't you have written this?
Ah, you say, but we understand that whitespace around the body of each element is to be ignored. Okay, but where does that information live? I don't even know if that can be stated in the schema; it's up to the app consuming this stuff to know that. Supposing it does -- then, what if I wanted a set containing the string " abc"?
See what I mean? The driving metaphor of "markup" is fundamentally broken. A better metaphor is that of a human-readable serialization format for trees -- such as S-expressions or JSON.
"[T]he problem [XML] solves is not hard, and it does not solve the problem well." -- Siméon and Wadler [0]
There are certainly good reasons why lisp is the way it is. I dislike reading that style of code, though. I don't like how you have to read it in a strange sort of top-to-bottom-but-also-inside-out way (really this happens in all languages but it's especially bad in lisp because there are no infix operators and so forth), and of course the common gripe about all the parens.
And to be honest, I am not really interested in how hard it is to write a compiler for the language I use.
Clozure CL is easy to install/run on just about any desktop platform (windows included). On *nix, SBCL is a good choice.
I've heard good things about MOCL but haven't tried it myself, and I know getting ECL working on mobile is an uphill battle (but achievable if you have the time). ECL switched maintainers recently, so maybe mobile is something they will focus on in the future.
Seconding the Clozure CL recommendation. It supports threads on Windows, which really expands your options if you want to do web development in Common Lisp.
I run Linux on my web servers but develop from a Windows desktop. CCL works great on both.
Fourthing CCL - it's stable on Windows and fast enough for writing games :).
Also, it's the best free CL for targeting Raspberry Pi - unlike SBCL, CCL supports native threads there, and with the weak but quad-core processor of RPi2, the difference is very noticeable.
I'm surprised how many Windows-based CL devs there are. I am too; I use SBCL, and it works pretty well, although I have stumbled upon strange Windows-only bugs with it.
The only downside to Clozure CL that I found was the fact that it requires SSE2 instruction support from the processor. There are still some processors around that don't support that -- which can be a bummer if you want to use Clozure CL on one of those machines. Unfortunately, one doesn't always have the option of upgrading the hardware to get around that.
I don't think the developers have worked around that, though I have seen some conversation about it in the past. My limited Google searches on the subject today didn't suggest that anything has changed since the last time I looked at it.
Edit:
Oh, duh, I already said all of this about 4 1/2 years ago:
Another issue I've had is that you can't use 32-bit libraries from a 64-bit image, even with a multilib (in linux or windows). In other words, if your DLLs/SOs are 32bit, you have to use the 32bit CCL executable.
I'm hacking on TXR these days which contains a Lisp dialect called TXR Lisp. Lisp hacking and research is fun, in particular if you have the freedom of your own dialect.
The current public release TXR Lisp still has an embarrassingly shoddy implementation of "places": expressions which not only evaluate, but serve as assignable locations. I implemented most of the place-manipulating operators (set, inc, push, ...) as special forms, and only a small repertoire of places is hard-coded in the interpreter. (And that, by the way, creates one of the impediments against going to a compiler.)
I have thrown all that out and created a macro-based system for places. Instead of copying the Common Lisp one, I drummed up something new.
Common Lisp lets programmers register assignment places as "setf expanders" which return five values: lists of temporary variables, access and store forms and such. (Google for "CLHS get-setf-expansion").
I have taken a somewhat different, though necessarily closely-related approach. The creator of a new place provides two or three functions. These functions take a place and a piece of code, and wrap it with macrolets for accessing the place. These macrolets are given names that the caller specifies. Of course, I have a macro for writing these functions as a capsule.
The three functions provide a way for handling the access and update of a place, a simple store to a place without an access, and the deletion of a place.
This third item is new. In TXR Lisp, there is a del operator: a place can be vaporized. Not all places support deletion, but many do. For instance you can do (del [some-vector 3]) to delete an item from a vector. Or a range. Hashes support deletion. (del (car x)) works: what happens when you delete the car of a cons cell is that the cdr is popped, and the popped item is transferred into the car. Thus if x holds (1 2 3), the (2 3) part is popped to produce (3) and the 2 moves into the car, resulting in X holding (2 3). (I just realized that we need place insertion to complement place deletion!)
Here is how a (vecref <vector> <index>) place is defined:
The rlet macro is something I just invented days ago. It is like let, but it detects and optimizes cases when a symbol is bound to another symbol, and turns these into symbol macrolets. Demo:
$ ./txr -p "(sys:expand '(rlet ((a b) (c (form))) (list a b c)))"
(let ((c (form))) (list b b c))
Look, the (a b) disappeared in the form expansion, and b was replaced by a. That's because the macro expansion is this:
$ ./txr -p "(macroexpand '(rlet ((a b) (c (form))) (list a b c)))"
(symacrolet ((a b)) (let ((c (form))) (list a b c)))
rlet also propagates simple constants, as shown by the variable d:
$ ./txr -p "(sys:expand '(rlet ((a b) (c (form)) (d 42)) (list a b c d)))"
(let ((c (form))) (list b b c 42))
Of course, let can only be replaced by rlet in certain circumstances, when we don't rely on the binding actually providing storage semantics. Above, we can only replace a with b, if a is not assigned anywhere. And since d is replaced with 42 blindly, it cannot be assigned.
Obviously, rlet is something you don't need if you have an optimizing compiler which eliminates useless temporary variables and propagates constants! I just wanted nicer expansions from the places system.
So anyway, the defplace creates three functions which provide advice for how to correctly access, update and delete a place. They wrap this advice, which takes the form of local functions or macros, and any additional lexical definitions they need, around a supplied piece of code, and that code can refer to that.
Here is how this advice is used, in the implementation of the swap operator. This shows you the real beauty of the system, because we can write a completely robust swap, but we don't have to use anything resembling CL's get-setf-expansion and its five cumbersome return values:
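(The listing is missing from this copy. A plausible reconstruction, modeled on the SHIFT macro quoted at the end of this comment and on the expansions shown below; details may differ from the real TXR source:)

(defmacro swap (:env env place-0 place-1)
  (let ((tmp (gensym)))
    (with-update-expander (getter-0 setter-0) place-0 env
      (with-update-expander (getter-1 setter-1) place-1 env
        ^(let ((,tmp (,getter-0)))
           (,setter-0 (,getter-1))
           (,setter-1 ,tmp))))))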
What is the env doing there? We have to pass the macro-time environment down, because places can be macros! For instance what if we do (swap a b), but a is actually a lexical symbol macro for (car x) and b is something similar: the with-update-expander macro will take care of expanding these places, so it fetches the correct advice for the expanded places.
So note how we have a completely naive-looking piece of code, a three-point swap: all we did was generate some gensyms, use a couple of macros to obtain the place-accessing expertise from some generated lexical helpers, and then write a straightforward swap.
$ ./txr -p "(sys:expand '(swap a b))"
(let ((#:g0044 a)) (sys:setq a b) (sys:setq b #:g0044))
$ ./txr -p "(sys:expand '(swap (car c) (cdr c)))"
(let ((#:g0044 (car c))) (sys:rplaca c (cdr c)) (sys:rplacd c #:g0044))
$ ./txr -p "(sys:expand '(swap 1 2))"
./txr: unhandled exception of type eval-error:
./txr: during expansion at string:1 of form (swap 1 2)
./txr: message: form 1 is not syntax denoting an assignable place
$ ./txr -p "(sys:expand '(swap [f x] [g y]))"
(let ((#:g0067 [f x])) (let ((#:g0056 [g y])) (let ((#:g0044 #:g0067))
(sys:setq f (sys:dwim-set f x #:g0056)) (sys:setq g (sys:dwim-set g y #:g0044)) #:g0044)))
Note the [] notation is completely generic, so f and x could be lists (or vectors, hashes, strings). In this case, f and x are themselves expected to be places. Thus [f x] must be a place, and f also must be a place. The reason is that if we manipulate a list, we may have to assign the new list to the old variable. In general, sys:dwim-set could cons up a new list.
So in summary, Lisp is always fresh and keeps me interested in hacking. Even old things (solved problems like assignment places) can be looked at in a new light.
Check out this recursive implementation of shift, which is like Common Lisp's shiftf:
(defmacro shift (:env env . places)
(tree-case places
(() (eval-err "shift: need at least two arguments"))
((place) (eval-err "shift: need at least two arguments"))
((place newvalue)
(with-update-expander (getter setter) place env
^(prog1 (,getter) (,setter ,newvalue))))
((place . others)
(with-update-expander (getter setter) place env
^(prog1 (,getter) (,setter (shift ,*others)))))))
$ ./txr -p "(sys:expand '(shift a b c d))"
(prog1 a (sys:setq a (prog1 b (sys:setq b (prog1 c (sys:setq c d))))))
> The expressive power of Lisp has drawbacks. There is no such thing as a free lunch.
This line doesn't hold on its own but as the ending of the article it was very powerful.
I had heard that extra power can make it easier to do things wrong, but the idea that extra power can cause problems even when you do things right is fascinating.
Why Lisp? Because the developers that know and love Lisp are normally highly above average and get things done. For sure, there is a much smaller pool to choose from, but once you are far from what is currently hip, you get developers that actually care.
Nah, Lisp is nowhere near the most minimal way to serialize code. It's plaintext, so it's denormalized. Something akin to bincode is the most minimal. But that requires a non-plaintext editor...
There are whole books which answer that question. The one I am aware of is pg's "On Lisp", which explains everything clearly without any "Haskellish or monadic mysticism".
There is also the famous essay "Beating the Averages", and this very site as a walk of that talk.
There are also SICP and PAIP and Norvig's Lisp Style Guide.
I didn't "get" macros until I read a footnote in the (freely available) book Practical Common Lisp. In chapter 7, it introduces the `dolist` macro.
Buried in a footnote is this:"DOLIST is similar to Perl's `foreach` or Python's `for`. Java added a similar kind of loop construct with the 'enhanced' for loop in Java 1.5, as part of JSR-201.
Notice what a difference macros make. A Lisp programmer who notices a common pattern in their code can write a macro to give themselves a source-level abstraction of that pattern. A Java programmer who notices the same pattern has to convince Sun that this particular abstraction is worth adding to the language. Then Sun has to publish a JSR and convene an industry-wide "expert group" to hash everything out. That process--according to Sun--takes an average of 18 months. After that, the compiler writers all have to go upgrade their compilers to support the new feature. And even once the Java programmer's favorite compiler supports the new version of Java, they probably still can't use the new feature until they're allowed to break source compatibility with older versions of Java.
So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years."
http://www.gigamonkeys.com/book/macros-standard-control-cons...