Which definition of "macro" are you using, if you equate lazy evaluation with a kind of "hardcoded macro"?
More importantly, "it's hard to make a convincing argument that [these hardcoded macros] provide all the expressivity you would conceivably ever need" doesn't convince me. A better argument would be to produce a compelling example where these "macros" are not enough. And by compelling, I mean something that cannot be elegantly produced in a non-Lisp language.
> Which definition of "macro" are you using, if you equate lazy evaluation with a kind of "hardcoded macro"?
The definition of "macro" that is equivalent to "a phrase structure rule in your functional language's compiler": something which takes the input structure and generates whatever code brings about the lazy semantics (which is not inherent in the x86 instruction set or what have you).
> example where these "macros" are not enough
An example is any instance of language extension where, say, the maintainers of the compiler for a functional language have to ship a new compiler to the users to get them to use the new feature.
Functional languages with lazy evaluation are not finished, right? They are developed actively.
> An example is any instance of language extension where, say, the maintainers of the compiler for a functional language have to ship a new compiler to the users to get them to use the new feature.
> Functional languages with lazy evaluation are not finished, right? They are developed actively.
Agreed, they are actively developed.
Correct me if I'm wrong, but you seem to be saying "many interesting features can be implemented in a Lisp language with a macro, therefore Lisp programmers don't need to wait for a new release of their programming language when they want these features".
I'm not convinced this is the case, or rather, that this is such a relevant case. Aren't Lisps actively developed too? Why is there such a multitude of Lisp implementations? Is any relevant real-world feature truly implementable with Lisp macros? Why is that more convenient than implementing them with functions in languages with lazy evaluation?
Maybe I'm falling prey to the Blub paradox. I'm only passingly familiar with Racket, thanks to a course on Coursera where they introduce macros and explain why they are so powerful in Lisp. But I still don't see the compelling "killer example"...
> Correct me if I'm wrong, but you seem to be saying "many interesting features can be implemented in a Lisp language with a macro, therefore Lisp programmers don't need to wait for a new release of their programming language when they want these features".
That is correct. However, macros have to translate to something. From time to time you need an upgraded something for some macros to be feasible.
E.g. it's hard to "macro your way" into having first-class continuations, if they aren't in the substrate.
That is to say, without the macro treating all of its arguments as a self-contained language.
You usually want the code inside the macro forms to interoperate smoothly with outside code, for instance by making lexical references to surrounding bindings.
This is the fuzzy limit. Macros have to write stuff in some language, which is no longer macro-expandable. That language has to be reasonably expressive in its semantics for what the macros want to do.
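To make that concrete, here is a minimal sketch in Common Lisp (the names and details are mine, purely illustrative) of lazy evaluation done as a macro over a strict substrate. Note that it only works because the substrate already provides closures and assignment; the macro can't conjure those up by itself.

(defmacro delay (expr)
  ;; Wrap EXPR in a thunk that evaluates it at most once.
  ;; (A real version would use gensyms to avoid capturing user variables.)
  `(let ((forced nil) (value nil))
     (lambda ()
       (unless forced
         (setf value ,expr
               forced t))
       value)))

(defun force (thunk)
  ;; Run the thunk produced by DELAY.
  (funcall thunk))

;; (force (delay (expensive-computation)))  ; evaluated only when forced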
So you think of a feature you'd like in your language. Let's consider the process you'd have to go through to use that feature.
In most languages, you have to write the maintainers about the feature. You then have to convince them that the idea is good -- and this is by no means guaranteed; if they think your idea is bad, you're out of luck, and can never use the feature you'd like. Then someone has to implement it. Then you have to wait for a release containing the feature. Then you have to test your code with the new version. Then you have to upgrade all your deploys, development environments, and testing machines to the new version. Then you can use the new feature.
In a Lisp with macros, you have to implement the feature. Then you can use it.
This is why macros are useful. You get to modify the language you're using, while cutting the maintainers of that language out of the loop entirely.
Yes, I understand that argument, but I still find it unconvincing.
Some features you just can't implement with macros, you need a change in the "substrate" (see kazinator's answer below). For the rest, I simply don't see how they are language-level features. They are just things you need for your project, in which case, why can't you simply implement them as a library?
Even if you ignore the above, there's probably a good reason why the language designers don't want to approve your language-level feature. Yes, sometimes it's simply red tape or politics, but it can also be that you -- the applications programmer -- simply aren't well-versed in language design and can't think past your particular use case :) This wouldn't mean the feature is worthless (after all, you need it!) but maybe it's not meant to be a language-level feature, but instead... a library function, which you can write in most general purpose languages.
But the enhanced for loop was only added in 2004, with Java 5!^2 So for almost 10 years, you had to iterate over collections manually. How would you implement this as a library? Well, you could write a function that takes the collection and a block of code to run on each element.
Would this work in a lambda? Well, if the language you're using has real closures, yes -- but does it? Do you know offhand? With a macro, your code will work.
> Even if you ignore the above, there's probably a good reason why the language designers don't want to approve your language-level feature. ... it can also be that you -- the applications programmer -- simply aren't well-versed in language design and can't think past your particular use case :) This wouldn't mean the feature is worthless (after all, you need it!)
That seems like evidence for my point -- macros let you build the language up for your own use case, not anyone else's. Without macros, your choices are "either everyone can use it, or no one can use it". Macros let you have a choice of "well, I can use it, even if no one else wants it, if I find it useful."
Macros can be viewed as libraries that act on the language itself. There isn't a difference between language-level features and "library functions" in a language with macros.
[1] Without getting into a Turing Tarpit. We're talking about using things in easy ways, not what is technically possible but ugly and kludgy.
Agreed about your Java example. However, for the purpose of this discussion, let's assume we're talking about modern, well-designed languages without ugly kludges and with access to nice features such as lazy evaluation and real closures.
> Macros can be viewed as libraries that act on the language itself. There isn't a difference between language-level features and "library functions" in a language with macros.
I simply don't see why this is such a big deal. I need to see a real-world example (which, understandably, might be difficult to explain in a HN thread) of something that can be achieved with Lisp that is not reasonably achievable in elegant ways in other, non-Lisp modern languages. Again, let's assume we both understand the Turing Tarpit.
By the way, I don't want to sound dense. I understand some features you only "get" when you use them. What little I've seen of Lisp (Racket, actually) seemed very interesting! It's just that I can't get that enlightened moment where I see why Lisp macros are that important in the real world. This is important to me because macros are one of the key features Lispers use to try to convince other programmers Lisp is awesome. And I can see they are interesting and useful; I just fail to see why they are such a big deal that they set Lisp apart and that, for example, Paul Graham would call Lisp his "secret sauce".
Many have said that the real benefit is that you wind up turning Lisp into the language that is perfect for your domain.
When you start your project, you don't know enough to design the perfect language for your domain. You start coding in Lisp, and you begin to uncover patterns that express your domain. Eventually, you find your way towards building a small set of macros that beautifully, expressively capture your domain.
Look for where Paul Graham talks about "bottom-up" programming versus "top-down" programming, and you'll find what he has to say about this. He says you do both in Lisp. Bottom-up is "changing the language to suit your problem."
One example that helped some Java friends understand is passing blocks of code while still having it look like ordinary code. Imagine, instead of try/catch/finally, a transaction/commit/rollback in Java:
transaction {
    // everything in here is in one transaction
} commit {
    // do stuff if the commit is successful
} rollback {
    // do stuff if we rollback
}
All the try's and catch's can be stuffed into the macro. It can be made to nest transactions within transactions.
For all practical purposes, you can't add that to Java. You'll always have to wrap up your transactions in boilerplate.
In the Lisp equivalent of what I want, the resulting code would look like this:
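;; (A sketch -- the macro name and the clause syntax are illustrative,
;;  just one way it could look.)
(with-transaction
    (do-the-work)            ; everything in here is in one transaction
  :commit (celebrate)        ; runs if the commit succeeds
  :rollback (clean-up))      ; runs if we roll back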
There are languages where I could define three blocks of code and pass them:
transaction(stuff, commit-stuff, rollback-stuff)
But that separates their definitions from their implementations.
How could you write the Lisp transaction expression in another language so that it looks like it's part of the language? (serious question, CL is the only language I use capable of being that close) Maybe I could torture Ruby to come close, but it would be far more difficult than the macro I had to write for Lisp.
>For all practical purposes, you can't add that to Java. You'll always have to wrap up your transactions in boilerplate.
Aren't you wrapping the lisp code in boilerplate when doing the macro too? This appears to be the same as your other example. Java has lambda expressions (since Java 8) that could do this.
If you are wrapping it in a macro, how is it different than wrapping it in a method? In Java 8, with lambda expressions, you can write code to wrap your transaction example to get something exactly equivalent to
>transaction(stuff, commit-stuff, rollback-stuff)
and you would be keeping the definition in the implementation. I also seem to be missing why it's important to be keeping the definition and implementation together.
So there's some boilerplate that needs to happen. With macros, you write the macro to insert the boilerplate, and then you never think about it again. You don't write it, you don't read it, it's not in the way. Without macros, you have to write the boilerplate every time you write the code. You have to read the boilerplate every time you read the code.
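For instance, roughly (the database functions are placeholders, and a real macro would use gensyms for TX and E):

;; Without the macro, every call site carries the plumbing:
(let ((tx (begin-transaction db)))
  (handler-case
      (progn
        (do-the-work db)
        (commit tx))
    (error (e)
      (rollback tx)
      (error e))))

;; With the macro, the plumbing is generated for you:
(with-transaction (db)
  (do-the-work db))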
Which do you find more readable? Notice how there's no boilerplate in the macro version.
> This appears to be the same as your other example. Java has lambda expressions (since Java 8) that could do this.
What are the odds that Java 8 has provided everything you could want out of Java? Macros let you add things in a better way than you could otherwise get them. Look back to my prior example of Java 5's expanded for. You see how useful that was? If Java had had macros, people wouldn't have had to suffer through nine years without the improved for loop.
> What are the odds that Java 8 has provided everything you could want out of Java? Macros let you add things in a better way than you could otherwise get them. Look back to my prior example of Java 5's expanded for.
Again, please understand we non-Lispers find this argument utterly unconvincing :) Java in particular is a terrible example: it's a language that for many years remained in the dark ages, and now it's finally getting modern features retrofitted into it, while at the same time attempting to keep some sort of backwards compatibility, and the whole process is very painful.
Let's all agree to stop talking about Java. Let's assume we all agree working in a language that until very recently didn't have lambdas is painful. And that it requires a horrific amount of boilerplate.
Let me re-throw your question back at you: what exactly can Lisp do, which has practical implications, that a modern language with lambdas, closures and lazy evaluation cannot accomplish in elegant ways? If you mention Java again, you lose :P
You seem to be concerned by the use of "Java". It's a fine example, and I think I've explained why very well. It also seems petulant to declare Java off-limits.
But let me try again. For _any given language_, there are things you may want that the language doesn't provide. Macros let you do that in an elegant way. The example I gave above -- based off LanceH's -- of transactions is a way where the macro-based solution is more elegant than non-macro solutions.
Here's another example. Arc, like any language, has a built-in way of setting variables. It's called `assign`, and can be used as follows:
arc> (assign a 3)
3
arc> a
3
But no one uses it. Instead, people use =. = is a macro that's provided with the language. Because it's a macro, that means that if it wasn't provided, you could write it yourself.^1
What's the benefit of = ? It lets you set values of more than just variables:
arc> (= my-table (table))
#hash()
arc> (my-table 'key) ;;look up the value
nil
arc> (= (my-table 'key) 'value)
value
arc> (my-table 'key)
value
Note that in the second prompt, we're attempting to look up the value of 'key in the hashtable my-table, and we see that there isn't one there. We then set it in the third prompt, and look it up again in the final.
This works on the concept of "places". The "place" we try to set in `(= (my-table 'key) 'value)` is the association of 'key inside my-table.
Why is this beneficial? If you know how to get a value out of a data structure, you can now set it. This code is extremely clean and understandable compared to a non-macro version. It exhibits the principle of least surprise, and it's obvious how to set other data structures in an elegant way.
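To make it concrete, here is a toy sketch (MY= is a made-up name, and this is not Arc's actual implementation) of how such a macro can look at the form you would use to read a value and generate the matching code to write it:

(defmacro my= (place value)
  (cond
    ;; (my= x 3)           -- a plain variable: expand to an assignment
    ((symbolp place)
     `(setq ,place ,value))
    ;; (my= (car cell) 3)  -- the head of a cons: expand to RPLACA
    ((and (consp place) (eq (car place) 'car))
     `(rplaca ,(second place) ,value))
    ;; (my= (cdr cell) 3)  -- the tail of a cons: expand to RPLACD
    ((and (consp place) (eq (car place) 'cdr))
     `(rplacd ,(second place) ,value))
    (t (error "Don't know how to assign to the place ~S" place))))

Common Lisp's setf is this idea done properly, and it's extensible, so user code can teach it about new kinds of places.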
You can't do this without macros.
[1] If your objection here is "but it comes with the language", you're missing the point.
I disagree that your example of Java is fine. It's not petulant to declare it off-limits, just as it's not petulant to declare COBOL off-limits. We are discussing features and extensibility of finer languages than Java.
> The example I gave above -- based off LanceH's -- of transactions is a way where the macro-based solution is more elegant than non-macro solutions.
Here we disagree. In your example of transactions, you didn't show that macro-based solutions are more elegant than non-macro-based solutions; you merely showed that Java before version 8 wasn't very good (and when someone replied "but we can do better with Java 8!" you basically replied "ok, but how do you know Java 8 is enough for some other unspecified problem?"). Your answer is unconvincing, especially since non-macro-based solutions to your example in other languages, such as Scala, are equally elegant to Lisp's, because Scala has support for first-class functions and closures. Let me preempt a "but how do you know that's enough for Scala?"... I don't know. Show me why it's not enough!
Re: your second example with assign and the = macro. I admit I don't understand it yet; I'll have to think some more about it.
edit: let me go back to this assertion:
> For _any given language_, there are things you may want that the language doesn't provide. Macros let you do that in an elegant way.
I find this problematic, for two reasons. First, we've acknowledged macros cannot solve everything; after all, there are new releases and multiple implementations of Lisp languages. What someone else said: "if it's not in the 'substrate', macros can't do it". Second, that macros let you do some (admittedly cool!) things doesn't automatically show that these same things cannot be accomplished in reasonably elegant ways in other languages. One thing doesn't imply the other!
>In your example of transactions, you didn't show that macro-based solutions are more elegant than non-macro-based solutions; you merely showed that Java before version 8 wasn't very good...
I think it wasn't clear what I was referring to. I was talking not about Java, but about a Lisp-style solution. Here it is with macros:
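;; (The database functions are placeholders, and a real version would
;;  use gensyms so TX and E can't capture the user's variables.)
;; The macro writes the boilerplate once, in one place:
(defmacro with-transaction ((db) &body body)
  `(let ((tx (begin-transaction ,db)))
     (handler-case
         (multiple-value-prog1
             (progn ,@body)
           (commit tx))
       (error (e)
         (rollback tx)
         (error e)))))

;; ...and afterwards you just write the code you care about:
(with-transaction (db)
  (insert-row db row)
  (update-balance db account amount))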
The macro-based solution -- which doesn't mention Java -- is more elegant. You don't need the lambdas if you use macros (why? Well, you don't always want to rollback, right?)
>...when someone replied "but we can do better with Java 8!" you basically replied "ok, but how do you know Java 8 is enough for some other unspecified problem?"
That's the point -- a language with macros is extensible in a way that languages without macros aren't. So unless you believe that your language happens to be perfect, having macros would make the language more powerful.
>Re: your second example with assign and the = macro. I admit I don't understand it yet; I'll have to think some more about it.
Feel free to contact me if there's anything else I can explain -- I'm probably going to forget to check this thread soon.
>...we've acknowledged macros cannot solve everything; after all, there are new releases and multiple implementations of Lisp languages.
I'm not sure why new Lisp releases show that macros are not useful. You could write anything in assembly, but other languages are still released. And design is important -- which sets of functions and macros should be provided with a language? Does having a release of Scala that includes functions mean that there's no need for user-defined functions?
>Second, that macros let you do some (admittedly cool!) things doesn't automatically show that these same things cannot be accomplished in reasonably elegant ways in other languages. One thing doesn't imply the other!
It doesn't mean that, no. However, I don't see elegant ways to do this kind of thing in other languages. If you can show me some, I'd be interested.
The problem that I have with Lisp macros is that the elegance you gain at the syntax level is effectively a tradeoff against pragmatism when other people read and use the code. Java was developed with parts of C++/C as inspiration, and parts of that language were left out. In particular, operator overloading was left out (which can be viewed as a very restricted example of modifying the language), presumably because it's not immediately obvious when viewing the code that the operator isn't doing what you expect. While it's undeniable that macros make the syntax nice to look at, the same argument -- that Lisp becomes a new language as you write your program -- means that every separate codebase requires a lot more reading to understand, because you have to go through all the macros. New releases in most languages introduce new features (and standardize functions, fix bugs, etc.); as far as I can see, new releases in Lisp enforce a standard (common) base set to decrease the amount of work required in learning new codebases.
Yes, macros can make code very hard to read, if designed badly. Of course, so can functions, variable names, and program flow.
But Lisp with macros is very different from C++ with operator overloading. With C++ operator overloading, you only know if a given line has something you don't understand (that is, an overloaded operator) by looking at every other file in your project. With Lisp macros, you know that you're dealing with something new because you don't recognize the first token in the s-expression. You might not know it's a _macro_ rather than just a _function_, but you know it's something you need to investigate.
Basically, in Rumsfeld's terminology, an overloaded operator is an unknown unknown, but a Lisp macro is a known unknown. A macro's behavior may be confusing, but its existence isn't. And that's a very big difference.
Agreed, I'm still unconvinced for the same reasons. In Scala or Haskell (or any language which supports first-class functions and lazy evaluation, I guess) the "transaction" example is easily done.
DOLIST is similar to Perl's foreach or Python's for. Java added a similar kind of loop construct with the "enhanced" for loop in Java 1.5, as part of JSR-201. Notice what a difference macros make. A Lisp programmer who notices a common pattern in their code can write a macro to give themselves a source-level abstraction of that pattern. A Java programmer who notices the same pattern has to convince Sun that this particular abstraction is worth adding to the language. Then Sun has to publish a JSR and convene an industry-wide "expert group" to hash everything out. That process--according to Sun--takes an average of 18 months. After that, the compiler writers all have to go upgrade their compilers to support the new feature. And even once the Java programmer's favorite compiler supports the new version of Java, they probably still can't use the new feature until they're allowed to break source compatibility with older versions of Java. So an annoyance that Common Lisp programmers can resolve for themselves within five minutes plagues Java programmers for years.
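For what it's worth, the five-minute version looks something like this (MY-DOLIST is just an illustration; the standard DOLIST does a bit more, such as supporting a result form):

(defmacro my-dolist ((var list-form) &body body)
  ;; Expand into a call to MAPC with the body wrapped in a lambda.
  `(mapc (lambda (,var) ,@body) ,list-form))

;; (my-dolist (x '(1 2 3))
;;   (print x))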
One could write a macro that allows infix notation for arithmetic:
(arithmetics 1 + 2 - 3) = (- (+ 1 2) 3)
This kind of syntactic transformation is what macros enable.
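A minimal sketch of such a macro, assuming strict left-to-right folding and no operator precedence:

(defmacro arithmetics (initial &rest more)
  ;; Fold (op operand) pairs left to right:
  ;; (arithmetics 1 + 2 - 3)  ==>  (- (+ 1 2) 3)
  (let ((result initial))
    (loop for (op operand) on more by #'cddr
          do (setf result (list op result operand)))
    result))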