As much as I like Lisp and even pure functional languages (such as my own Fexl), I still like that old-school loopy, branchy, bracey, break-ish, drop-down to the iron, procedural feel of straight-up C and even Perl.
You won't get a language war from me, but there's a huge difference in "feel" between those two major language classes, and strangely, I kinda like both of them. Sure I can do some impressive reasoning in the functional realm, but even in the procedural realm I can do iron-clad reasoning and correctness-preserving transforms that (I hope) would make Dijkstra proud.
It's no coincidence they feel different. The Lisp family of languages was based on theoretical mathematical considerations (Lambda calculus), with implementation as an afterthought.
The Fortran family of languages was based on the hardware's capabilities, with theoretical considerations like expressiveness as an afterthought.
We are fortunate to have reached an age where we have CPU cycles and RAM to spare so that languages that were not designed with performance-related restrictions in mind are becoming practical for a wide variety of applications.
I concur. Fexl can whip combinators around so fast it makes my head spin. But I still like writing C, even if it's only to write a Fexl interpreter.
Consequently the whole purpose of Fexl is to be a thin functional "layer" on top of C. That way as a C programmer I can escape to functions to avoid all the gnarly horrors of memory management and such, and as a Fexl programmer I can escape to C to embrace the hard-edged bits and bytes. That way if you asked me whether I was writing the thing in C or in Fexl, I couldn't really give you a straight answer. ;)
That "while" loop you see there is essentially the only loop in the entire core of Fexl. (Obviously "plug-ins" can do their own loops.)
That loop keeps reducing the parent node as long as it's still of type_app. Then the loop ends and it reduces whatever the parent node has become as a result of the evaluation loop.
I'm a total Common Lisp whore but my code's still pretty procedural and loopy (although not bracey). I use iteration far more than recursion. As someone else already mentioned, the excellent ITER and LOOP macros take care of that.
Maybe it was just me, but back in the day when I was paid to write Lisp I was pretty fond of the Common Lisp loop macro. So my Lisp code was pretty loopy...
People don't seem to understand what the word "Domain" in "Domain Specific Language" means. It means this: http://domaindrivendesign.org/
Syntax extension (which in the case of macros is not about syntax at all, but about controlling when evaluation happens, something else that people do not understand about macros) is not about inventing a new, stupider way of writing for loops. It is about clearly expressing domain concepts.
For an article about "missing the point," the author manages to fail to understand both how syntax extension works and how it's used.
I'm not really sure why you bring up DSLs, unless you consider something like Python or Perl a DSL as well.
In that case: are you saying that, for instance, Perl is only useful for 80% of the cases for which that language is intended to be used?
Nope. You can "do it all" in any Turing-complete language. But you can often do it more elegantly with a DSL. Once a language reaches a certain threshold of power, making a DSL is indistinguishable from ordinary coding.
The point of the article was that creating a sufficiently powerful language, that doesn't offer easy ways[1] of extending the syntax, does not warrant the criticism that the language has 'too much syntax'. If anything, it demonstrates the power of being able to create syntax. I don't understand how your comment relates to that point.
[1] Of course you can "do it all" in any Turing-complete language, but theoretical possibilities are often much less interesting than practical impossibilities. In Python or Java, creating new syntax is so hard it might as well not be possible.
Exercise for the student -- find the contradiction here:
> The point of the article was that creating a sufficiently powerful language, that doesn't offer easy ways[1] of extending the syntax, does not warrant the criticism that the language has 'too much syntax'.
Evidently, you are as inexperienced with DSLs as the author.
> Of course you can "do it all" in any Turing-complete language, but theoretical possibilities are often much less interesting than practical impossibilities. In Python or Java, creating new syntax is so hard it might as well not be possible.
Insufficient knowledge is probably why you mistook the sense of the statement you are replying to by 180 degrees.
> Exercise for the student -- find the contradiction here:
You consider 'sufficiently powerful' and "doesn't offer easy ways of extending the syntax" to be contradictory. Does that mean you consider only Lisps 'sufficiently powerful'?
> Insufficient knowledge is probably why you mistook the sense of the statement you are replying to by 180 degrees.
Do you mean extending the syntax of Python or Java is easy? Otherwise I'm not sure what the knowledge is you think I'm lacking.
"So why not create syntax that catches 99% of the cases for which you intend the language to be used and abolish the ability to create further syntax?"
Because then I've got a syntax that works for 99% of the use cases you've already thought of, and that doesn't overlap very well with the use cases I have or that either of us will come up with in the future.
You can't expect people to be reasonable when it comes to language flamewars. Personally, I think weak advocacy is worse than no advocacy at all.

So why not create syntax that catches 99% of the cases for which you intend the language to be used and abolish the ability to create further syntax?
Although it's possible to create new syntax in Common Lisp via reader macros, I don't think Lispers recommend extending the syntax. But when you miss a language feature, instead of fighting against the language, you can implement it. I don't remember exactly who said it in one of the SICP video lectures, Harold Abelson or Gerald Sussman, but he said something like:

    Lisp is not good at solving a particular problem. What Lisp is
    good at is extending the language to solve a class of problems.

The problem with syntax is that it's difficult to use correctly with macros. It's no wonder that most introductory books on C warn against the pitfalls of using macros. Even the simple ones can bite you if you're not careful. Consider:

    #define DOUBLE(x) (2*x) /* should be (2*(x)) */
    #define MAX(a, b) ((a) < (b) ? (b) : (a))

Even the last one is not immune to multiple-evaluation problems if you try to evaluate MAX(a++, b). In Lisp you can avoid this by using gensym and local bindings. That being said, C macros and Lisp macros are completely different beasts, and I recommend reading ``On Lisp'' for advanced use of Lisp macros.
> The ability to create syntax comes at a cost and the existence of so many other programming languages shows one thing very clearly: not everyone is willing to pay that cost.
The cost of Lisp is that the syntax is all the same. So a conditional looks the same as an assignment statement, looks the same as a function call, looks the same as a plus operator.
Lisp syntax is bad, because many humans find it easier to scan source code if different constructs use different syntax (whether that is just because it is what they are used to, I don't know). But Lisp syntax is also good, because it allows macros.
You could get round this problem by having two levels of a language, one with C-like syntax which compiles to Lisp-like syntax (which may itself be compiled).
What you think has 'no' syntax is the syntax for s-expressions. S-expressions are relatively trivial, but not every s-expression is a valid Lisp form. S-expressions are a syntax for DATA.
The syntax of Lisp, the programming language, is described on top of s-expressions. Syntax is concerned with the structure of valid expressions of a language. So you have an additional layer of syntax which describes, on top of s-expressions, the syntax of Lisp.
There is some basic syntax:
* data like numbers, strings are valid Lisp programs
* function calls are valid Lisp programs. Function calls have a non-trivial syntax with rest, optional and keyword args, lambda functions, ...
* special forms like CATCH, FLET, IF, QUOTE, SETQ, ... all have their defined syntax
* macro calls, where the macro has to implement the syntax specified in the standard. Users can do with their macros what they want.
For example
(if (foo-p) (this) (that) (these))
is a syntax error, though it is a valid S-expression.
Similarly,
(loop for i below 10 (print i))
is also wrong. Correct would be for example
(loop for i below 10 do (print i))
So, what you may want is to remove the data syntax (s-expressions) from Lisp and replace it with some other layer of syntax that is not based on prefix parentheses-heavy s-expressions.
There are lots of languages that have different syntactical approaches. Lisp has its own, which makes it slightly unusual (the syntax is used to support the code-is-data paradigm), less popular, but also makes sure that it will find its users - those users who want a language with a flexible syntax (one that can be extended by the developer) on top of a simple data syntax.
People have been doing this for as long as Lisp has existed; the idea "we'll just use these s-expressions until we figure out the real syntax" goes back to the very beginning. If this were really a major issue, something would have been figured out by now. Instead, it's an idea that has great appeal to dilettantes but has never caught on with practitioners. Surely there is a reason for that?
I don't have much experience reading Lisp, but I think that some structures do stand out. The difference is that you look at the shape of code chunks rather than special characters on a line. For example, LET blocks with lots of assignments stand out to me because of the chunk of pairs at the top. DO blocks look similar but the increment functions separate them. LOOP stands out for having a lot of symbols and not many parentheses.
Aside from that, some Lisps have conventions (although not requirements) that symbols with common uses carry special characters to stand out more. For example, *foo* for dynamic variables, +foo+ for globals, foo! for destructive operations (in Scheme), foo? for predicates (Scheme and Clojure), and = for assignment (in Arc). The choice of what to emphasize is different, but it was a conscious choice on the part of the language creators.
Sexps are better than strings because they are more structured. It would be even better to represent code with separate data types for every construct instead of the relatively unstructured sexps.
s-expressions are an external syntax. You can create any data structure you want from them, given the right reader.
S-expressions are not INTERNAL to Lisp, only external. Thus s-expressions are defined on characters. Comparing s-expressions to strings makes no sense.
The representation of code is a different issue. You could read an s-expression and represent it internally as a string.
As I commented, get back to me when you've got a non-lisp syntax that doesn't result in bugs from precedence/associativity errors.
If infix was actually "natural", there'd be only one set of precedence/associativity rules and there wouldn't be bugs due to folks getting it wrong.
With very few exceptions, folks use parentheses to try to protect themselves from their ignorance of their language's precedence rules. (C++ has over 10 levels of precedence.) Even then, they occasionally get it wrong.
And, it's worse than that on teams because different folks have different knowledge (and illusions) of the precedence rules.
Barring the complaints about syntax and the assumptions made in the article, I would still say the one reason someone should learn LISP is that most likely one will have an entire career of writing logic the "other way". I am not disparaging the "other way", just stating that it is far more prevalent, and if you are learning something for personal growth, LISP will give you a more diverse perspective than a language that is an evolution of the "other way".
I concede that Lisp's notation probably isn't common sensical; and that learning it is a significant investment for many programmers, who are hardwired to quickly understand the conventional notation.
However, many important developments in math and science ran counter to common sense. In math, people with common-sense arguments resisted numbers which seemed "irrational", "imaginary" and "negative." (Supposedly, there were times when you could die over them.) Euclidean geometry seemed so obvious that people were scared to discuss non-euclidean ones. In science, even Newton had some self-ridicule about spooky action at a distance, like gravity:
"That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an Absurdity that, I believe, no Man who has in philosophic matters a competent Faculty of thinking could ever fall into it."
I did not mean to imply that Lisp's notation itself was against common sense - just that the article made a very good common-sense point about its expressiveness coming at a price that's often not worth paying, relative to the advantages it gives you over languages with less expressive but more goal-oriented syntax.