Complaints I’m Seeing About Common Lisp (2007) (danweinreb.org)
29 points by fogus on Oct 7, 2009 | 12 comments



I guess many of the complaints are from people trying Lisp in their spare time, especially the cosmetic ones (cadadaddr, rplacaca, the parens, etc.). If you have to learn a language for work or school, you just plow through those cosmetic nuisances in a couple of 8- or 16-hour days and get it over with. Maybe once in a while, in an idle moment, you may wonder over your cup of chamomile tea, "did they have to name the filter command 'grep', I mean really?" But you probably are not going to go trolling any internet forum about it. On the other hand, if you are learning a language for your own edification, then Lisp (and C, and the Unix shell, and OLE and Smalltalk and Perl) is forcing you to put up with obnoxious cruft on your own dime, so to speak. It's kind of like the difference between enterprise and consumer software. If you buy a piece of software (say, a game) for yourself, you are going to be ticked off if it's not perfect. On the other hand, if you buy some software for your company, a few rough edges are OK since you are not paying for it out of your own pocket.


I agree. My spare time is precious, and so are my spare brain cycles. If I have to spend a few hours at work researching and choosing a networking library, then fine. It's boring, but at least I'm getting paid. If I sit down in the evening to do some hopefully fun and/or stimulating hacking on a neat problem, and instead I end up researching and downloading libraries for functionality that most other languages have built-in, I chalk that up as an utterly wasted evening, and I blame the language.


Actually, lisp macros are potentially "bad" [1] not because you need to understand macros further down in the lexical environment, but because you need to understand macros further up. This makes your code lexically nonlocal.

To understand (f a b) without macros, you need only understand f, a and b. With macros, you also need to understand all sexps containing (f a b) since a macro might be rewriting (f a b) to something else entirely.
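To make that concrete, here's a minimal sketch (WITH-LOGGING and F are made-up names, not anything from the article) of a macro that rewrites calls to F appearing in its body:

    ;; A made-up macro that rewrites any (f ...) form in its body.
    (defmacro with-logging (&body body)
      `(progn
         ,@(mapcar (lambda (form)
                     (if (and (consp form) (eq (car form) 'f))
                         `(progn (format t "calling f~%") ,form)
                         form))
                   body)))

    (defun f (a b) (+ a b))

    ;; (f 1 2) on its own is just a call to F, but inside WITH-LOGGING
    ;; it becomes something else; you only find that out by reading *up*.
    (with-logging
      (f 1 2))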

It's a similar issue to the one with side effects. To understand a system with side effects or concurrency, you need to understand the runtime environment. To understand macros, you need to understand the lexical environment surrounding a function or macro call.

There are software tools to help (debuggers/macroexpansion), but they still make reasoning harder.
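For example, MACROEXPAND-1 shows you what a macro call turns into; the exact expansion of the standard WHEN macro below is implementation-dependent:

    ;; Inspect what a macro call expands to, without evaluating it.
    (macroexpand-1 '(when (plusp x) (print x)))
    ;; => (IF (PLUSP X) (PROGN (PRINT X)))   ; or similar, depending on your Lisp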

A past discussion which goes into more detail: http://news.ycombinator.com/item?id=656203

[1] Here, "bad" == "potentially hard to reason about". [edit: I put "bad" in quotes to indicate I'm not making a value judgment, but merely using the same language as the author.]


Macros violate the property that any subexpression can be evaluated and understood on its own. This is meant to be one of the benefits of functional programming. I agree here.

The counterpoint to this is that they don't break this property as severely as side effects do, because these violations can be algorithmically understood at compile time. While understanding macros is not a computable task in the general case, it is in the specific case where the program successfully compiles after only a finite amount of time.

This raises the question: is there some abstraction that can give us some of the power of macros with less violation of the above property? Scheme-style syntax-case macros and Ruby-style code block parameters help with shortening common idioms. Python-style compile mechanism subversion (metaclasses) and user-defined infix operators help with DSLs.

Nothing else is quite as powerful as the macro (pretty much by definition). We might be able to get something almost as good, and a lot cheaper in terms of semantic overhead.


I read the link you listed and the problem that you ran into. I've only had about a day's worth of Clojure programming and a year or two of Common Lisp, but I'm fairly certain that I wouldn't have fallen into your particular trap, the reason being that a quick glance at the docs would have told me that doto was a macro and what it did.

Of course you know that. Your argument is that in a purely functional language, all you need to know is the function call subtree where you are. The same block of code always evaluates the same way regardless of context. Which is neat.

However, your example raises the question: if you didn't know what doto did, why were you trying to modify what it was being passed? You clearly didn't know what any of the subforms did either; otherwise you would have noticed that the methods appeared to be called on objects that didn't have them. So yes, in Lisps, and in side-effecty languages, you can't treat code as algebra and refactor it without knowing whether the operators in your code are macros or functions, and what, in general, they do.

Except, then again, sometimes you can. In your case, you definitely could have. If you wrote your macro to generate exactly the code you were trying to refactor, you wouldn't have hit this problem. If you're changing the logic of the code, you need to be aware of how that logic works, but if you are just factoring out bits of code with macros, you just need to be faithful to the original, in which case it doesn't matter whether the macro is in another macro or contains global values. It should simply produce the same code and execute the same.
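To illustrate (with made-up names), a macro that merely packages up boilerplate expands to exactly the code you would otherwise write by hand, so the refactoring is behavior-preserving:

    ;; A hypothetical "faithful" extraction: the macro emits exactly the
    ;; code it replaces, so behavior is unchanged.
    (defmacro with-open-log ((stream path) &body body)
      `(with-open-file (,stream ,path :direction :output
                                      :if-exists :append
                                      :if-does-not-exist :create)
         ,@body))

    ;; Hand-written original:
    ;; (with-open-file (log "app.log" :direction :output
    ;;                      :if-exists :append :if-does-not-exist :create)
    ;;   (format log "done~%"))

    ;; Factored through the macro -- expands to the same code:
    (with-open-log (log "app.log")
      (format log "done~%"))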


So in your scheme (no pun intended) "good" means "preserves orthogonality and is easy to reason about."

Most of the problems mentioned in the article have been addressed in Smalltalk. (Not just the ones about macros, but also the ones about features and supporting standards. Other problems, like the minimalist precedence rules (all math operators have the same precedence), come with the language.)


I put "bad" in quotes to indicate I'm using the same language as the author of the article. I like macros (and side effects) in spite of their dangers. I just think lisp advocates are being simplistic when they suggest macros are not more dangerous than functions (as the author of this piece does):

"...in order to understand function A that uses macro B, of course you have to know what macro B means, but in order to use function C that calls function D, you have to know what function D does. Big deal."


Macros are a power tool. Heck, pointers are a power tool! You can do fantastic stuff with them. You can also create a big mess, fast! Doesn't mean we should live without electric drills, just use them intelligently.


You seem to be arguing against a claim I didn't make. I'm not against macros or powerful language features. I simply argued against downplaying their dangers.

Pointers are also a dangerous power tool. I would have objected similarly if a C advocate said this:

"...in order to understand function A that acts on char * B, of course you have to know what where char * B points, but in order to use function C that references const string& D, you have to know what D contains. Big deal."


No arguments, just an observation. Sorry if it seemed like I was disagreeing.


That was 2 years ago; this is TODAY. The situation has changed.

I am unfit to teach Dan Weinreb about Lisp, but let me silence others who are ready to jump on the Lisp wah-mobile.

http://www.cliki.net/index

---

Streams (user defined)

Gray Streams, simple streams.

Threads and locking

Bordeaux Threads

Modern networking: e.g. sockets, TCP and HTTP client and server, URL’s, email, etc.

Usocket, iolib, hunchentoot, drakma, puri, and mel-base.

Web Services (WSDL etc.)

Friends don't let friends use big-W Web Services. Use s-xml-rpc, cxml-rpc, cl-json-rpc or yason-rpc.

Relational database access

CLSQL, Postmodern, CL-RDBMS, PlainODBC.

Persistence (Lisp-friendly)

CLSQL, Postmodern, Elephant, CL-STORE, CL-SERIALIZE, ManarDB, Rucksack, Cl-Perec, Cl-Prevalence.

Meta-object protocol for CLOS

Closer to MOP.

System definition facility

ASDF, or wait for Fare's XCVB.

Other general-purpose access to the operating system’s facilities

CFFI, Osicat, Iolib.

XML

CXML.

Math

Define "math".

Graphics

Define "graphics"; hint: everything from CL-OPENGL for realtime 3D graphics to Skippy, for generating GIF, to gordon and cl-flash for generating SWF/Flash files, to CL-PDF for generating PDF, and Lispbuilder-SDL.

GUI frameworks, platform-independent

CLIM is our standardized GUI, but I use lambda-gtk to build Win32 and Linux GUIs with CLISP, LispWorks and SBCL. Failing that, there is LTK.

Text manipulation

Define "text manipulation".

High-performance (asynchronous) I/O

Iolib.

Access to printers

CFFI is your friend here. There is no portable implementation of the Lisp Machine HARDCOPY-FILE, but this is not stopping anyone from printing from Lisp. LispWorks has a print-command special variable.

Internationalization

Internationalization is a process, not a tool. I write "Arabic" code in Lisp.

Unicode strings

Natively supported by all major Lisps now: CLISP, SBCL, CMUCL, Franz, LispWorks, and Clozure CL.

Generating HTML

CL-WHO, HTML-TEMPLATE; there are pretty much as many HTML-generating things in Lisp as there are Lispers doing web applications. Everyone wrote one, but CL-WHO is the best, imo (there's a short sketch of it after this list).

X Window System

You said "GUI" above, and a few times you hinted at portability. Anyway. Every Unix Lisp supports CLX. There is Portable-CLX if you want to play with it. A whole X window manager was written in Lisp, stumpwm.

Foreign function interface

CFFI.

Regular expressions

CL-PPCRE.
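To give a flavor of two of the libraries above (a rough sketch, assuming they are already loaded, e.g. via ASDF):

    ;; CL-PPCRE: Perl-compatible regular expressions in pure Lisp.
    (cl-ppcre:all-matches-as-strings "\\d+" "a1 b22 c333")
    ;; => ("1" "22" "333")

    ;; CL-WHO: HTML as s-expressions.
    (cl-who:with-html-output-to-string (s)
      (:ul (dolist (lib '("usocket" "bordeaux-threads" "cl-ppcre"))
             (cl-who:htm (:li (cl-who:str lib))))))
    ;; => "<ul><li>usocket</li><li>bordeaux-threads</li><li>cl-ppcre</li></ul>"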


Even before I clicked on the comments, I expected a response from you debunking the points in the article. :-)



