Hard disagree. Which do you find more readable?

(plus (minus (div b (times 2 a))) (times (div 1 (times 2 a)) (sqrt (minus (pow b 2) (times 4 a c)))))
or
plus(minus(div(b, times(2, a))), times(div(1, times(2, a)), sqrt(minus(pow(b, 2), times(4, a, c)))))
APL just has a small handful of symbols with really simple definitions and, heck, you already know what +, -, ×, and ÷ mean. Scheme actually has a ginormous dictionary by comparison, and in practice you do need to read the manual anyway, so discovery is a moot point.
APL is actually simpler in that regard in practice, and the gains you get in readability from the symbols are just like above. Better yet, though: for which of the above do you immediately think, "that 1/(2a) can be factored out"? Oh, and here's APL for the above:
(-b÷2×a) + (1÷2×a)×((b*2)-4×a×c)*1÷2
Quickly empowering your brain to spot high-level patterns across your codebase like that is a huge strength of APL.
Converting math into code is extremely easy. Converting code back into math is the thing to optimize for. The compiler can figure out how to optimize the machine code for you, but the reader needs to be able to work backwards.
The thing that array languages get wrong is that code is read more often than it is written, and repeating yourself is worthwhile in order to make it easier to understand for the next person who looks at it. Compilers are extremely good at recognizing patterns in numerical code and optimizing them. Humans are really bad at recognizing patterns from unique sequences of characters.
> “The thing that array languages get wrong is that code is read more often than it is written”
It seems to me that array language designers understand this, but they make a different trade-off by prioritizing density.
A screenful of an APL-family language can contain a program that might take thousands of lines in a Java-style language. There’s power in being able to see the whole thing at once.
Would a passenger jet be easier to use if the cockpit only had an iPad and the pilots had to navigate through UI trees to find available actions? It would certainly be more discoverable to the ordinary person, but the pilots would probably be deeply unhappy with this design. In this analogy, an APL-style program can be like a "cockpit" full of instruments that you designed yourself for that exact job.
Except you sneak in alien symbols: +, - (used both as a unary and a binary operator!), *, ^, and /. See how readable they make things, though? :)
Your quadratic_roots is, indeed, nice in isolation. I'd even go so far as to say that it's a good pedagogical piece. However, in production code pedagogy is not what I want to optimize for. Production code often repeats patterns with slight variations on a theme: What if you only want real roots? What about complex a and b? What about just grabbing the discriminant?
It's easy enough to modify quadratic_roots, but then you either get a combinatorial explosion of function definitions, or you end up with a function with extra parameters to select the different variations, and you often end up with a deeply nested function call graph, e.g. replacing sqrt(b^2 - 4*a*c) with discriminant(a, b, c), which makes quadratic_roots more annoying to read.
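To make the trade-off concrete, here is a rough Python sketch of the refactoring described above (all names hypothetical): a discriminant helper plus a flag parameter, showing how the variations start to accumulate on one function.

```python
import cmath
import math

def discriminant(a, b, c):
    """b^2 - 4ac, pulled out so other callers can grab it directly."""
    return b * b - 4 * a * c

def quadratic_roots(a, b, c, real_only=False):
    # The flag is exactly the kind of "extra parameter to select
    # variations" the comment above complains about.
    d = discriminant(a, b, c)
    if real_only and d < 0:
        return ()  # caller asked for real roots; there are none
    sqrt_d = math.sqrt(d) if d >= 0 else cmath.sqrt(d)
    return ((-b + sqrt_d) / (2 * a), (-b - sqrt_d) / (2 * a))
```

Each new variation (complex coefficients, real-only filtering, raw discriminant access) either grows the parameter list or spawns another function.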
In practice, defining a function or some abstraction barrier is making an architectural decision. Ideally that decision would correspond perfectly to some fundamental feature of the problem you're trying to solve, but in practice we are rarely coding with perfect problem domain knowledge, right? In practice, our functions/classes/abstractions end up accumulating cruft, right? Why is that?
Where we traditionally handle complexity by setting up abstractions to let us hide parts of that complexity, APL is good at using a different tactic, called "subordination of detail." Done well, this looks like writing very simple, direct code that empowers your basic language tools to take on domain-specific meaning without introducing any abstractions.
Here's an example that I came across recently: t[p]=E. The primitive operations are simply equality comparison (x=y) and array indexing (x[y]). However, in the specific problem, t[p]=E effectively selects parts of an AST that correspond to expressions. It's just a friggin' index operation and equality comparison! Normally this kind of operation would hinge on a reasonably large hierarchy of datatype definitions, traversal patterns, and hard-to-read performance hacks.
Instead, the APL (t[p]=E) is crazy short, obvious in meaning, and you can literally just read the computational complexity right from the expression! What other language does that? Granted, you gotta learn a little APL first.
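A rough Python rendering of that idiom, with made-up data: assume, as in the flattened-AST style the example comes from, that t holds each node's type tag and p holds each node's parent index (both arrays here are hypothetical).

```python
E = 2                    # hypothetical type tag for "expression" nodes
t = [0, 2, 1, 2, 3]      # t[i]: type tag of node i
p = [0, 0, 1, 1, 3]      # p[i]: index of node i's parent

# "t[p]=E": index t by p, compare with E -- i.e. for every node,
# does its parent have type E?
mask = [t[parent] == E for parent in p]
print(mask)  # → [False, False, True, True, True]
```

One indexing operation and one comparison select the relevant nodes, with no visitor hierarchy in sight; the cost is obviously linear in the number of nodes.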
> The thing that array languages get wrong is that code is read more often than it is written
Personally, I find APL quite pleasant to read. You really should give it an honest try.
> Converting math into code is extremely easy. Converting code back into math is the thing to optimize for.
Yes, and that's what programming languages like APL do. Whole algorithms for manipulating equations represented as matrices can be grokked with just a few symbols. At least to my mind, it's a very efficient way to write code, at least for domains that fit this model.
I learned APL in the mid '70s in high school. The math department offered it as an option to those of us learning linear algebra. One course reinforced the lessons from the other. Those of us in the APL course found ways to solve other problems not directly related to linear algebra, and we marveled at how compact the code was (especially compared to Basic, which a friend and I had learned the year before).
In my professional experience most code never gets read by anyone other than the original author and when it is read it's often reluctantly. If you have any links to studies which show that code is read more often than it is written, I'd be interested to read them.
I just can't believe you could think that. Have you never debugged code you didn't write? There are six billion people on the planet and you think it's more likely that a given piece of professional code will only ever be read by one person? A person leaves the company and you just delete everything they wrote so nobody can read it?
I'm guessing this means you don't have any studies to back your claim. Neither do I, but in my experience most code is left to run when someone leaves the company. If issues come up, someone is assigned to support the code and nine times out of ten (okay, maybe eight) the person on support decides to rewrite the code. FWIW, the six billion people stat hardly seems relevant, right?
So you have a study? Because I’ve seen way more people who think readability in code is paramount, and I don’t think I’ve ever seen the take that it’s usually never read again.
It is good, but the author, when he tries to "sell" his language, uses terse, alienating notation, and not this one (though using Unicode multiply and arrow instead of ASCII * and = or <- looks like showing off without any practical advantage to me anyway).
I am the author by the way. The intent wasn't really to try to sell the language. The language is what it is, but the thing I wanted to highlight was that the entire solution (language combined with the user interface) provides a better foundation for working with arrays of data than a spreadsheet. Perhaps that point would have been more clear if I had removed the actual code since it takes away from the more interesting point.
As for the language, the longer version looks like an imperative solution because it is an imperative solution. Kap allows you to write code that looks mostly like your average scripting language:
a ← 0
sum ← 0
while (a < 10) {
    sum ← sum + a
    a ← a + 1
}
sum
But if all you are going to do is write code like that, you might just as well write it in Javascript (well, unless you want to take advantage of support for things like bignums, rationals and complex numbers).
Of course, an actual Kap programmer wouldn't write the code above like that. They'd write it like this instead: +/⍳10
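The loop and +/⍳10 compute the same value. In Python terms (assuming, as the loop implies, that Kap's ⍳10 counts 0 through 9):

```python
# Imperative version, mirroring the Kap while loop above
total = 0
a = 0
while a < 10:
    total += a
    a += 1

# The "+/⍳10" reading: plus-reduce over the first ten integers
reduced = sum(range(10))

print(total, reduced)  # both are 45
```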
There is a perfectly valid argument that the ⍳ in the example above could be written in a different way. Sure, Rob Pike chose to use actual words in his version of APL, and you could use J, where the example would be written as +/i.10
But none of that is really important, and most people in the array programming community don't care whether you write "plus reduce iota 10" or +/⍳10. What they really care about is how the idea of using array operations completely eliminates loops in most cases. That's what is interesting, not the choice of symbols.
> APL just has a small handful of symbols with really simple definitions
> Scheme actually has a ginormous dictionary by comparison
Oh, and also, I should point out that Scheme has no symbols to learn: zero, compared to the 50-odd that APL has. The reason for this is that users already know all of the symbols in Scheme. "log" exists as a concept in people's brains already because it is a word and also the name of the function it describes. You know the word "log". You can start typing "log" and it will show up in auto-complete. ⍟ (circle with small star in it) doesn't exist in people's heads, you can barely make out what it is without a special font, and you cannot type it without a special input method. People do not know the word "⍟". It is an entirely new name for something people understand perfectly well, and all it does is reduce the length of a line by 2 characters.
First of all, no one is suggesting you get rid of good old +-/* and friends. So more accurate would be:
(+ (- (/ b (* 2 a))) (* (/ 1 (* 2 a)) (sqrt (- (^ b 2) (* 4 a c)))))
;; or just
(/ (+ (- b) (sqrt (- (^ b 2) (* 4 a c))))
(* 2 a))
Secondly, another advantage of lisp is that you can change the format of expressions as needed.
;; With Scheme's SRFI-105 – Curly Infix Expressions
{{-(b) / {2 * a}} + {{1 / {2 * a}} * sqrt{{b ^ 2} - {4 * a * c}}}}
;; With a macro
(math - b / (2 * a) + 1 / (2 * a) * (sqrt (b ^ 2 - 4 * a * c)))
;; Reader macros
#math(-b/(2*a) + 1/(2*a) * sqrt(b^2 - 4*a*c))
What this represents is that lisp allows you to decouple the way code is input and displayed from the content of it. Wolfram Language uses a similar system to display mathematical expressions as though they had been typeset with LaTeX. A lisp system would likely support something similar. In the source code, you would have:
(formula (+ (- (/ b (* 2 a))) (* (/ 1 (* 2 a)) (sqrt (- (^ b 2) (* 4 a c))))))
But the editor would display:
(formula -b/(2a) + 1/(2a) √(b²-4ac))
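That decoupling of input syntax from internal representation can be sketched even without a Lisp at hand. Here's a rough Python illustration using the stdlib ast module to turn familiar infix input into a prefix form like the one in the source code above:

```python
import ast

# Map Python AST operator classes to the prefix symbols used above
OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*",
       ast.Div: "/", ast.Pow: "^", ast.USub: "-"}

def to_prefix(node):
    """Render an infix expression tree as a Lisp-style prefix string."""
    if isinstance(node, ast.Expression):
        return to_prefix(node.body)
    if isinstance(node, ast.BinOp):
        return f"({OPS[type(node.op)]} {to_prefix(node.left)} {to_prefix(node.right)})"
    if isinstance(node, ast.UnaryOp):
        return f"({OPS[type(node.op)]} {to_prefix(node.operand)})"
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise ValueError(f"unsupported node: {node!r}")

print(to_prefix(ast.parse("-b/(2*a)", mode="eval")))  # → (/ (- b) (* 2 a))
```

The same tree could just as easily be rendered back out as infix, curly-infix, or typeset math; the stored representation doesn't have to be the displayed one.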
Finally, I find this post to be very disingenuous in that it doesn't address the plethora of other strange symbols used in the blog post, to which my comment was referring. Talking only about standard arithmetic symbols that everyone already understands completely misses the issue I was addressing: that people won't use something if they can't understand it without memorising an entire manual of foreign symbols. Everyone knows what + means, many people can understand "reduce", but no one who hasn't already learned APL will understand the meaning of ⍒. What's worse, they won't even be able to ask someone what it means: "Hey Bob, what does "triangle with a line through it" mean?" Well, which way is the triangle pointing? And is the line vertical or horizontal? Is it a full triangle or is the bottom missing? So if you show your boss a spreadsheet with this stuff in it, he's going to have to put his finger right on your monitor to point to the triangle he's talking about, and that's gonna leave you with a nice finger smudge on your display. This simply isn't ergonomic.