Converting math into code is extremely easy. Converting code back into math is the thing to optimize for. The compiler can figure out how to optimize the machine code for you, but the reader needs to be able to work backwards.
The thing that array languages get wrong is that code is read more often than it is written, and repeating yourself is worthwhile in order to make it easier to understand for the next person who looks at it. Compilers are extremely good at recognizing patterns in numerical code and optimizing them. Humans are really bad at recognizing patterns in unique sequences of characters.
> “The thing that array languages get wrong is that code is read more often than it is written”
It seems to me that array language designers understand this, but they make a different trade-off by prioritizing density.
A screenful of an APL-family language can contain a program that might take thousands of lines in a Java-style language. There’s power in being able to see the whole thing at once.
Would a passenger jet be easier to use if the cockpit only had an iPad and the pilots had to navigate through UI trees to find available actions? It would certainly be more discoverable to the ordinary person, but the pilots would probably be deeply unhappy with this design. In this analogy, an APL-style program can be like a “cockpit” full of instruments that you designed yourself for that exact job.
Except you sneak in alien symbols: +, - (used both as a unary and a binary operator!), *, ^, and /. See how readable they make things, though? :)
Your quadratic_roots is, indeed, nice in isolation. I'd even go so far as to say that it's a good pedagogical piece. However, in production code pedagogy is not what I want to optimize for. Production code often repeats patterns with slight variations on a theme: What if you only want real roots? What about complex a and b? What about just grabbing the discriminant?
It's easy enough to modify quadratic_roots, but then you either get a combinatorial explosion of function definitions or a single function with extra parameters to select the different variations, and you often end up with a deeply nested call graph, e.g. replacing sqrt(b^2 - 4*a*c) with discriminant(a, b, c), which makes quadratic_roots more annoying to read.
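To make that concrete, here's a rough Python sketch of where this tends to go (quadratic_roots and discriminant are the names from the discussion above; the only_real flag is a hypothetical example of one of those extra selector parameters):

import cmath
import math

def discriminant(a, b, c):
    # The b^2 - 4ac term, pulled out so other callers can grab it directly.
    return b * b - 4 * a * c

def quadratic_roots(a, b, c, only_real=False):
    # Roots of a*x^2 + b*x + c = 0. The only_real flag is one of those
    # extra parameters that select a variation: it changes both the
    # return value and the failure behaviour.
    d = discriminant(a, b, c)
    if only_real:
        if d < 0:
            return ()          # no real roots
        r = math.sqrt(d)
    else:
        r = cmath.sqrt(d)      # also covers complex coefficients
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

Every new variation either grows the parameter list like this or forks into yet another definition, and the sqrt call has already disappeared behind discriminant, which is exactly the nesting complained about above.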
In practice, defining a function or some abstraction barrier is making an architectural decision. Ideally that decision would correspond perfectly to some fundamental feature of the problem you're trying to solve, but we rarely code with perfect knowledge of the problem domain, right? Our functions/classes/abstractions end up accumulating cruft, right? Why is that?
Where we traditionally handle complexity by setting up abstractions to let us hide parts of that complexity, APL is good at using a different tactic, called "subordination of detail." Done well, this looks like writing very simple, direct code that empowers your basic language tools to take on domain-specific meaning without introducing any abstractions.
Here's an example that I came across recently: t[p]=E. The primitive operations are simply equality comparison (x=y) and array indexing (x[y]). However, in the specific problem, t[p]=E effectively selects parts of an AST that correspond to expressions. It's just a friggin' index operation and equality comparison! Normally this kind of operation would hinge on a reasonably large hierarchy of datatype definitions, traversal patterns, and hard-to-read performance hacks.
Instead, the APL (t[p]=E) is crazy short, obvious in meaning, and you can literally just read the computational complexity right from the expression! What other language does that? Granted, you gotta learn a little APL first.
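For those who haven't seen this style, here's a Python/NumPy sketch of one plausible reading of the example (the flat-array AST layout, the meaning of t, p and E, and the node-type codes are my assumptions for illustration, not the actual code being described):

import numpy as np

# AST stored as flat arrays: node i has a type code t[i] and a parent
# index p[i]; the root points at itself.
STMT, EXPR, NAME, NUM = 0, 1, 2, 3   # hypothetical node-type codes
E = EXPR

t = np.array([STMT, EXPR, NAME, NUM, STMT, NAME])
p = np.array([0,    0,    1,    1,   0,    4])

# t[p] is the type of each node's parent; comparing against E yields a
# boolean mask over the nodes that hang directly off an expression node.
mask = t[p] == E
print(np.nonzero(mask)[0])   # [2 3]

One gather and one comparison, both plainly O(n) in the number of nodes, which is the "read the computational complexity right from the expression" point made above.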
> The thing that array languages get wrong is that code is read more often than it is written
Personally, I find APL quite pleasant to read. You really should give it an honest try.
> Converting math into code is extremely easy. Converting code back into math is the thing to optimize for.
Yes, and that's what programming languages like APL do. Whole algorithms for manipulating equations represented as matrices can be grokked from just a few symbols. To my mind it's a very efficient way to write code, at least for domains that fit this model.
I learned APL in the mid '70s in high school. The math department offered it as an option to those of us learning linear algebra. One course reinforced the lessons from the other. Those of us in the APL course found ways to solve other problems not directly related to linear algebra, and we marveled at how compact the code was (especially compared to Basic, which a friend and I had learned the year before).
In my professional experience, most code never gets read by anyone other than the original author, and when it is read, it's often read reluctantly. If you have any links to studies which show that code is read more often than it is written, I'd be interested to read them.
I, just, can't believe you could think that. Have you never debugged code you didn't write? There are six billion people on the planet and you think it's more likely that a given piece of professional code will only ever be read by one person? A person leaves the company and you just delete everything they wrote so nobody can read it?
I'm guessing this means you don't have any studies to back your claim. Neither do I, but in my experience most code is left to run when someone leaves the company. If issues come up, someone is assigned to support the code and nine times out of ten (okay, maybe eight) the person on support decides to rewrite the code. FWIW, the six billion people stat hardly seems relevant, right?
So you have a study? Because I’ve seen way more people who think readability in code is paramount, and I don’t think I’ve ever seen the take that it’s usually never read again.
It is good, but when the author tries to "sell" his language, he uses the terse, alienating notation, not this one (though using Unicode multiply and arrow instead of ASCII * and = or <- looks like showing off without any practical advantage to me anyway).
I am the author, by the way. The intent wasn't really to try to sell the language. The language is what it is, but the thing I wanted to highlight was that the entire solution (the language combined with the user interface) provides a better foundation for working with arrays of data than a spreadsheet. Perhaps that point would have been clearer if I had removed the actual code, since it takes away from the more interesting point.
As for the language, the longer version looks like an imperative solution because it is an imperative solution. Kap allows you to write code that looks mostly like your average scripting language:
a ← 0
sum ← 0
while (a < 10) {
sum ← sum + a
a ← a + 1
}
sum
But if all you are going to do is write code like that, you might just as well write it in Javascript (well, unless you want to take advantage of support for things like bignums, rationals and complex numbers).
Of course, an actual Kap programmer wouldn't write the code above like that. They'd write it like this instead: +/⍳10
There is a perfectly valid argument that the ⍳ in the example above could be written in a different way, and sure, Rob Pike chose to use actual words in his version of APL, or you could use J where the example would be written as +/i.10
But none of that is really important, and most people in the array programming community don't care whether you write "plus reduce iota 10" or +/⍳10. What they really care about is how the idea of using array operations completely eliminates loops in most cases. That's what is interesting, not the choice of symbols.
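For comparison, and purely as an analogy for readers who don't know any APL-family language (not a claim about how Kap evaluates it), the same shift in Python looks like this:

# The explicit loop, transcribed from the Kap version above:
total = 0
a = 0
while a < 10:
    total += a
    a += 1

# The array-style version: generate the indices, then reduce them with
# addition, which is all that +/⍳10 expresses.
assert total == sum(range(10)) == 45

The loop's bookkeeping (the counter, the bound, the accumulator) simply disappears; that is what the symbols are standing in for.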