
There are a lot of comments of the type "it's unreadable/write-only/unmaintainable" here. It's a natural reaction; I know, I was there. It looks different. But readability is in the eyes of the beholder, not the language. I made this point in my APL book (https://xpqz.github.io/learnapl) -- just because I can't read Japanese does not make Japanese unreadable. Since I wrote that book, I've spent a few years immersing myself in APL, gradually rewriting thousands of lines of Python into APL, and it's such a productivity boost. My code is an order of magnitude smaller. More, sometimes. Fast, too.

For the right use cases, it's unbeatable. Sure, you probably wouldn't want to write an OS kernel in it, but for anything that reads a bunch of data, mashes it up, and spits out some result, APL is a hand-in-glove fit. And with modern SIMD processors, APL really screams.

https://xpqz.github.io/learnapl

https://xpqz.github.io/cultivations

Drop in on https://apl.chat if you're interested.



Recently someone posted Decker [0] here on HN, a HyperCard-inspired environment. It includes its own programming language, Lil [1], which looks like it took some APL influences but based its core design more on Lua, meaning the language hopefully feels familiar enough to avoid scaring people off.

I wonder if that might be a way forward, similar to how lots of imperative languages now have constructs that make it easier to do functional programming in them: steal enough from the array languages to make other languages less bad. Or maybe that's what NumPy and Julia already are, which would also show the limitations of that approach. I dunno, I've read about array languages out of theoretical interest but never actually programmed in them.

[0] https://beyondloom.com/decker/index.html

[1] https://beyondloom.com/decker/lil.html


I'm extremely confident that array language features will seep into the mainstream in the same way that functional programming features have been doing for the last 10 years or so.

New language designers will have to defend why they don't have array language features rather than why they do.


> But readability is in the eyes of the beholder,

Unfortunately, given that the program is going to be read by people, what we care about is what the beholders say.

> just because I can't read Japanese does not make Japanese unreadable

A nice comparison: Japanese is indeed harder to read than non-symbolic languages. Symbols are harder to recognize because there are far more of them than there are letters in alphabets (Latin, Greek, Cyrillic). For example, do you think it's easy to recognize the difference between 陳 and 陣? The more symbols you have, the harder they are to process, be it Japanese or APL.


Yet millions of Japanese people are fairly adept at reading it. To be clear, APL actually has a tiny set of symbols -- in day-to-day use, perhaps two dozen. Remember that these aren't "keywords" in the traditional sense, but more like the standard library. Contrary to popular belief, it's actually pretty easy to learn to read, as the symbols are usually mnemonic, or referencing well-known mathematical counterparts. Flipping a matrix in various ways is a circle and a line. For example, ⍉ means "flip diagonally" -- in other words, a transpose. To flip around the horizontal axis, guess the symbol: ⊖. Around the vertical? ⌽. The ceiling of a number is borrowed from maths: ⌈ etc. Counting the elements in an array is ≢ -- evoking the tally marks you scratch on a piece of paper when tallying something up.
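
If you want to see that in action, here's a quick sketch (Dyalog APL; m is just a throwaway example matrix):

    m ← 2 3⍴⍳6    ⍝ a 2×3 matrix of the numbers 1..6
    ⍉m            ⍝ flip along the diagonal: transpose
    ⊖m            ⍝ flip around the horizontal axis
    ⌽m            ⍝ flip around the vertical axis
    ⌈2.3          ⍝ ceiling: 3
    ≢'tally me'   ⍝ tally: 8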


> Yet millions of Japanese people are fairly adept at reading it.

Yes, because it's their native language. That doesn't say anything about its difficulty. My mother tongue is Spanish and I'm fairly adept at using it, and that still doesn't change the fact that some aspects (verb conjugation, gendered nouns) make it harder than other languages.

> or referencing well-known mathematical counterparts.

As a mathematician: other than the very basic symbols, most of the ones used in APL are unfamiliar, or are used in different contexts. For example, × is the cross product in math but regular multiplication in APL, ⍟ is logarithm even though "log" is what's used in math, and ⊥ looks to be polynomial evaluation when that's usually the symbol reserved for orthogonality.
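
To be concrete about ⊥ ("decode"): as far as I can tell, it evaluates digits in positional notation, of which polynomial evaluation is the general case (a sketch in Dyalog APL):

    2⊥1 0 1 1     ⍝ 11: the digits 1 0 1 1 read in base 2
    10⊥1 9 8 4    ⍝ 1984: base-10 decode
    2⊥3 0 1       ⍝ 13: the polynomial 3x²+0x+1 evaluated at x=2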

> Counting the elements in an array is ≢

Funnily enough, that symbol is "not equivalent" or "not congruent" in math. I never would have thought it refers to tally marks scratched on paper.


When used with two arguments, e.g. `a ≢ b`, it is indeed called "not match": it tells you whether `a` and `b` differ. When used with one argument, `≢ a`, it gives you the number of elements.
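
A minimal sketch (Dyalog APL):

    ≢3 1 4 1 5       ⍝ one argument: tally, 5
    'abc' ≢ 'abd'    ⍝ two arguments: not-match, 1 (they differ)
    'abc' ≢ 'abc'    ⍝ 0 (they are the same)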


Oh, that’s… not confusing at all.


Yeah, the fact that the same symbol is overloaded with two meanings for the one-arg and two-arg (infix) cases adds some overhead to reading APL code. The precedence rules are very simple and uniform in isolation, but personally I find that applying them to (mentally) parse a nontrivial line of code is one of the hardest parts of getting started with APL.
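
To make that concrete: there is no precedence among functions; each function takes everything to its right as its right argument, so a nontrivial line reads right to left (Dyalog APL):

    2×3+4     ⍝ 14: parsed as 2×(3+4), not (2×3)+4
    10-5-2    ⍝ 7: parsed as 10-(5-2)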


> The ceiling of a number is borrowed from maths: ⌈ etc.

Isn't it the other way around -- Iverson invented this symbol for this purpose, and standard mathematical notation adopted it?


Reading information rates are pretty similar across major written languages (1.42 ± 0.13 texts/min). Ideographs take longer to recognize but contain more information.

Japanese readers do indeed take in information slightly more slowly than average, but not as slowly as Finnish readers -- a language written in a small Latin-based alphabet.

[1] https://iovs.arvojournals.org/article.aspx?articleid=2166061 (summary) https://irisreading.com/average-reading-speed-in-various-lan...


> Ideographs take longer to recognize but contain more information.

I think this is a huge part of what makes it hard for people. You have to read denser code more slowly. APL code can easily have a tenth of the characters of equivalent code in popular languages, so if you read it at the same character rate, you're taking in information ten times as fast. That's a bit much to ask. You could read five times slower and still be covering information at twice the speed.


I wonder if there is a sweet spot -- a language equivalent of what ternary is to number systems: the integer base with the best theoretical radix economy (number length vs. maximum number of distinct states encoded).
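
For what it's worth, the usual radix-economy measure is b×log_b(N), and a quick sketch (fittingly in Dyalog APL, using ⍟; econ is just a made-up name) shows ternary winning among integer bases:

    econ ← {⍵×⍵⍟1e6}    ⍝ economy of base ⍵ for numbers up to 1e6
    econ 2 3 4 10       ⍝ ≈ 39.9 37.7 39.9 60: base 3 is cheapest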


> Japanese is indeed harder to read than non-symbolic languages. Symbols are harder to recognize because there are far more of them than there are letters in alphabets (Latin, Greek, Cyrillic).

Do you have a study that confirms that kanji are harder to read for Japanese speakers than romanized transcriptions?

In English or Spanish, we also don’t read individual letters to reconstruct the words that they represent. We read the shape of each word. This is why “floor” and “flour” are harder to tell apart than “floor” and “flower.”


I don’t have the studies at hand but I don’t think it’s such a difficult notion to accept. For starters, there are more symbols than letters, which means you have more things to learn.

But the important thing is how symbols/words are processed. For any given concept, we might have the symbolic representation, the auditory representation and/or the pictorial representation. The good thing about words is that, even if you haven’t seen a written word before, it maps to the auditory representation (more or less, depending on the language). Kanji doesn’t (in fact, the same symbol might map to different “words” and sounds), and it’s not a very precise pictorial representation either.

This is not to say that Japanese is impossible to learn or much more inefficient, just that these things make it harder to learn and read. In a similar way, other languages have traits that make them easier or harder. For example, English is harder because of the inconsistent mapping between writing and pronunciation. French is harder than other languages because of the number of vowel sounds and how important it is to distinguish them. Spanish is harder because of gendered nouns and the fairly complicated conjugation rules for verbs. Of course native speakers get used to these things; that still doesn’t mean they don’t make things harder or easier.


It is true that learning to read the kanji is harder than learning to read, e.g., Spanish. But I understood your original statement to not be about ease of learning but ease of reading (once learned). It is not obvious to me that Japanese is at a disadvantage here when we compare kanji vs. romaji within the same subject.

By the way, you might know this, but if a Japanese speaker doesn’t know a kanji, they are not at a complete loss. Words often consist of multiple kanji so they can infer from the meaning and possible pronunciations of the other kanji in the word. The context in which the unknown kanji appears helps too. And in literature aimed at young readers, they often also show the pronunciation written next to more unusual kanji (furigana). All of this is less relevant to APL though.


Hard to learn and hard to use are usually strongly correlated. If something is hard to learn, it usually requires more mental energy to read. For an example closer to APL, mathematical notation is harder to read than the verbose equivalents: you not only need to keep the concepts in mind but also how the symbols map to the concepts. Maybe once you’re very familiar with them they’re just as easy and a little more terse, but it’s easy to slip back if you’re tired or just woke up less bright that day.

And that’s just in math, where the symbol density is lower than in APL and where symbols don’t depend too much on context (having the same symbol carry very different meanings depending on context is usually considered bad practice and avoided where possible).


I just counted how many distinct symbols appear in five projects I've been working on lately. They use 30–50 primitives each; across all of them, the total is 56.
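
(The count itself is a one-liner; something like this, assuming a project's source is already in a character vector src:)

    src ← '{(+⌿÷≢)⍵}'    ⍝ for example, some source text
    ≢∪src                ⍝ 9: tally of the unique characters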


"Fast" is not relevant - the runtime would be just as fast if the code was written in plain words. Code size is also irrelevant unless you're called Elon Musk. The only claim left is that it's a productivity boost, but I just can't see how a language that needs a special keyboard to be written can be written more productively than actual words, especially with modern IDEs.


Speaking for myself, I find that whenever I think "I just can't see how..." it makes me want to figure out what it is that others can see that I can't. This is the reason I learnt APL, actually, after seeing half a page of k code and feeling annoyed that I couldn't read it. k subsequently led me to APL.

You can try it for yourself -- NumPy is a fair "Iverson ghost" -- APL without the symbols: it has a similar enough array model, and most of the APL primitives as functions or methods. APL lets you express linear algebra in a very natural way. Doing it in NumPy is much more convoluted.
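
For a small taste of what I mean by natural (a sketch in Dyalog APL; the rough NumPy spellings would be a.mean() and a @ a):

    avg ← +⌿÷≢       ⍝ mean as a "train": sum divided by tally
    avg 1 2 3 4      ⍝ 2.5
    a ← 2 2⍴1 2 3 4
    a +.× a          ⍝ matrix product, via the inner-product operator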

Or try Rob Pike's (of Go fame) "ivy" -- his attempt at an APL-with-words (https://github.com/robpike/ivy).


Sorry, I'm not embarking on a quest to learn APL because somehow the idea can't be explained.


The claim isn't that writing is faster; the claim is that there's less pointless boilerplate to write, read, or think about. You didn't write your comment as:

"quote fast unquote is not relevant hyphen the runtime would be just as fast if the code was written in plain words period code size is also irrelevant unless you apostrophe are called initial capital elon musk period the only claim left is that it apostrophe s a productivity boost comma but i just can apostrophe t see how a language that needs a special keyboard to be written can be written more productively than actual words comma especially with modern initialism ide s period"

because symbols are useful. And people don't have a problem with peppering code with chorded symbols like () ++ {} : <> ? without demanding that they be turned into words because words are easier. In fact, when making new languages (Rust, new versions of C#), people struggle to find more symbol combinations, reaching for things like Python's triple-quote strings """ """, ::, ""u8, and so on.

There's nothing inherently better about shift-2 shift-2 shift-2 shift-2 shift-2 shift-2 than AltGr+i.

Why aren't the common operations of transforming collections short, sharp, and out of your way? Why are they a ceremonious "map a lambda which takes a parameter with a name and ..." when you don't need that?


I type each APL symbol with a single chord, e.g. ∊ is AltGr+e and ⍷ is AltGr+Shift+E. Each keystroke is basically instant.


You don't worry about repetitive strain injury from all that chording? I try to avoid that sort of thing as much as possible; even though I use emacs, I do it with evil bindings. Capslock rebound as control is useful, but the bottom row modifiers seem problematic. I can't easily reach those modifier keys while also keeping my fingers on the home row; I have to either contort my thumb or pinky in a bad way, or move my entire hand (which is usually how I use those keys.) Either way, keeping hands straight and my fingers positioned on the home row seems much safer and comfortable. I just can't imagine using a language where I have to altgr for every character I type.


Hasn't really been an issue. Occasionally, I map CapsLock to AltGr so I have a left-side APL shifting key too — it doesn't see much use, though. Also remember that it isn't "every character I type". Actual APL primitives only comprise a relatively small fraction of the total code. I just did a rough computation on four things I've been working on recently and they had 4%, 5%, 6%, and 7% non-ASCII APL glyphs, respectively.
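
(For the curious, such a count might look like this in Dyalog APL -- assuming each source is read into a character vector s; ASCII ends at code point 127:)

    {(+/127<⎕UCS ⍵)÷≢⍵} s    ⍝ fraction of glyphs in s beyond ASCII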




