I'm glad to finally see a non-Iversonian array language. I've dabbled in J a bit, and it is pretty nice.
For those who haven't used array languages, the easiest comparison would be NumPy. NumPy is like a small array DSL on top of Python. Instead of having to manually iterate over arrays, you can (kinda) treat them like a single item; NumPy handles the details of iterating.
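To make that comparison concrete, here's a minimal NumPy sketch (the arrays are illustrative, not from any real dataset): whole-array expressions replace explicit Python loops.

```python
import numpy as np

prices = np.array([10.0, 12.5, 9.0, 11.0])

# One expression applies to every element; NumPy does the iterating.
taxed = prices * 1.08

# Boolean-mask selection, again with no explicit loop.
over_ten = prices[prices > 10]

# The equivalent explicit loop, for contrast:
taxed_loop = [p * 1.08 for p in prices]
```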
I could deal with how terse J (plus APL and BQN) can be, except for the sheer number of operators. I've heard that you can figure out what they do from the shape of the glyph, but that doesn't help sort through the 200+ operators (aka primitives) on NuVoc.
There are definitely some interesting things in the Iversonian languages that aren't really found anywhere else. For instance, you can use a modifier (aka adverb) to swap the order of arguments to an expression. You can also convert an expression into a reduce or scan by adding a modifier on the end.
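In J those modifiers are `~` (swap arguments), `/` (insert, i.e. reduce), and `\` (scan over prefixes). Rough Python analogues, purely for illustration:

```python
from functools import reduce
from itertools import accumulate

sub = lambda x, y: x - y

# Like J's ~ : same function, arguments swapped.
sub_swapped = lambda x, y: sub(y, x)

data = [1, 2, 3, 4]

# Like +/ in J: fold the function between all items (reduce).
total = reduce(lambda x, y: x + y, data)

# Like +/\ in J: running results over each prefix (scan).
prefix_sums = list(accumulate(data))
```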
Apart from the syntax, GPU acceleration for an array language is an absolute slam dunk. Given how much people pay for K/Q licenses to use for low-latency HFT, I'm baffled GPU acceleration wasn't ubiquitous the second GPGPUs came out.
J is a big outlier both in terms of the number of primitives and how complicated an individual one can be. The ordering is K < BQN < APL < J in both number of symbols and total functionality (K and J tend to pack many things into one primitive but not enough to change the ordering). So it sounds like you might want to try K as well. If you're not aware, ngn/k is a newer free and fast implementation. Nial and Q are also options for languages influenced by APL that use words more.
Futhark makes some sacrifices for its amazing performance, in terms of things like the ways you can use higher-order functions. The popular APL-related languages don't do this, so that any given program might be working with big flat arrays that are good for the GPU or lots of small ones in complicated structures that aren't (Futhark does flatten nested structures but not everything can fit into this system). I don't think high performance is the primary requirement for any of them, even K/Q, which is why projects like Co-dfns that introduce limitations to compile on the GPU don't create interest in the APL community. I discuss issues with compilation at https://mlochbaum.github.io/BQN/implementation/compile/intro... , with Futhark mentioned under "Typed array languages". GPU acceleration without full AOT compilation is also possible, but you kind of need loop fusion to get much out of it, which doesn't fit the model of an interpreted array language well.
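The loop-fusion point can be shown with a toy example (plain Python, names illustrative): fusing two elementwise passes into one avoids materializing the intermediate array, which is what a fusing compiler does automatically and an interpreter executing one primitive at a time cannot.

```python
def unfused(xs):
    # Pass 1 materializes a full intermediate list...
    doubled = [x * 2.0 for x in xs]
    # ...which pass 2 then has to read back.
    return [d + 1.0 for d in doubled]

def fused(xs):
    # One pass, no intermediate: what fusion would generate
    # from the two composed steps above.
    return [x * 2.0 + 1.0 for x in xs]
```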
(Aside, APL-family languages are really fun for making music if you can get them set up right; SuperCollider even says they took some influence from J. See https://lochbaum.bandcamp.com/album/bqn-tracks for my synthesized tracks and source code link.)
Hi Marshall. I've enjoyed listening to you all on Array Cast.
I've always been put off by the weird character sets, but I guess it's time to finally get over it. I've been meaning to try BQN, especially since it's FOSS.
Having only 2 arguments to a function is another major pain point for me. Boxing isn't an answer there, though it could be usable with destructuring or something.
My initial introduction to array languages was an article about K for HFT from several years back. There was a lot about how much performance they were able to squeeze out using a tiny interpreter, which might be why I always think of array languages together with performance.
I think there's a niche for an array language geared towards big datasets. Essentially something like TensorFlow that streams data between the CPU and accelerators. The advantage over regular TensorFlow would be neater syntax and maybe some fancier operators.
Thanks for the link on the music, I'll check it out. I've been thinking about playing procedurally-generated music, particularly with GANs and the like. Array languages would be great for playing with algorithms, since that code should be reasonably easy to translate to a TF pipeline for training production neural networks.
Anyway, this post is probably long and incoherent enough by now. Major respect, and thanks for doing what you do.
Small might not be the right word. I mainly had broadcasting in mind when I wrote that.
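For anyone unfamiliar, broadcasting is NumPy's rule for combining arrays of different shapes; a small sketch (arrays are illustrative):

```python
import numpy as np

grid = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
offsets = np.array([10, 20, 30])    # shape (3,)

# The 1-D offsets row is "stretched" across each row of the
# 2-D grid automatically; no explicit loop or tiling needed.
shifted = grid + offsets            # result has shape (2, 3)
```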
>Is there anything missing...?
Not sure I'd call APL esoteric, given it was originally developed at IBM.
My knowledge of both array languages and NumPy is limited, but one thing missing might be partitioning using a mask.
An example of that would be the following snippet of J, from a WIP implementation of MTF encoding:
mask <@(_1&|.)`<;.1 dict
The line noise in the middle partitions dict by mask and uses a cyclic gerund to apply a different function to each partition. The ;.1 is the partition (cut) modifier; before it come the two functions, tied into a gerund with a backtick. The second function just boxes its argument (the partitions have different sizes, so they can't form a flat array). The first function rotates its argument by -1 places and then boxes it.
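For readers more at home with NumPy, a rough sketch of just the mask-partitioning step (leaving out the cyclic gerund; the function name is made up for illustration) might look like:

```python
import numpy as np

def partition_by_mask(data, mask):
    # Each 1 in mask starts a new partition, roughly like J's ;.1 cut.
    starts = np.flatnonzero(mask)
    ends = np.append(starts[1:], len(data))
    return [data[s:e] for s, e in zip(starts, ends)]

data = np.array([5, 6, 7, 8, 9])
mask = np.array([1, 0, 1, 1, 0])
parts = partition_by_mask(data, mask)
# parts holds three pieces of differing lengths, which is why
# the J version boxes each one instead of building a flat array.
```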
Honestly, I'd really prefer to unpack that statement into 20 lines of clear code. Not because I'm unfamiliar with this syntax: my preference equally applies to similar terse NumPy statements which I can understand (but which are error prone and require mental effort).
Tacit expressions like that are interesting and very rewarding to write, but I'd agree that it's way too terse. The Iversonian array languages tend to be like that, since they're derived from mathematical notation. (Though I have heard that tacit expressions are pretty much absent from production APL code.)
The aspect I find most interesting is the way primitives can be combined to build up functionality. It's closer to a declarative syntax, where the focus is on the end result instead of the steps to get there.
Futhark looks very cool. I generally don't buy into programming dogma, but I'm a big fan of data-oriented design these days; on top of efficiency gains, I find thinking in terms of data transforms and flows very intuitive. Plus, it's always great to have another tool to use/toy to play with.
Thank you! Yeah, at some point it would be nice to do a little library of sound effects, and there we should definitely have some filters like that.
To be clear, we're not doing anything in the browser (yet). Futhark literate takes a .fut file and generates a markdown file, optionally with some image, video or sound files. So (unlike your language) this is all "offline".