I'm very interested in learning more about "Include common notation and theory as a special case" but the documentation for that isn't yet ready, and I also have only elementary knowledge of Haskell. If this means what I think it means, this project could be a great boon to anyone working with non-western music.
For instance, when I first started learning programming my motivation was to create a program to render Byzantine chant, which has a completely different visual representation. Here's an example with the Byzantine notation above the western notation (the bottom staff is the ornamentation implied by the Byzantine notation): http://www.cappellaromana.org/wp-content/uploads/2014/04/Che...
Another different visual representation is that of shakuhachi music: http://www.rileylee.net/shaku_notation.html
If anyone can offer tips on how to approach this problem, I'd be grateful. But I might have to play with this. The code for my ill-fated attempt at Byzantine chant is here, if anyone is interested: https://github.com/muraiki/byzscribe
As a conservatory student and long-time user of LilyPond, after a quick glance at the syntax this seems very impractical to me as a daily music notation tool.
e.g.: `octavesUp 4 c` in The Music Suite vs. `c''''` in LilyPond (see the toy sketch below)
e.g.: Handling staves and parts (as well as notes that align vertically across multiple staves) as a horizontal stack in the code is the opposite of how a music notator/engraver needs to think. In LilyPond (and any other music engraving system), you define each part/line as a separate entity and tell the system to put them together. Any discrepancy in how they actually stack up (if one line doesn't fill the full duration of a measure at some point) results in an error with a diagnostic message.
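To be fair, on the first point: since it's all just Haskell, terse LilyPond-style names can be layered on top as ordinary definitions. A toy sketch with stand-in types (not the actual Music Suite definitions):

    -- Toy sketch: LilyPond-style shorthands layered over a verbose API.
    -- `Pitch` and `octavesUp` here are stand-ins, not the real library types.
    newtype Pitch = Pitch Int deriving (Show)

    octavesUp :: Int -> Pitch -> Pitch
    octavesUp n (Pitch p) = Pitch (p + 12 * n)

    c, c'''' :: Pitch
    c     = Pitch 60        -- middle C as a MIDI note number
    c'''' = octavesUp 4 c   -- the terse name is just an ordinary definition

    main :: IO ()
    main = print c''''      -- Pitch 108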
However, there seems to be some philosophical difference from LilyPond that these developers are after. Very interesting; I will have to follow this.
That said, I won't be switching from ABC for my day-to-day music writing. I see The Music Suite as being useful for, e.g., writing a fugue, where the music can be elegantly expressed programmatically.
Ooooo, I never knew about ABC! It looks really concise and easy to use. Does it scale well to big symphonic scores? (Or if you're not into that kind of a thing, at least bigger projects than one page melodies of single lines?)
It's a complete joy to use. I use abcm2ps, which adds a few extensions and is pretty powerful.
I mainly use it for single melody lines (albeit with complex ornamentation), but have done a few multi-part pieces. I can send you a sample when I'm not on my phone. The only downside is that it's hard to see the relationship between the parts--afaik you can't interleave them. But the actual typesetting works great.
It's definitely worth a try, especially if you have a technical background. (Oh! Another cool feature: it has its own version of cascading style sheets, which provides a really nice separation between content and presentation.)
Okay, so I'm totally new to Clojure, but I'm trying to run the pitch.clj that you linked. I installed Leiningen and added Overtone as a dependency. Then I cloned Overtone to get pitch.clj. Now how in the world do I actually run the code in pitch.clj?
The parent poster didn't mean that you should just arbitrarily execute pitch.clj, but that it was a particularly elegant piece of code in terms of capturing pitch semantics with beautiful Clojure.
I love it when people write good docstrings for their functions. I always write them first and then finish the code. It makes it so much clearer what you are intending to do and helps cut down unnecessary comments.
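A tiny Haskell illustration of the habit (a made-up function, nothing to do with the projects above):

    -- | Frequency in hertz of a MIDI note number, assuming equal temperament
    --   with A4 (note 69) tuned to 440 Hz. Writing this comment first pins
    --   down the intent before the body exists.
    midiToHz :: Int -> Double
    midiToHz n = 440 * 2 ** (fromIntegral (n - 69) / 12)

    main :: IO ()
    main = print (midiToHz 69)   -- 440.0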
If you're generally interested in this subject, a popular toolkit for computational musicology outside of Haskell is Music21, in Python: http://web.mit.edu/music21/
If you're interested in Haskell and music, you might like Renick Bell's "Conductive" for Haskell-driven live coding music performance: http://www.renickbell.net/doku.php?id=conductive (video: http://www.youtube.com/watch?v=J5TskLgsdBU from last year's Linux Audio Conference; he gave another talk and concert this year but I don't think the videos are available yet)
If you're generally interested in high-level functional language code to analyse musical intention, the IDyOM project from Marcus Pearce in my own group (http://code.soundsoftware.ac.uk/projects/idyom-project) just made a release of their Common Lisp software for predictive modelling of musical structure.
Very cool stuff! Haskell is indeed great for this kind of thing; perhaps I will get around to trying it out. A (very minor) issue:
> Actually, scat and pcat used to be called melody and chord back in the days, but I figured out that these are names that you actually want to use in your own code.
I would have preferred to see `melody` and `chord` over `scat`/`pcat`, because that's what they're doing. The whole point of a declarative DSL like this is to be descriptive. When I first saw `scat` I wondered if it was shorthand for `staccato`, or perhaps even a reference to scat singing. On the other hand, not only are `melody` and `chord` very obvious, but they are also general enough that they are rather unlikely to be used as variable names, except perhaps as one-offs.
There's an easy solution if you prefer to use those:
melody = scat
chord = pcat
I think the rationale is that you'd most often want `melody` to refer to the melody of the whole song rather than some sequence of notes within your melody.
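Here's a self-contained sketch of that trade-off, with toy stand-ins rather than the real Music Suite functions:

    -- Self-contained sketch of the naming trade-off; `scat`/`pcat` are toy
    -- stand-ins (string concatenation), not the real Music Suite functions.
    scat, pcat :: [String] -> String
    scat = unwords                        -- "sequential" composition
    pcat xs = "<" ++ unwords xs ++ ">"    -- "parallel" composition

    -- the aliases suggested above
    melody, chord :: [String] -> String
    melody = scat
    chord  = pcat

    piece :: String
    piece = chord ["c", "e", "g"] ++ " " ++ theme
      where
        -- naming a local value `melody` here would shadow the alias,
        -- which is exactly the clash being described
        theme = melody ["c", "d", "e"]

    main :: IO ()
    main = putStrLn piece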
or somesuch (sorry about the notation, I've never used Haskell - my point is about the nesting).
I would also love to see ways to define particular scales and modes and then specify operations within that scale space without necessarily articulating the individual notes. Of course at that point you're getting into compositional tooling rather than transcription.
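For instance, something like this toy sketch in plain Haskell (not anything The Music Suite actually provides, as far as I know):

    -- Toy sketch, not the library's API: represent a scale by its interval
    -- pattern in semitones and pick out scale degrees without naming notes.
    majorScale :: [Int]
    majorScale = [2, 2, 1, 2, 2, 2, 1]

    -- MIDI note number of the nth degree of a scale rooted at `root`
    degree :: Int -> [Int] -> Int -> Int
    degree root intervals n = root + sum (take n (cycle intervals))

    main :: IO ()
    main = print (map (degree 60 majorScale) [0 .. 7])
    -- [60,62,64,65,67,69,71,72], i.e. C major from middle C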
Anyway, nice work and great documentation so far. Also good that there's a whole bunch of converters.
I don’t know much about Haskell, but I looked up `let`’s documentation, and I think it already does support embedded structures like you ask for. You can just write `p2 = delay (1/4) $ p1` (and you can also leave out the `$` because `p1` is already atomic). I think the site’s example repeats the definition of `p1` just to make reading the example clearer, not because you are forced to code like that.
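For what it's worth, a self-contained toy version of that idea (the `delay` here and the (time, note) pairs are stand-ins, not the real library):

    -- Later `let` bindings can refer to earlier ones,
    -- so p1's definition does not need to be repeated.
    delay :: Double -> [(Double, String)] -> [(Double, String)]
    delay d = map (\(t, n) -> (t + d, n))

    main :: IO ()
    main =
      let p1 = [(0, "c"), (1 / 4, "d")]
          p2 = delay (1 / 4) p1
      in print (p1, p2)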
The function `$` that you mention is just application:
> f $ x = f x
So, using `$` infix like so
> f $ x
Is equivalent to
> f x
And likewise
> delay (1/4) $ p1
is equivalent to
> (delay (1/4)) p1
with extra parentheses to make the associativity clear.
This raises the question: when is `$` ever a useful function? Two cases come to mind:
1) Precedence. `$` has the lowest precedence of any standard operator (it's `infixr 0`), and can thus be used to omit otherwise necessary parens; this
> f (g x)
is equivalent to
> f $ g x
2) Use in higher order functions. Imagine you have a function `all` that takes a predicate function `f` and a list of values `xs` and returns `True` if `f x` returns `True` for all `x` in `xs`. So `all (> 3) [4,5,6]` returns `True`. But, imagine now that you have a list of predicates and one value: can you still make use of `all` in this case?
Yes, you could do something like this:
> all (\f -> f 42) [even, (> 3), (< 100)]
But that's a little verbose. If we look at `(\f -> f 42)`, all it's doing is taking a function and immediately applying it to `42` - hey, that's just `$` with its second argument fixed to `42`, i.e. the section `($ 42)`! Here's a simpler version of the previous expression:
> all ($ 42) [even, (> 3), (< 100)]
So that's pretty much all there is to `$` and why one may omit it in the original expression.
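Putting both uses into one small runnable example:

    main :: IO ()
    main = do
      -- 1) precedence: `negate $ 3 + 4` parses as `negate (3 + 4)`
      print (negate $ 3 + 4)                       -- -7
      -- 2) higher-order use: apply every predicate to 42
      print (all ($ 42) [even, (> 3), (< 100)])    -- True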
This looks really cool and I'm excited to play around with it. I had been using Euterpea for a while last year (http://haskell.cs.yale.edu/euterpea/), but as far as I can tell that project has gone cold, and it's a huge pain to get working on an up-to-date haskell-platform.
EDIT: Never mind, looks like Euterpea has actually seen some development (and improved documentation) since I last used it.
Very nice. I wish it had support for a simple synthetic piano sound. Of course one can play it easily via MIDI export, but direct feedback would be even nicer.
There are lots of algos for this. The ones that work well are complex. None of them work that great on full pieces of complex music with many instruments playing at once.
The Ableton Live music package has a great one built in, which appears to do some kind of probabilistic modelling for harmonies. You can send all kinds of crazy noises into it and the resultant chord progressions become increasingly avant-garde, but remain musical.
These algorithms often output MIDI (Ableton's does), and it seems you could feed that into this Music Suite software and have it spit out notation.
Look in the literature and you will find tons of research on this stuff.
Thanks for posting, nbouscal. Can you explain how your Haskell project is different from VexFlow, which a lot of people use to do HTML5 music notation rendering?
The example [listen] link points to a .mid file which some browser setups don't know how to handle (for instance, my Chromium simply downloads the file whenever I click the link, then downloads it again when I click the file in the bottom download bar).
Please offer an example audio rendering in a media format which is more generally supported, thank you.