I am very intrigued by Nimrod. The language seems to have goals that overlap with e.g. Rust, but with a bunch of really interesting design decisions (e.g. GC by default, but first-class support for manual memory management). Given the amount of force driving Rust (and Rust's PR machine) compared to Nimrod, which seems to be pushed forward largely by a single person, I'm really impressed by how far Nimrod has come.
I really want to start using Nimrod for real work.
While you're probably right for most classes of application, this is not an absolute truth. There are reasons not to use the GC - e.g. you're developing a browser, or a GC for a language - and many besides.
I see another thread where you made a similar statement, so rather than rehash that whole discussion, here it is:
And adding onto the other reply - Nimrod's GC has a realtime mode where you can specify when it runs and the maximum time it may take. I made a (small) game in Nimrod and called the GC every frame for the remaining frame time (it can be used like a blocking high-accuracy timer). Testing the GC, I couldn't get it to take longer than a couple of microseconds, even while intentionally smashing my 16GB heap to hell. Why does a 16GB heap take so little time to GC? Because the cost of Nimrod's GC doesn't scale with heap size - it's deferred reference counting. Only cycle detection scans the whole heap, and you can optionally disable that (I do; I don't like designing cyclic structures without explicitly knowing where the cycles get broken).
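For anyone curious, the calls involved look roughly like this - a minimal sketch using the GC procs from the Nim docs (exact signatures may differ between Nimrod versions, and the per-frame work here is a stand-in):

```nim
GC_disableMarkAndSweep()   # optional: turn off whole-heap cycle collection
GC_setMaxPause(2000)       # ask the GC to keep individual pauses under ~2 ms

proc doFrame() =
  discard                  # stand-in for the real per-frame update/render work

for frame in 1 .. 600:
  doFrame()
  # spend what's left of the frame budget on deferred-refcount work
  GC_step(us = 2000, strongAdvice = true)
```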
Oh, I strongly disagree with that logic. A browser is exactly the sort of situation where a GC is useful. I mean, it only took Mozilla 2 decades to get Firefox to not leak memory like a sieve.
There are many reasons for leaks in Firefox, and none of them had to do with not using GC for everything. In fact, there were attempts to do exactly that (XPCOMGC), which failed due to performance problems. A lot of those "leaks" were just cases of using too much memory, which pervasive GC actually makes worse due to the lack of prompt deallocation (something deferred reference counting also gives up).
GCs are simply not appropriate for every use case.
Reference counting is not a panacea; once you start wanting to break cycles (which history tells us you will), you start having to deal with stopping the world or concurrent collection. If you don't have thread-safe GC, then you have to either copy all data between threads (which limits the concurrent algorithms you can use) or you lose memory safety.
Finally, your implicit claim (that Rust's safe memory management is more vulnerable to leaks than GC) is untrue: Rust's safe manual memory management is no more vulnerable to leaks than GC, because the compiler automatically destroys values once they are no longer needed.
> it only took Mozilla 2 decades to get Firefox to not leak memory like a sieve
I'm sure you're aware that this is quite an unfair/exaggerated statement to make. But yes, I'm all in favor of language features that help prevent memory leaks.
But the reason the smart folks at Mozilla don't just switch to using a GC for all of Firefox (and none of the other major browser vendors do either) is that GC pauses suck for user interaction. If you don't think that's a concern, or have a solution for it, please elaborate.
Memory leaks waxed and waned as development focus changed. I remember Firefox's memory usage being quite decent around version 2, and even into 3. Later, some things got bloated, though a large part of the memory problem was due to misbehaving plugins. The MemShrink project even managed to get memory leaks fixed in plugin authors' projects.
Also, note that a GC does not automatically mean no memory leaks. For instance, see how leaky Gmail is (it used to be worse, according to their dev team).
Nimrod's GC is not thread safe, last I checked. So if you aren't careful to avoid races on shared memory, you can segfault. Also, you incur full heap scans to clean up cycles.
Yes it is. Threads don't share a GC - there's no implicit sharedness; memory is copied between threads. And of course, that doesn't prevent manual management of shared memory (it's just not GC'd), so you CAN use locks to manage it, just like in other languages.
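To make the "memory is copied between threads" point concrete, here's a minimal sketch along the lines of the threads/channels examples in the Nim docs (current Nim spellings - older Nimrod used TThread/TChannel - and it needs --threads:on):

```nim
# Each thread has its own GC'd heap; a message sent over a Channel is
# deep-copied, so nothing GC-managed is ever shared implicitly.
var chan: Channel[string]    # the channel itself lives outside the GC'd heaps

proc worker() {.thread.} =
  chan.send("hello from the worker's heap")   # the string is copied on send

chan.open()
var t: Thread[void]
createThread(t, worker)
echo chan.recv()             # the main thread receives its own copy
joinThread(t)
chan.close()
```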
I turn off the cycle collector in my realtime apps. I prefer designing a clean, solid system that doesn't rely on cycles without my knowing about them directly. I guess that's just my inner control freak, though.
What in the HELL are you talking about? The thread-local GC won't even produce anything on the shared heap - it's thread local. Shared memory is manual memory only, just like in C, C++, Ada, and every other manually managed language. And when the shared-memory GC (which will have to be used explicitly) is implemented, it'll be just like Java, OCaml, and every other shared-memory garbage-collected language. What in God's name does "segfaults when you use shared memory" even mean? It only segfaults if, like in every single other language, you didn't take the time to think out your design and are dereferencing dead memory.
Oh, and when I said it has locks like every other language, I didn't mean it has shared memory like every other language.
Finally - if you're just being smug about how smart Rust is for having lifetime tracking and all those pointer types/restrictions - I don't think it's all that great; nor did the gaming community when they last got their hands on it; nor do many others who share the opinion that Rust is just too complex while being too restrictive.
Calm down. All I was saying was that Nimrod is in a somewhat isolated space in which memory management is automatic and safe except for when memory is shared between threads. I'm a bit skeptical of this, because memory management is at its most difficult exactly when multiple threads are involved. So I'm glad to see Nimrod is moving to a thread-safe GC (and I have nothing against Nimrod and would like to see it succeed).
Hybrid automatic and unsafe manual memory management (when the unsafe portion is for something really common like shared memory) is not something I'm really a fan of; it gives up safety while retaining the disadvantages of automatic memory management (lack of control, overhead). I think that safe automatic or fully manual schemes are the ones that have won out in practice because they get to fully exploit the advantages of their choices (safety in one case, control in the other).
Just one thing to point out - people wrongly assume that the alternative to a non-deterministic GC is necessarily manual memory management. There are automatic and deterministic memory management mechanisms, e.g. smart pointers in C++ and Rust.
> I really want to start using Nimrod for real work.
I have actually used both Rust and Nimrod for real work (the same project, first in Rust, then rewritten in Nimrod). My experience is that Nimrod is far easier to handle than Rust. It feels like Python with the native speed of C. There are a lot of nice features built into Nimrod, for instance Perl-compatible regular expressions and seamless import of C functions. For me it is the most productive language I have ever encountered, and I know many languages.
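To give a flavor of the C interop: importing a C function is a one-line pragma. This is a minimal sketch modelled on the example in the Nim manual (spellings may differ slightly between versions):

```nim
# Call C's printf directly; no wrapper library or glue code needed.
proc printf(formatstr: cstring) {.importc: "printf", varargs, header: "<stdio.h>".}

printf("%s compiles down to %s\n", "Nimrod", "C")
```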
Initially, I was kind of 'meh' towards Nimrod since Rust already looked so good. But when I recently read about the features, especially the metaprogramming and compile-time features, it seems like it could be really fun and powerful to program in.
I really enjoy the language so far. For me, it's the perfect middle ground between C and Python: a fast, compiled language in which you can be as productive as in Python.
I also tend to prefer catching errors early, and having a typed language that warns and errors at compile-time is great.
I've found some (arguably minor) things to be kinda messy. Maybe the result of a one-person language that grew a lot before getting outside feedback.
1) Everything is (strangely) called a procedure, and then there's syntax to differentiate arguments that will be modified in place (proc myproc(myarg : int, inplacearg : var int)). Kinda weird, and a lost opportunity to have checking for pure functions at compile time.
2) import vs. include. Why have include at all to shoot yourself in the foot if you have cheap namespacing?
3) if vs. when?
4) varargs feels kinda unnecessary
5) Case-insensitive. Oh... why?
On the bright side, I like how OO was implemented.
I disagree that it's a middle ground between C and Python, though. I see it more as an evolution of Pascal: it has the same niceties (ALGOL-like syntax, static types, builds executables) with some things added on top (no VM, but GC'd; metaprogramming).
Oh, it does bring a lot to the table that C and Python don't have. You're totally right, it is not a middle ground per se - it is just the language I was looking for.
If I try to answer your points:
-> 1) The procedure keyword comes from Pascal. I am not shocked by it; when I learned programming, the teacher called them procedures too.
The 'var' in procedure arguments could be considered the opposite of the 'const' of C/C++: everything is const by default in Nimrod, but if you want something modified in place you indicate it (there's a small sketch of this after point 5 below).
-> 2) Herm... I got nothing. I haven't used include; I don't see the use for it yet.
-> 3) 'when' is compile-time, 'if' is at runtime.
-> 4) Unnecessary? varargs is quite useful. For example, if you use redis, you can do db.del("akey", "anotherone", "otherkey"), instead of having to fiddle with an array. Varargs makes some calls cleaner.
-> 5) "The idea behind this is that this allows programmers to use their own preferred spelling style and libraries written by different programmers cannot use incompatible conventions." from the Nimrod manual (http://nimrod-lang.org/manual.html). It forces you not to name functions and variables too closely. So you won't be able to have different things named myStuff and my_stuff because it will refer to the same variable or proc. You enforce your own writing style. That is debatable. You have others enforcing a style, like with gofmt. The case insensivity did not disturb me, though (but I admit it surprised me at first).
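(Sketch for point 1 above.) Without `var`, assigning to a parameter is a compile error; with it, the argument is passed by reference and may be modified in place:

```nim
proc addInto(a: int, b: var int) =
  # a += 1          # would not compile: plain parameters are read-only
  b += a            # fine: b is a `var` parameter

var total = 40
addInto(2, total)
echo total           # prints 42
```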
1) I'm not a fan of either the name `proc` or of `var`, because it lets you mix in-place editing with return values and muddies the (IMO precise) meanings that "function" and "procedure" have.
If I were starting a new language, I wouldn't pass up the opportunity to disallow mixing these concepts, so there's a way to reason about pure functions.
3) I get that, but it feels like something the compiler could have optimized away; instead of bothering with that, they bloated the syntax. Not a fan of the naming either.
4) Just IMHO, but this kind of magic feels out of place in a static language. In something like Python, variable arguments aren't as opaque, since there's an underlying object being passed around (a list or a dict) and your arguments can be of any type.
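For concreteness, this is roughly what the Nimrod side looks like - the variadic arguments all share one declared type and are checked at compile time (sketch only; the `del` proc here is made up, not the real redis binding):

```nim
proc del(keys: varargs[string]): int =
  # `keys` behaves like an openarray of the declared element type
  for key in keys:
    inc result                             # pretend each key was deleted

echo del("akey", "anotherone", "otherkey") # 3 - no array literal needed
echo del(["a", "b"])                       # passing an explicit array also works
```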
1) Conversely, while Ada used not to allow `in out` parameters for functions, they are allowed as of Ada 2012 [1]. There's certainly an argument to be made either way. For what it's worth, I'm not a big fan of functions with side effects myself.
3) You can't optimize it away. A compile-time conditional statement has to allow for undefined identifiers and such, but for a runtime conditional statement you want to have the compiler signal an error even if it can statically determine that the condition is always true or false.
3. `if` == runtime control structure. `when` == COMPILE-time control structure: code in a failing `when` branch is not compiled at all. Basically the equivalent of an #ifdef preprocessor conditional in C.
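A quick illustration, using the standard defined() check:

```nim
# `when` is evaluated at compile time: the losing branch is never compiled,
# so it may freely reference platform-specific symbols.
when defined(windows):
  const lineEnding = "\r\n"
else:
  const lineEnding = "\n"

# `if` is an ordinary runtime branch: both arms must compile.
if lineEnding.len == 2:
  echo "CRLF platform"
else:
  echo "LF platform"
```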
I looked at http://golangtutorials.blogspot.com/2011/06/interfaces-in-go... and the only new aspect of Go interfaces I see is that they are implicit - a type is considered to implement an interface if it implements the functions the interface defines, and you don't need to explicitly write "implements Something". This is a handy shortcut, but it doesn't look like a "serious feature". It also looks a bit like polymorphic variants in OCaml.
Of course I might be missing something as I'm not that familiar with Go.
> the only new aspect of Go interfaces I see is that they are implicit
This is not a new thing. Structural subtyping has been around since the earliest formal treatments of subtyping in the early 80s. OCaml's subtyping relation is structural.
There's a related notion of row polymorphism that was first formalized in the late 80s. As far as I know, it hasn't been widely adopted, but it is the subject of MLPolyR. Elm's extensible records also seem similar. Row polymorphism is also an important concept when dealing with typed concatenative languages, like Joy and Cat.
Really, Go brings nothing new to the table. It is a synthesis of (mostly) good ideas. Unfortunately, it also forgoes other good ideas (parametric polymorphism, sum types, and pattern matching come to mind). The goodness of exceptions is, of course, debated.
Yeah, what I wanted to say was "the only aspect worth noting" or something similar. I knew about structural typing and vaguely remembered that row polymorphism exists (but I'm not really sure what it is).
Actually, I've wanted to play with Joy a couple of times now, but it seems unmaintained and rather hard to approach. I ended up learning some Forth and a little Factor instead. I think I'll give Cat a shot; I'm not a fan of the CLR, but I'd really like to see how you can type a concatenative language.
I'd definitely recommend Cat. It's a very neat language. The reference implementation is written in C#, but I recall seeing a page at one time listing a few implementations in other languages. Unfortunately, I can't seem to find that page now. The Cat website does have an online interpreter, though.
Actually it's a big improvement for large-scale programming due to its effect on library dependencies. In Go you can easily declare an interface that matches a type in someone else's library without creating a hard dependency on that library. You can also superset and subset interfaces easily. That way you get loose coupling (almost like "duck typing") in a mostly statically checked language.
Contrast with Java where you'd have to both create a new interface and write an adapter class in a separate "glue" library that has hard dependencies on both libraries.
I don't want to sound dismissive, but I think very few people here would like to "contrast with Java". We're (I am for sure) thinking more along the lines of comparing with Rust, OCaml, Haskell or Scala. Or Opa. Or Felix. Or Ur/Web. Or any number of modern languages implementing rich type systems.
Anyway, I believe this feature is very handy. It's not "new", however. As noted, OCaml objects - and also modules - support structural typing too, and you can't call OCaml a new language. Scala supports it too, in more than one way. And so on.
Also, compared to the powerful and extremely rich type systems those other languages have, Go's seems rather limited. What I meant by interfaces not being a "serious feature" - I should have said it differently, I know - was that compared to other features of modern type systems it's not that significant. I get the feeling that it only looks significant in Go because the language lacks those other features.
And BTW, that's a conscious decision of the language designers to keep the language simple. I don't say it's a bad decision, either. I just want to note that Go is indeed simple (at least with regard to types) and not that innovative, and also that using Java as a baseline is not the most ambitious thing to do. ;)
I don't think it matters that much from a library maintainer's point of view.
If you want to change a public method and can find all the type's usages, an IDE or search engine can tell you which call sites will break. (Or just compile everything and see what happens.)
If you can't find all the type's usages, you're screwed anyway because any change that would break an interface will also break a call site that calls a method directly, without using an interface. So having all the interfaces declared right there doesn't help that much.
> I don't think it matters that much from a library maintainer's point of view
From a large scale application developer point of view it matters a lot.
In code bases developed by 50+ developers across multiple sites, it is important to be able to look at a struct definition and know which interfaces in the code base it supports.
g++ had those years ago, maybe gcc 2.0 or something? Google results are all messed up with Protocol Buffers and stuff so I didn't find a good link.
Back when C++ didn't yet have templates, there were several suggestions about what kind of generics to implement and how. g++ implemented "protocols", which are basically the same as Go interfaces. I think this was eventually considered (and refused) in C++ as "concepts", but I might be mistaken. Templates are more general and can be kludged to implement protocols - which is probably the reason they won out in the C++ standardization race.
Near the top of Nimrod's homepage (http://nimrod-lang.org/) is the following quotation, which leads me to believe that the language creators think they have picked a good name:
The most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking for a suitable language.
-- D. E. Knuth
But "Nimrod" might not be as good of a name as they hope. The biblical Nimrod was a "mighty hunter", and the name may have that connotation in Europe: the British seem to have always had a warship or a warplane (or both) named Nimrod, for instance. But in the US, due to the ironic use of the name by Bugs Bunny to address Elmer Fudd, we tend to associate "Nimrod" with incompetence and gullibility.
I'm not sure why this is down-voted. Connotations differ in different cultures; I merely point out that "Nimrod" does not have a universally positive connotation.
If this connotation is intentional, perhaps as humor, please just tell me I'm not getting the joke.
I don't know. It's not like "Gimp", which is viewed by some as insulting to handicapped people. But it wasn't clear to me that the Nimrod developers were even aware of the US connotations, so I brought it up.
Perhaps the "Git" comparison is apt; as far as the US is concerned, that's the sort of name that's been picked.
From the little I know about metaprogramming, it sounds analogous to doing string manipulation in a language and then eval'ing it (which is heavily frowned upon, at least in Python).
It's a bit higher level, roughly like generating HTML using a templating library that auto-escapes to avoid XSS attacks. There are fewer ways to screw up and since it happens at compile-time, it's more secure.
Generally speaking, if you would normally do something using reflection, you can also do it using code generation, which should compile to faster code (since you're not interpreting it at runtime) but possibly at the cost of code bloat and making the code harder to follow.
If you're familiar with front-end web coding, it's comparable to manipulating the properties of DOM objects vs. assigning an innerHTML string. Building a string is crude but sometimes effective - but when you want to really engineer something, you want to have every node made available as a data structure, since that puts it in a form suitable for algorithm design.
In a lot of instances, a macro isn't particularly more complicated in intent than a string + eval solution; it's just a much more verbose way of attacking the problem.
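In Nimrod terms, the "nodes as data structures" version looks something like this minimal sketch (current Nim spellings for the macros module; older Nimrod wrote expr/stmt instead of untyped):

```nim
import macros

# Instead of pasting strings together and eval'ing them, a macro receives its
# argument as an AST node and returns a new AST - all at compile time.
macro showExpr(e: untyped): untyped =
  # expands `showExpr(x * 7)` into `echo "x * 7 = ", x * 7`
  result = newCall(newIdentNode("echo"), newLit(e.repr & " = "), e)

let x = 6
showExpr(x * 7)      # prints: x * 7 = 42
```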
Personally, I didn't find myself wanting all of the Racket syntax to be transformed, but I certainly wanted, more than once, a form that would offer infix syntax for everything inside, like Tcl's `expr` (IIRC). There's https://github.com/marcomaggi/Infix/blob/master/infix/infix.... but I don't know if it works with Racket. Having a #lang for infix notation seems like a good alternative: everything that would benefit from it (mainly maths, in my case) would be in a separate file anyway.
Interesting, but I'm troubled that they got the first function (on the first page) wrong. (You can leave out the discriminant when it's zero, but not otherwise.)
Given the current trends in language design, it's interesting that Nimrod declares itself an imperative language, and that this talk doesn't discuss lambdas at all.