I really enjoy the language so far. For me, it's the perfect middle ground between C and Python: a fast compiled language where you can be as productive as in Python.
I also tend to prefer catching errors early, and having a typed language that warns and errors at compile-time is great.
I've found some (arguably minor) things to be kinda messy. Maybe the result of a one-person language that grew too much before getting feedback.
1) Everything is (strangely) called a procedure, and then there's syntax to differentiate arguments that will be modified in place (proc myproc(myarg: int, inplacearg: var int)). Kinda weird, and a lost opportunity to have compile-time checking for pure functions.
2) import vs. include. Why have include at all to shoot yourself in the foot if you have cheap namespacing?
3) if vs. when?
4) varargs feels kinda unnecessary
5) Case-insensitive. Oh... why?
On the bright side, I like how OO was implemented.
I disagree that it's a middle ground between C and Python, though. I see it more like an evolution of Pascal: it has the same niceties (ALGOL-like syntax, static types, builds executables) with some things added on top (no VM, but GC'ed, plus metaprogramming).
Oh, it does bring a lot to the table that C and Python don't have. You're totally right, it's not a middle ground per se; it's just the language I was looking for.
If I try to answer your points:
-> 1) The procedure keyword comes from Pascal. I'm not shocked by it; when I learned programming, the teacher used to call them procedures too.
The 'var' in procedure arguments could be considered the opposite of C/C++'s 'const'. Everything is const by default in Nimrod, but if you want something modified in place, you indicate it.
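A rough analogy in Go (my own sketch, not from the thread): parameters are copied by default, and modifying the caller's value requires passing a pointer explicitly, which plays roughly the role Nim's `var` parameter annotation does.

```go
package main

import "fmt"

// increment receives a copy; changes are invisible to the caller,
// much like a Nim parameter without `var`.
func increment(n int) {
	n++
}

// incrementInPlace takes a pointer, making the in-place intent
// explicit at both the definition and the call site -- roughly
// the role Nim's `var` parameter plays.
func incrementInPlace(n *int) {
	*n++
}

func main() {
	x := 1
	increment(x)
	fmt.Println(x) // 1: only the copy was modified
	incrementInPlace(&x)
	fmt.Println(x) // 2: modified in place
}
```

In both languages the call-site-visible distinction is the point: the reader can tell which arguments may change.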
-> 2) Herm... I got nothing. I haven't used include, I didn't see the use for it yet.
-> 3) 'when' is compile-time, 'if' is at runtime.
-> 4) Unnecessary? varargs is quite useful. For example, if you use redis, you can do db.del("akey", "anotherone", "otherkey"), instead of having to fiddle with an array. Varargs makes some calls cleaner.
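Go makes the same trade-off with variadic parameters, and it isn't especially opaque there either, since a slice is what's actually passed underneath. A small sketch (the `del` function is hypothetical, mimicking the redis example above):

```go
package main

import "fmt"

// del mimics the redis example from the thread: a hypothetical
// delete that accepts any number of keys via a variadic parameter.
func del(keys ...string) int {
	// inside the function, keys is just a []string slice
	return len(keys)
}

func main() {
	// call with individual arguments...
	fmt.Println(del("akey", "anotherone", "otherkey"))

	// ...or expand an existing slice with the ... suffix
	keys := []string{"a", "b"}
	fmt.Println(del(keys...))
}
```

So even in a static language, varargs can be sugar over an ordinary homogeneous collection rather than magic.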
-> 5) "The idea behind this is that this allows programmers to use their own preferred spelling style and libraries written by different programmers cannot use incompatible conventions." from the Nimrod manual (http://nimrod-lang.org/manual.html). It also forces you not to name functions and variables too similarly: you can't have two different things named myStuff and my_stuff, because both refer to the same variable or proc. You enforce your own writing style. That is debatable; other ecosystems enforce a style too, like Go with gofmt. The case insensitivity did not disturb me, though (but I admit it surprised me at first).
1) I'm not a fan of either the name `proc` or the `var` parameters, because the combination lets you mix in-place editing with return values and makes a mess of the (IMO, precise) meanings "function" and "procedure" have.
If I were starting a new language, I wouldn't pass up the opportunity to disallow mixing these concepts, so there's a way to reason about pure functions.
3) I get that, but it feels like something that could've been optimized away by the compiler; they didn't bother and instead bloated the syntax. Not a fan of the naming either.
4) Just IMHO, but this kind of magic feels out of place in a static language. In something like Python, variable arguments aren't as opaque, since there's an underlying object being passed around (a list or a dict) and your arguments can be of any type.
1) Conversely, while Ada used to disallow `in out` parameters for functions, they are allowed as of Ada 2012 [1]. There's certainly an argument to be made either way. For what it's worth, I'm not a big fan of functions with side effects myself.
3) You can't optimize it away. A compile-time conditional statement has to allow for undefined identifiers and the like, whereas for a runtime conditional statement you want the compiler to signal an error even if it can statically determine that the condition is always true or false.
3. if == runtime control structure. when == compile-time control structure: code in a failing when branch is not even compiled. Basically the equivalent of something like an #ifdef preprocessor conditional in C.
I looked at http://golangtutorials.blogspot.com/2011/06/interfaces-in-go... and the only new aspect of Go interfaces I see is that they are implicit - they are considered implemented if the type implements functions that interface defines and you don't need to explicitly write "implements Something". This is a handy shortcut, but it doesn't look like a "serious feature". It also looks a bit like polymorphic variants in OCaml.
Of course I might be missing something as I'm not that familiar with Go.
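To make the "implicit" part concrete, here's a minimal Go sketch (the `Named`/`User` names are mine, purely illustrative):

```go
package main

import "fmt"

// Named is an interface; any type with a Name() string method
// satisfies it implicitly -- no "implements" clause needed.
type Named interface {
	Name() string
}

type User struct{ name string }

// User never mentions Named, yet satisfies it structurally.
func (u User) Name() string { return u.name }

func greet(n Named) string {
	return "hello, " + n.Name()
}

func main() {
	fmt.Println(greet(User{name: "ada"})) // hello, ada
}
```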
> the only new aspect of Go interfaces I see is that they are implicit
This is not a new thing. Structural subtyping has been around since the earliest formal treatments of subtyping in the early 80s. OCaml's subtyping relation is structural.
There's a related notion of row polymorphism that was first formalized in the late 80s. As far as I know, it hasn't been widely adopted, but is the subject of ML Poly/R. Elm's extensible records also seem similar. Row polymorphism is also an important concept when dealing with typed concatenative languages, like Joy and Cat.
Really, Go brings nothing new to the table. It is a synthesis of (mostly) good ideas. Unfortunately, it also forgoes other good ideas (parametric polymorphism, sum types, and pattern matching come to mind). The goodness of exceptions is, of course, debated.
Yeah, what I wanted to say was "the only aspect worth noting" or something similar. I knew about structural typing and vaguely remembered that row polymorphism exists (but I'm not really sure what it is).
Actually, I've wanted to play with Joy a couple of times now, but it seems unmaintained and rather hard to approach. I ended up learning some Forth and a little Factor instead. I think I'll give Cat a shot; I'm not a fan of the CLR, but I'd really like to see how you can type a concatenative language.
I'd definitely recommend Cat. It's a very neat language. The reference implementation is written in C#, but I recall seeing a page at one time listing a few implementations in other languages. Unfortunately, I can't seem to find that page now. The Cat website does have an online interpreter, though.
Actually it's a big improvement for large-scale programming due to its effect on library dependencies. In Go you can easily declare an interface that matches a type in someone else's library without creating a hard dependency on that library. You can also superset and subset interfaces easily. That way you get loose coupling (almost like "duck typing") in a mostly statically checked language.
Contrast with Java where you'd have to both create a new interface and write an adapter class in a separate "glue" library that has hard dependencies on both libraries.
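A minimal sketch of that decoupling in Go (all names hypothetical; `Logger` stands in for a type from someone else's library, defined here only to keep the example self-contained):

```go
package main

import "fmt"

// Imagine Logger comes from a third-party library we cannot edit.
type Logger struct{}

func (Logger) Log(msg string) { fmt.Println("log:", msg) }

// In our own package we declare just the slice of behavior we need.
// Logger satisfies it automatically -- no adapter class, no glue
// library, no dependency from the library on our interface.
type LogSink interface {
	Log(string)
}

func process(sink LogSink) {
	sink.Log("processed")
}

func main() {
	process(Logger{})
}
```

The consumer owns the interface, so supersetting or subsetting it later touches only the consumer's code.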
I don't want to sound dismissive, but I think very few people here would like to "contrast with Java". We're (I am for sure) thinking more along the lines of comparing with Rust, OCaml, Haskell or Scala. Or Opa. Or Felix. Or Ur/Web. Or any number of modern languages implementing rich type systems.
Anyway, I believe this feature is very handy. It's not "new", however. As noted, OCaml objects, and also modules, support structural typing, and you can't call OCaml a new language. Scala supports it too, in more than one way. And so on.
Also, compared to the powerful and extremely rich type systems of these other languages, Go's seems rather limited. What I meant by interfaces not being a "serious feature" (I should have said it differently, I know) was that compared to other features of modern type systems it's not that significant. I get the feeling that it only looks significant in Go because the language lacks those other features.
And BTW, that's a conscious decision of the language designers to keep the language simple. I don't say it's a bad decision, either. I just want to note that Go is indeed simple (at least with regard to types) and not that innovative. And also that using Java as a baseline is not the most ambitious thing to do. ;)
I don't think it matters that much from a library maintainer's point of view.
If you want to change a public method and can find all the type's usages, an IDE or search engine can tell you which call sites will break. (Or just compile everything and see what happens.)
If you can't find all the type's usages, you're screwed anyway because any change that would break an interface will also break a call site that calls a method directly, without using an interface. So having all the interfaces declared right there doesn't help that much.
> I don't think it matters that much from a library maintainer's point of view
From a large scale application developer point of view it matters a lot.
In code bases developed by 50+ developers across multiple sites, it is important to be able to look at a struct definition and know which interfaces in the code base it supports.
g++ had those years ago, maybe gcc 2.0 or something? Google results are all messed up with Protocol Buffers and stuff so I didn't find a good link.
Back when C++ didn't yet have templates, there were several proposals for what kind of generics to implement and how. g++ implemented "protocols", which are basically the same as Go interfaces. I think this was eventually considered (and rejected) in C++ as "concepts", but I might be mistaken. Templates are more general and can be kludged to implement protocols, which is probably why they won out in the C++ standardization race.