Go 1.5 max procs default (golang.org)
187 points by enneff on May 28, 2015 | 140 comments



Back when I was learning Go, I was astounded to learn that GOMAXPROCS was set to 1 by default. It didn't seem to make any sense to me given the nature of the claims of the language. Of course when I investigated why the default was 1 I learned the reasons why (which this article talks about), but it still left a bad taste in my mouth.

I'm glad that Go has matured to the point where this can be a historical footnote! Good work, Go team.


See also: Concurrency is not Parallelism http://blog.golang.org/concurrency-is-not-parallelism

You can still benefit from the "claims of the language", even if code isn't running in parallel.


While this is technically true, it doesn't seem like that helpful a response to say "well you were getting benefits, just not this one". Until now you've had to know to ask for parallelism; it should have been there by default, and soon (I hope) it will be.


Oh, I agree 100%. I was just as confused as he was when I first started using Go, and I'm very excited that this change might finally happen.

However, I think calling my response "technically true" is selling Go's concurrency primitives a little short. They enable a really great programming model, which makes it much easier to express certain ideas in code. And I/O-bound programs benefit even if GOMAXPROCS=1.

I didn't understand that subtlety when I first started with Go, and it seems like GP commenter (and probably other HN readers) didn't either. "Concurrency is not parallelism" is very helpful in explaining that, which is why I linked to it.


This is, incidentally, also the Erlang default. Interrogate the system and if it is SMP capable, run with SMP enabled.


What I do not want in a programming language is for it to advertise features which are subject to confusion in common parlance, then say "I did what I said, you just weren't smart enough to understand what I said".

My reaction is somewhat like this XKCD: https://xkcd.com/169/


The debate over concurrency vs parallelism hit its peak in the mid 1990s, well before Go came along. I remember having arguments with fellow students (and the occasional professor) at university about how it affects memory allocations (stack vs heap), virtual memory layout (the thread vs coroutine stack being one of the bigger issues; Linux's early introduction of mremap was particularly handy), the OS scheduler, etc. It's not just a Go / language feature thing.


A distinction without meaning is kind of dumb. In popular parlance parallelism and concurrency are approximately the same thing.


You aren't wrong, but it does make it difficult to discuss with people. At least within the programming field it's useful to have two terms with distinct meanings to discuss the topic. Consider the terms accuracy and precision [0], where again the popular understanding is that they're (essentially) synonymous. But within the scientific and engineering world that uses the terms the distinction is critical.

EDIT: Verification and validation [1] are another pair of terms where the distinction is important for industry, but popular understanding often mixes them up.

[0] http://en.wikipedia.org/wiki/Accuracy_and_precision

[1] http://en.wikipedia.org/wiki/Verification_and_validation


When having a technical discussion we should interpret technical terms with their precise meaning. Rob Pike gave a good talk about this: http://blog.golang.org/concurrency-is-not-parallelism


I agree that if you look at it simply as the number of cores used, then the distinction is not very interesting. But a better way to look at it is as follows:

* Parallelism is a feature of the algorithm, chosen to make it run faster. E.g. you can use a parallel algorithm to factor a matrix. From a technical standpoint, the challenge of parallelism is how to deconstruct the problem so that each processing unit will be able to take on a share of the effort -- parallelism implies cooperation.

* Concurrency is a feature of the problem. E.g. many users using Facebook at once. If at any one time you have 10K requests, then your concurrency level is 10K, no matter how many cores are used. The challenge of concurrency is how to allocate computing resources among the competing requests -- concurrency implies competition.

Parallelism is orderly; concurrency is messy. Parallelism is a choice; concurrency isn't.
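
A minimal Go sketch of the distinction (illustrative, nothing more): the ten goroutines below are concurrent either way; GOMAXPROCS only decides whether they may also run in parallel.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // With 1, the goroutines interleave on one core: concurrency
        // without parallelism. With runtime.NumCPU() they may also
        // execute at the same instant: concurrency plus parallelism.
        runtime.GOMAXPROCS(1)

        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                fmt.Println("request", id, "handled")
            }(i)
        }
        wg.Wait()
    }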


It wasn't very long ago (before the multicore era) that essentially nothing running on personal computers was truly parallel. Yet multithreaded applications (concurrency) worked just fine.


The advantage of making the distinction is that you can discuss the subtly different semantics which often arise in the area of parallel, concurrent and distributed computing. If you equate them, you can only discuss with a limited level of detail, which makes it harder to understand the subtle differences coming from different computational models.

For instance, the subtle difference arising from MIMD execution and SIMD execution. The former allows for concurrent execution, whereas the latter does not. However, both are parallel execution models. This isn't theoretical either as GPUs are strictly SIMD machines in their execution.


It does have meaning. An obvious difference is that you can have concurrency with one processor, but parallelism requires at least two.


> parallelism requires at least two

Interestingly, there is one example of parallelism without concurrency on just one processor and just one core: vector instructions. Although it would take a breathtakingly good compiler to emit vector instructions that run two goroutines in parallel.


Like you, I was also initially confused about the default value of GOMAXPROCS - for the same reason.

Great to see they have changed this to something that, without any other context, makes more sense.


> It didn't seem to make any sense to me given the nature of the claims of the language.

You mean that it was good for writing network servers capable of handling many tens of thousands of connections? You usually don't need more than one core to handle IO, and if you are doing processor intensive stuff a few more cores is not going to help if you have many thousands of requests - in this case a better pattern is a work queue and a way of letting the client know their work is done.
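
A rough sketch of that work-queue shape (all names illustrative): a small fixed pool of workers drains a queue, and the client waits on a results channel to learn its work is done.

    package main

    import "fmt"

    type job struct{ id int }
    type result struct{ id, answer int }

    func worker(jobs <-chan job, results chan<- result) {
        for j := range jobs {
            // the CPU-intensive work happens here, off the accept path
            results <- result{j.id, j.id * j.id}
        }
    }

    func main() {
        jobs := make(chan job, 8)
        results := make(chan result, 8)
        for w := 0; w < 4; w++ { // a few workers, not one per request
            go worker(jobs, results)
        }
        for i := 0; i < 8; i++ {
            jobs <- job{id: i}
        }
        close(jobs)
        for i := 0; i < 8; i++ {
            r := <-results // the client learns its work is done
            fmt.Println(r.id, r.answer)
        }
    }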

Concurrency/threading is generally considered a "hard" topic within software dev, and if you lack a solid grounding in the basic concepts you are destined to misuse or misapply the paradigm.


Sometimes a server does more than just I/O.


Well if it "does more" for every request when you are handling many thousands of concurrent requests, that is going to be one hell of a busy server.


I'm not sure what you're trying to argue here. If you want to serve many requests that do significant CPU work in parallel, multiple cores are necessary no matter how efficiently Go manages the connections.

Additionally, even rendering HTML templates can take quite a bit of CPU time, and when Go's concurrency is limited to a single core, only one template can be rendered at a time, blocking other requests.


> If you want to serve many requests that do significant CPU work in parallel

For what definition of "many"? If you have 16 cores that is 16 requests being processed in parallel. Go hasn't even got out of bed by the time it is handling 16 connections. When Go is being used for its intended purpose - handling thousands of concurrent connections, you need to be a lot smarter than just running on all the cores you have.

I'm not saying you wouldn't use all cores to maximize utilization - what I am saying is that if you are scaling beyond one core, then you would already be thinking about scaling across multiple servers behind a load balancer - i.e. you are doing ops work on a production scale cluster. And if you are doing production ops work, you are going to be tuning all the processes under your control in detail, not just making rash assumptions about how they may or may not utilize multiple cores.

The point I was arguing here is that someone who said "Back when I was learning Go" and who understands the "nature of the claims of the language" is not going to be running into these sorts of scalability issues, and would not be bitterly disappointed to learn of a default threading value of 1. Unless they had some misconstrued understanding of what Go's concurrency support was all about, and made an incorrect assumption that it was primarily about parallel computation, rather than its true purpose of having many thousands of lightweight processes handling lots of network I/O.


My goal is to have a busy server, because that minimizes wasted resources :) The more users I can handle on one machine, the better.

And latency is important. I can't have a user performing a lengthy operation, while at the same time blocking all other users. Real (not fake) threads are needed to combat latency.


> And latency is important. I can't have a user performing a lengthy operation, while at the same time blocking all other users. Real (not fake) threads are needed to combat latency.

And if you have thousands of concurrent connections (the kind of workload Go was designed to handle), and only a few hundred of them are performing "lengthy operations", you could have 4 cores, or 16, it won't make much difference compared to 1 core - either way the vast majority of the connections are going to be blocked for unacceptably long intervals. In this scenario, the only solution is to either queue up work and make the client come back later, or scale across many servers behind a load balancer.

The point I am making here is that people who are engineering scaled out systems across multiple servers to handle thousands of concurrent requests are usually high-end ops people who are not going to be naively assuming that any process will be doing threading in a particular way - they would have already tested Go in detail first and would very quickly have seen you need to set the GOMAXPROCS variable to use more than one core. They would probably look at the GC as well. Tuning stuff to run well is what ops people do, and choosing how many cores a process should use is pretty standard stuff (it may be sharing the server with other processes after all).


So your argument is basically that if you keep adding users, at a certain point my system will break? I think that is a strawman argument.

In my opinion, it is perfectly legitimate to have the requirement of low latency, while running lengthy operations in the background (I'd show a spinner or a progress indicator, a very common approach). Yes, at a certain point this will break down, at which point I will add more servers. This is better than having no such system at all.

If a language is only good for serving >>1M users and only doing I/O, then I guess it should not be branded as a general purpose, mainstream language, even if aimed at the web.


> So your argument is basically that if you keep adding users, at a certain point my system will break? I think that is a strawman argument.

That's only part of my argument, and I don't think it is a strawman argument.

> In my opinion, it is perfectly legitimate to have the requirement of low latency, while running lengthy operations in the background

Ironically, this is a straw-man argument - I never said nor insinuated it wasn't a valid requirement. All I am saying is for the kind of scalability Go was designed for (1000s-100,000s connections), if you are doing lengthy operations, you will never be just thinking about multi-core, you will be scaling across servers.

> then I guess it should not be branded as a general purpose, mainstream language

Well I honestly am not sure it was. It was branded very early on as a language that is good for writing highly concurrent, highly scalable network servers. For example, integrating with C/C++ libraries is a pain, and slow. I would say most general purpose languages would have good C integration as a basic feature. And the lack of good C integration is a direct result of specializing Go at running millions of little Goroutines concurrently (segmented stack and all that).


> It didn't seem to make any sense to me given the nature of the claims of the language

I disagree, it makes total sense when you understand the consequences of GOMAXPROCS set to anything but 1.


Concurrency was the primary selling point for Go. Setting GOMAXPROCS=1 not only failed to deliver on that but also ran the risk of people not noticing concurrency issues because all of their development happened without parallel execution.


Prior to Go 1.5 the default of 1 made sense for most typical Go programs, but I know of many users who set the variable differently for better performance.

Data point: Inside Google we have been setting GOMAXPROCS=runtime.NumCPU for many years because most of the Go programs we run are highly concurrent network servers, for which a higher GOMAXPROCS value gives better performance.
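
For reference, the code version of that setting is a one-liner at startup (a sketch; same effect as exporting the GOMAXPROCS environment variable):

    package main

    import "runtime"

    func init() {
        // Pre-1.5 idiom: let the scheduler use every available core.
        runtime.GOMAXPROCS(runtime.NumCPU())
    }

    func main() { /* server setup continues as usual */ }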


Interesting, can you share much about the other work on the same machine? In particular I'm curious if there are many other services running on those machines. I've written some services in Go (thanks!) but actually left GOMAXPROCS at 1 because there were other components running as separate processes on the same hosts.


When I was at Google the machines were typically oversubscribed and the workload varied quite a bit. Google has done a lot of work to allow you to ignore the other processes on the machine and just specify the resources your process requires. Then Borg, which they recently published a paper about, would take care of scheduling it on an appropriate machine.

As a consequence, answering the question of what other processes were on the same machine was not as easy as on an EC2 instance. We didn't mind though, because Borg was so good at scheduling that we pretty much didn't have to care about the machine level. Instead we thought at the datacenter level.


Some machines run many other jobs, some machines run few. It depends on the resource allocation.


I think 1 was fine for many people, but I wish it'd been 2 simply to avoid people getting as far as they did before realizing there was a class of problem they weren't testing.


Concurrency in Go is a programming model, specifically CSP. They have been very clear about this from the beginning and saying they failed to deliver on it is disingenuous.


Note that I was not talking about the actual implementation or the indeed perfectly open discussion in the docs but rather the way the language was sold, most of which happened outside of Google. Even though e.g. Pike was very open about this, there was a lot of excitement of the magic scaling pixie dust variety around HN, Twitter, meetups, etc.

Almost every person I know who started with Go got a long way in before doing some benchmarks and realizing that they were only using one thread. I know the scheduler had issues but I think setting the default to 2 would have avoided people going so far before learning that the language can't handle everything they need to care about.


No, they also talked from the start about how Go makes it easy to leverage many CPUs...


And it did and does, provided you have a workload suited to many CPUs. It's just that with the ancient scheduler, setting GOMAXPROCS>1 with a concurrent but not inherently parallel workload would often be slower. It got a lot better around 1.2 and is finally a non-issue in 1.5.


Don't forget the race detector! It has also gotten easier to detect subtle concurrency errors which in turn means less deterministic execution is easier to handle.


Concurrency doesn't rely on the number of available processing cores. You're thinking of parallelism.


I'm quite familiar with the distinction you're trying to make but the term concurrency is not that strict in common usage:

“In computer science, concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. The computations may be executing on multiple cores in the same chip, preemptively time-shared threads on the same processor, or executed on physically separated processors.”

https://en.wikipedia.org/wiki/Concurrency_%28computer_scienc...

If you were to go back and reread my comment, note that I used the broader term concurrency at the start – which includes both classic async patterns within a single thread, where operations may be interleaved but shared resources aren't updated simultaneously, and true parallel execution – and then specifically referred to parallel execution at the end as a source of potential pitfalls.


Sure, but now some programs will break. This is a breaking change.


This just changes the default, which anyone can do in their OS. If your program breaks when run by someone who set GOMAXPROCS... that's a bug in your program.


Those programs were already broken – the developer just didn't know about it.


Strangely, on my computer, Firefox reports a security error on the site. No error in Chrome.

    An error occurred during a connection to golang.org.
    security library: DER encoded incorrectly formatted
    message. (Error code: sec_error_bad_der)
I didn't have this the last time I went on golang.org.


The cert includes a wildcard for *.google, which Firefox rejects because it's a wildcard on a top level domain. I can't really tell who's right here from the specs.

https://bugzilla.mozilla.org/show_bug.cgi?id=1169149


Good catch! The addition is for the new .google TLD.

Looks like the Chromium source code had some "protection" against this, at least at one point in time: http://security.stackexchange.com/questions/6873/can-a-wildc...

It appears that the specification (RFC 2818?) doesn't specify what is acceptable with regards to allowed wildcard certificates - leaving the various implementations to decide what is "too risky" for themselves.


The certificate for golang.org was recently (re)issued. I see a "Not Valid Before" of "Thursday, May 21, 2015 at 5:39:30 AM Eastern Daylight Time".

I wonder if there is a Firefox-unsupported certificate extension in the new certificate?

EDIT: I wonder if it's related to this Firefox issue: Secure connection failed (sec_error_bad_der) due to certs with SAN dNSName entries incorrectly containing IP addresses[0]

However, it doesn't appear the golang.org certificate has any subjectAltName DNS entries using an IP address.

0. https://bugzilla.mozilla.org/show_bug.cgi?id=1148766
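
For anyone who wants to check the SAN entries themselves, a small Go sketch (illustrative) using crypto/tls:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        conn, err := tls.Dial("tcp", "golang.org:443", nil)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("DNS SANs:", leaf.DNSNames)
        fmt.Println("IP SANs: ", leaf.IPAddresses)
    }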


The most frustrating part about this is that when you want to access routers and similar machines via HTTPS and an IP address, it is just no longer possible if they don't support this particular certificate extension.

Firefox doesn't even give you a way to bypass the error, and even the error itself gives absolutely zero indication of what the issue actually IS. It's extraordinarily obnoxious.


Huh? You can't put an IP into a DNS entry but you can put them into an IP entry.


Yet it worked fine for 20 years beforehand and is how a lot of management interfaces are connected to and used. Now a minor bit of certificate paperwork is wrong by fiat, and anything relying on it is no longer allowed to work, period, even with a manual override.

Great. I guess I'll just fork out $10k to satisfy some stupid technicality.


Your argument could be used against pretty much any of the tightening-up of HTTPS cert handling that has been done over the last few years.

"We used to be able to do whatever and be marked a secure and now we actually have to do security right! UNACCEPTABLE".

Not very convincing.


> Firefox doesn't even give you a way to bypass the error

Removing the 's' from 'https' in FF works for me.


And if that doesn't exist or auto-redirects to https, you are screwed. No matter how you slice it, not even allowing an override over something like this is super shitty.


Not super shitty, a judgement call.

Browser defaults have to be created to cater to the greatest (lowest?) common denominator. And if you can't figure out how to bypass the SSL warning you shouldn't bypass it.


There is no way to do so without patching Firefox, which is way extreme. Especially for something that amounts to "you put a number in the wrong field of your cert, because this standard didn't even exist yet".

This does not improve security in any meaningful way whatsoever.


Same issue here, using firefox.

OpenSSL doesn't complain though:

    sielicki@wetdog ~ $ openssl s_client -connect golang.org:443 -CApath /etc/ssl/certs
    CONNECTED(00000003)
    depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
    verify return:1
    depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
    verify return:1
    depth=1 C = US, O = Google Inc, CN = Google Internet Authority G2
    verify return:1
    depth=0 C = US, ST = California, L = Mountain View, O = Google Inc, CN = *.appspot.com
    verify return:1
    ---
    Certificate chain
     0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=*.appspot.com
       i:/C=US/O=Google Inc/CN=Google Internet Authority G2
     1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2
       i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
     2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
       i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority
    ---
    <...>
    Start Time: 1432839068
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
    ---

Makes sense, I guess... firefox/iceweasel don't look at /etc/ssl/certs, they only trust what mozilla puts out.


Same here. If you go to the link without ssl it'll load for you though.


HTTPS Everywhere is the bane of my crap-configured-ssl-server existence :(.


Click on the icon in the menu bar. It allows you to disable the rules for domains where it is broken.


This probably always should have been the default, since bumping GOMAXPROCS from 1 to N can reveal bugs.


Worth noting that they address this concern in the doc:

> Correctness. There is one non-performance concern. Increased parallelism could make bugs in racy programs more likely to cause crashes or other problems. The setting of GOMAXPROCS=1 may have thus far let those bugs go undetected. Raising it may therefore make buggy programs less reliable. We hope that both the race detector and the introduction of goroutine preemption and interleaving in Go 1.1 has already helped programmers identify and eliminate many such bugs. In any event, we can’t reasonably hobble the performance of working Go programs for fear of breaking buggy ones. If such buggy programs do turn up, the authors can set GOMAXPROCS=1 explicitly to keep them working until the bugs can be fixed.


As someone with little experience in Go, I'm curious: How do racy programs end up crashing? Do they just end up in some inconsistent state and panic or can you actually trigger a CPU fault?


You get weird little unexplained panics, more or less. Stuff like nil pointer references, errors with map/slice access, and deadlocks.


In theory the failures could be pretty insidious - data could be corrupted without any crash, or in a way that causes a crash in an almost unrelated part of the program. I have no idea how likely the really squirrelly failure modes are, but there will be much scratching of heads and gnashing of teeth when they do pop up. The race detector probably helps a lot.


You can segfault with arbitrary undefined behavior [1], but I've never heard of it happening in practice.

[1]: http://research.swtch.com/gorace


Setting something to nil and then referencing it is a trivial race condition to create which can cause a crash.
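
Something like this hypothetical snippet; with GOMAXPROCS>1 the window where p is nil is hit far more often, and the race detector would flag it:

    package main

    type box struct{ v int }

    func main() {
        p := &box{v: 1}
        go func() {
            for {
                p = nil // racy write
                p = &box{v: 1}
            }
        }()
        for {
            _ = p.v // racy read: may panic with a nil dereference
        }
    }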


Thanks for this response, and all the others. For some reason I had it in my head that Go's CSP concurrency model precluded sharing of that kind.


Golang's CSP idiom precludes that sharing. The language itself was explicitly designed not to preclude it: if you know what you're doing (or, unfortunately, don't), you can easily design concurrent data structures using locks or atomic operations. Golang doesn't even cordon this stuff off as "unsafe"; like in most languages, shared mutable state is the default.

That said: unlike, say, string counting idioms in C, the share-using-channel idiom in Golang is powerful, in the sense of: it's what most people reach for by default.

For a language as opinionated as Golang, it's extraordinarily pragmatic. It is designed first and foremost to be a tool, and then a lot of other things, before being a statement about how to design safe programs.
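
A tiny sketch of the two styles side by side (illustrative names; both are plain Go, neither marked unsafe):

    package main

    import "sync"

    // Shared-memory style: any goroutine may touch count, under a lock.
    type counter struct {
        mu    sync.Mutex
        count int
    }

    func (c *counter) inc() {
        c.mu.Lock()
        c.count++
        c.mu.Unlock()
    }

    // CSP style: one goroutine owns count; others talk to it on channels.
    func ownedCounter(incs <-chan struct{}, reads chan<- int) {
        count := 0
        for {
            select {
            case <-incs:
                count++
            case reads <- count:
            }
        }
    }

    func main() {
        incs := make(chan struct{})
        reads := make(chan int)
        go ownedCounter(incs, reads)
        incs <- struct{}{}
        println(<-reads) // 1
    }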


I don't actually know of a single language that has locks and atomics, but marks them unsafe. All languages I know of either have them and mark them safe and idiomatic (for example, Golang or Rust), or don't have them at all (for example, Erlang).


In the future some Go programs may fail or perform really badly (not exit for minutes) with GOMAXPROCS=1. Since Go isn't preemptively scheduled, if you have a goroutine in a long-running or busy loop, the program will run normally using 2 CPUs and freeze with 1. Even well-written programs can become "chunky" with 1 CPU and exhibit huge latency spikes not seen with 2 CPUs, because of M:N threading.
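
A hedged illustration of that failure mode (assuming the old non-preempting scheduler; names illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        go func() {
            for {
                // no function calls, channel ops, or allocations:
                // this loop never yields to the scheduler
            }
        }()
        time.Sleep(100 * time.Millisecond)
        // With GOMAXPROCS=1 the spinning goroutine takes the only
        // thread once Sleep yields, and this line may never run.
        // With GOMAXPROCS>=2 it prints promptly.
        fmt.Println("still responsive")
    }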


It's sort of preemptively scheduled on function call boundaries now, isn't it?


There was preemption in some places. I thought it was when the stack grew or shrank, but who knows. Checking at every function call seems like a huge waste.

This probably won't cause anywhere near as many problems as >1 CPU, but still, since VMs sometimes have 1 CPU and almost no developer computers do, I expect it will not be unheard of.


That's the first thing I thought of as well. Bumping from 1 to 2+ might reveal potential concurrency bugs. So there is a higher chance code will break during the update. They should perhaps put a large warning about this in the release notes.


Agreed, but even ignoring how it shines light on bugs that might otherwise go unnoticed it should have been the default if only to follow the "principle of least astonishment" (which Go is generally very good about).

Even knowing the reasons why this was done (which have been widely discussed for years), I still think the default should have always been to have it set to the runtime.NumCPU value and then have clear documentation on why developers might want to set it to 1 in your own program.


Or they could have originally defaulted to GOMAXPROCS=2 to uncover these races while still minimizing the problems from their goroutine scheduler.


GOMAXPROCS=2 wouldn't make things better, since the Go runtime has all sorts of special cases to handle GOMAXPROCS=1 that don't kick in with any higher value (in particular, removing locks).


That to me looks like the worst of both worlds.


> We hope that both the race detector and the introduction of goroutine preemption and interleaving in Go 1.1 has already helped programmers identify and eliminate many such bugs.

I recall that it took me a while to discover the race detector when I started using Go. Now I always set GOMAXPROCS=runtime.NumCPU() and use the race detector.

https://golang.org/doc/articles/race_detector.html
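
A minimal example of the kind of bug it catches - a hypothetical counter_test.go, run with go test -race:

    package main

    import (
        "sync"
        "testing"
    )

    func TestRacyCounter(t *testing.T) {
        var n int
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                n++ // unsynchronized write: flagged by -race
                wg.Done()
            }()
        }
        wg.Wait()
        t.Log("final count:", n) // often < 100 even without -race
    }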


Go 1.5 is shaping up to be a pretty major release! Definitely feels more major than 1.3 and 1.4 were. Really happy it's progressing so nicely. Can't wait to see the GC improvements in 1.6+.


Direct link for Firefox users since there is a certificate error: https://docs.google.com/document/d/1At2Ls5_fhJQ59kDK2DFVhFu3...



> we propose to change the default setting of GOMAXPROCS to the number of CPUs available, as determined by runtime

That's what Erlang does, too.


I hope that at least the standard libraries that come with Go are thread safe (?)

Do they test them on a regular basis, appropriately, with GOMAXPROCS > 1?


My prediction is that it is going to reveal tons of bugs in libraries. They really should have defaulted it to 2.


Curious, why? Seems that anything over 1 would reveal the same bugs.


I'm interpreting the suggestion to mean that the original default with Go 1.0 should have been 2, because that is the lowest value (and thus would have the least of the performance problems documented here) that would, as you say, reveal the same bugs. That way raising it to runtime.NumCPU wouldn't be a potentially breaking change.

Personally, I've found the issue to be documented well enough, and it is easy enough to set GOMAXPROCS to whatever works best for a given program, that I'm content with how the team has handled it. And I'm thrilled at the work that is being done now to fix the issue once and for all.


I thought it was set to 1 CPU because that's the only way for Go to guarantee thread-safety. Is that fixed in 1.5, or does Google just choose to ignore that in this release due to the performance benefits?


No, that was one of the reasons for GOMAXPROCS=1 on App Engine, which is a hardened environment.

For your own code, we trust you to fix your races or deal with the consequences.


Does this mean that App Engine will never see it raised above 1?


No.


I don't know what you mean by "guarantee thread-safety". Whether your program is thread-safe is up to you, not Go.


They address this in the document:

> Correctness. There is one non-performance concern. Increased parallelism could make bugs in racy programs more likely to cause crashes or other problems. The setting of GOMAXPROCS=1 may have thus far let those bugs go undetected. Raising it may therefore make buggy programs less reliable. We hope that both the race detector and the introduction of goroutine preemption and interleaving in Go 1.1 has already helped programmers identify and eliminate many such bugs. In any event, we can’t reasonably hobble the performance of working Go programs for fear of breaking buggy ones. If such buggy programs do turn up, the authors can set GOMAXPROCS=1 explicitly to keep them working until the bugs can be fixed.


Any updates regarding generics and error handling?


Yep! Confirmed in Go 1.6. Also - algebraic data types, monads, operator overloading, actors, STM, multiple inheritance, exceptions, LINQ, full opt-in dynamic typing, optional GC! Anything you want! All language features, ever conceived, will be crammed into Go, starting with the next release!


I totally want my language to be a superset of C++, Scala, Haskell and APL.


You realize a Monad is a concept, not a language feature. One can implement a Monad in any language.


No shit! I'm buying my ticket to Gophercon immediately


I just scanned registrations and don't see you there....


Too bad they don't include range datatypes, à la Ada, where you can state "x is a number in [3;17]". Those have existed since the 1970s. I can't imagine how one can ever write a program without range datatypes.

Oh, I need self-modifying code and both computed and assigned gotos, too. I've been using them since FORTRAN.

Come on, Go team, fix your language so that we can start using it.


Pity about the algebraic data types + multiple inheritance. That probably won't play nice.


Totally. Even the guy who first introduced data types + multiple inheritance has come out to say that it's the worst thing he has ever done in the whole history of everything from always.


Who is that guy?


No guy, this is sort of an inside joke about the nil discussion. Every time Hoare comes up.


Got it. Thanks :-)


Well let's not exaggerate. Covariance can still be dealt with to some degree.


> Any updates regarding generics and error handling?

Don't hold your breath. It's unlikely the Go team will add generics to the language; that's obviously a political issue, not a technical one. Same for error handling. At least 2 people on the record, including Rob Pike, said no new feature will be added to the language.

It doesn't mean you should stop asking for them, but it ultimately means that people wanting generics and other features in Go should consider a fork

Forks are ALWAYS healthy; the nodejs/iojs story is yet another proof of it. The thing people wanted (iojs) won over the original project, which forced nodejs to merge iojs back.

Like many, I personally think Go isn't striking the right balance between minimalism and features. People who are against generics are in favor of trading compile-time type checking (which is always safe) for run-time type assertions (which can fail). But they refuse to acknowledge that fact. That's what makes their argument against generics wrong and will ultimately damage any potential growth for the language. They are basically working against their own interests.


I'm sorry, but no: Generics are a technical issue and are not a political one. The Go team is not against generics per se, only against doing things that are not well understood and/or don't work well with Go.

There are deep technical issues that must be solved to fit the idea of generics into Go in a way that works well with the rest of the system, and we don't have solutions to those. I wrote on my blog about one issue years ago (http://research.swtch.com/generic), but there are others too. Even supposing you get past the problem on that page, the next thing you would run into is how to allow programmers to omit type annotations in a useful, easy-to-explain way. As an example, C++ lets you write make_pair(1, "foo") instead of make_pair<int, string>(1, "foo"), but the logic behind inferring the annotations takes pages and pages of specification, which doesn't make for a particularly understandable programming model, nor something the compiler can easily explain when things go wrong. And then there's a princess in another castle after that one I am sure.

We have spoken to a few true experts in Java generics and each of them has said roughly the same thing: be very careful, it's not as easy as it looks, and you're stuck with all the mistakes you make. As a demonstration, skim through most of http://www.angelikalanger.com/GenericsFAQ/JavaGenericsFAQ.ht... and see how long before you start to think "was this really the best way to do this?" (For example, http://www.angelikalanger.com/GenericsFAQ/FAQSections/TypePa..., but note that the latter page is only one part of the FAQ, not the entire FAQ.)

To be very clear, we acknowledge this fact: there are definite disadvantages to not having generics. You either use interface{} and give up compile-time checking or you write code generators and complicate your build process. But there are also definite disadvantages to generics as implemented in existing languages, and there is a very large advantage to not compromising today: it makes adopting a better solution tomorrow that much easier.
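
To make that first trade-off concrete, a quick sketch (illustrative): the interface{} container accepts any element type at compile time, and a mistake only surfaces as a run-time type-assertion panic.

    package main

    import "fmt"

    // A "generic" stack via interface{}: no compile-time element type.
    type Stack struct{ items []interface{} }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v
    }

    func main() {
        var s Stack
        s.Push("oops")     // compiles: any type is accepted
        n := s.Pop().(int) // panics at run time: string, not int
        fmt.Println(n + 1)
    }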

As I said in the interview at http://www.pl-enthusiast.net/2015/03/25/interview-with-gos-r...:

> Go is more an engineering project than a pure research project. Like most engineering, it is fundamentally conservative, using ideas that are proven and well understood and will work well together. The research literature’s influence comes mainly through experience with its application in earlier languages. For example, the experience with CSP applied in a handful of earlier languages—Promela, Squeak, Newsqueak, Alef, Limbo, even Concurrent ML—was just as crucial as Hoare’s original paper to bringing that idea to practice in Go. Programming language researchers are sometimes disappointed that Go hasn’t picked up more of the recent ideas from the literature, but those ideas simply haven’t had time to pass through the filter of practical experience.

I believe generics is one of those ideas. It certainly needs at least one more iteration, possibly a few more than that.


CSP is well understood because it's used in a few minority languages, yet generics aren't despite being almost ubiquitous?

I understand the caution, but this justification doesn't add up.


The complexity in language theory does not stem from a single concept, but from their mix. If you take something like generics on its own, it is rather simple to add to a language. But once you mix it with other concepts, the complexity rises exponentially.

Generics + subtyping (inheritance) gives rise to a number of complex situations. Java needed to embrace bounded polymorphism in the type system in order to handle this. And we don't know about a simpler type system which is sound and has that particular mix. If you remove subtyping and replace it with another construct, type classes or implicits, then generics are somewhat easier to fit to the language.

CSP turns out to be far more orthogonal, as long as you use a typed channel primitive. Hence, it is easier to fit CSP into an existing model because its composition with other type theoretic constructs is far simpler.


The concept is ubiquitous. The specific design for each language is different, and what they all seem to have in common is that they require enormous complexity. See the rest of what I wrote.


Well, the fact is that many mainstream languages have generics. You have the advantage of learning from others' mistakes (e.g. made by those Java guys) and practice.

I understand that the Go team are more "systems" people than "languages" people in general. Did you try to consult with the Dart team? e.g. with Gilad Bracha. Maybe you should consider hiring a languages person for the Go team.


It's just not an easy problem to solve. And I'm not convinced it has been already solved, much less solved correctly, in the vast majority of cases.

This is legal(-ish) C++:

    template<typename T> T Add(T one, T two) { return one + two; }

It's completely illegal in C#. And Java. Both have "generics". Both are wildly different.

Then you have the Haskell camp which look at all of these "generics" and shake their heads and laugh bitterly since their static type system and "generic" blow all other ones out of the water anyway.

So when you say you want "generics" - which flavor do you want? How does it play with Go's interfaces? How deep down the rabbit hole do you go?

It's very easy, when you're not familiar with the problem to say "just do it" - but everyone seems to have their own idea of what "generics" is, how it works, how to constrain parameters, etc.

So even if the Go team picked an implementation from Camp A, I'm 120% positive that camps B, C, D, and X would declare it the worst implementation ever, and again, they've pleased no-one.

I'm glad they're sticking to their guns on this one.

Show them an implementation that works in Go, and they've said multiple times they might do it. Put up or shut up, basically.


I don't understand your logic - just because different languages have some differences in how generic compile-time type-safe code is written, Go should NOT have this at all? And the only reason is that some subgroup might not be fully pleased?

I've been a C++ engineer for more than a decade now, and I'd be more than happy if Go had ANY kind of generics - even Ada-like, with explicit instantiation... Anything is better than the current interface{} run-time checked stuff.


Sounds like you have something in mind. All you have to do now is:

1. Expand on your idea with some examples.
2. Show how this can be done without breaking existing Go code.
3. Show where the performance hits are - runtime? compile time? binary size? memory?
4. Show how this plays along with Go's implementation of interfaces, as well as other already-generic types, such as channels and maps.

If you can do all that, realistically, then I don't see why it wouldn't be taken seriously?

Heck, if you can provide even an example implementation (talk is increasingly cheap) I bet you'll get lots of people on-board!

You seem, like many others, to think that Go core team members don't want "generics". All they have ever said is - "we don't want bad generics". That's all.

Anyone can say "I don't know what exactly Go generics are, how they look like, how they perform, how they change the syntax, all I know is - I want them". Like I said, talk is cheap.


> Show them an implementation that works in Go, and they've said multiple times they might do it. Put up or shut up, basically.

Didn't someone fork Go and do exactly that only to get ignored? Perhaps it was a rumor, do you know? I'll see if I can find any reference to it.


That would be interesting to see. (I've never heard of such a thing...)


Best I could find was this:

https://groups.google.com/forum/#!topic/golang-nuts/L5Lothlv...

It also led to a real world example where the author tried using Go for a project and had a lot of need for Generics:

http://oneofmanyworlds.blogspot.com/2011/11/draft-2-of-propo...


Many members of the Go team have a background in languages, and are part of a greater community of software practitioners involved with language design. Of course they have discussed these topics extensively with many people; they do not exist inside a vacuum.


> You have the advantage of learning from others' mistakes (e.g. made by those Java guys) and practice.

And the lesson learnt from studying other languages is "Don't add generics unless it's right for the language, even if that means choosing, on balance, to _NOT_ implement them at all".


Can you provide some examples of languages where people regret having implemented generics at all (as opposed to wishing they had been implemented in a different way)?


Java is the obvious example that springs to mind. Because of the requirement to be backwards compatible with Java 1.4, all the type information is erased during compilation. The call for reified types in Java has been heard for over a decade now.

C++ compile-time templating is the cause of egregiously long compilation times. One of the original drivers for the creation of Go was the speed of C++ compilation.

I don't think you can separate a desire that generics was implemented a different way in these examples from the regret that their current implementation causes.


If you are willing to pay with execution speed, you can get fast compiles for generics. But C++ opted for fast execution speed and hence pay with long compile times.

I'd much rather have fast compilation and slower execution and some kind of generics, but the current situation is that people are insanely attached to their pride in fast program execution.


This is a common argument, but too much software is painfully slow to use. It becomes a user experience problem.


I thought the lesson was "adding generics after the point where you have to care about backwards compatibility is fraught with peril".


What is generics?


Go is situated as a better node.js. The entire language appeals to people who come from dynamic language backgrounds: python, ruby, javascript, etc. It is also a language that actively works against clever, intelligent, elegant programmers [1]. Rob Pike himself has stated that Go is intended for unintelligent people [1, 2].

Go is the new Java (<=1.5). It's slightly better... but it's not a language that will grow with you. If you're a serious programmer, you'll easily outstrip the usefulness and expressiveness of Go. The tl;dr, per Pike: "They're not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."

[1] http://nomad.so/2015/03/why-gos-design-is-a-disservice-to-in...

[2] http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Fro...


Easy to pick up and deliver with != intended for unintelligent people. At the end of the day, writing software is minimizing effort vs maximizing impact. If a language takes me there faster in practice and has an easier learning curve so team adoption is easier - I'd pick it. Go is pragmatic, and that's what I like about it. I'd like to see generics in Go, but the "blue collar"/"dumb programmer" argument is just a bit of ad-hominem IMO.

Coming from years of doing both Python/JS and C/C++, Go is a nice balance between the strong points of each camp, and it fits amazingly well in some cases. I hope it will be the new Java, because I love writing in Go and hate writing in Java :)


Could people stop trolling about generics on every Golang topic?


The Golang FAQ has mentioned from day one that they are going to consider adding ge<censored>ics someday in the future. So I was wondering if there are any updates regarding this. I didn't expect such fanboyism though...


The FAQ also says that Go 1 will not introduce backwards-incompatible changes. Generics would be such a backwards-incompatible change.

So the answer to "are there generics in 1.x release" will always be "no".

http://golang.org/doc/faq#What_is_the_status_of_the_project http://golang.org/doc/go1compat

Go 2 is years away (if it even happens).


How are generics backward-incompatible? They would obviously affect forward compatibility, but what previously valid programs would have to become invalid with the introduction of generics? (Incidentally, as an interesting contrast, this GOMAXPROCS default change will break some existing programs.)


The only way to keep old code working would be to add generics but continue having an awful standard library that never uses them.


How will GOMAXPROCS default change break existing programs?


Per the OP: "Increased parallelism could make bugs in racy programs more likely to cause crashes or other problems. The setting of GOMAXPROCS=1 may have thus far let those bugs go undetected. Raising it may therefore make buggy programs less reliable."


GOMAXPROCS isn't really a language feature, that's all implementation specific AFAIC.


What old code would break when adding generics?


>"are there generics in 1.x release"

I've never asked this. Certainly it would have been big news. I asked if there are any updates in general - any new thoughts, plans...


Generics were added to Java 1.5 in a backwards compatible way: all your existing Java code would build fine with Java 1.5 javac. So it is not inherently impossible.


Even if there were news, it's certainly off topic for this post.


If you ever implement it, I suggest a release on april 1st.


That would be my favorite April Fools ever.



