Go 1.20 released (go.dev)
318 points by petercooper on Feb 1, 2023 | 193 comments



This change makes me SO HAPPY:

    The math/rand package now automatically seeds the global random number 
    generator (used by top-level functions like Float64 and Int) with a random 
    value, and the top-level Seed function has been deprecated. Programs that 
    need a reproducible sequence of random numbers should prefer to allocate 
    their own random source, using rand.New(rand.NewSource(seed)).
We've had some truly nasty bugs from people who weren't familiar with the prior behavior of having a default seed of zero for the global random-number generator. This is going to save many people so much heartache.


> We've had some truly nasty bugs from people who weren't familiar with the prior behavior of having a default seed of zero for the global random-number generator. This is going to save many people so much heartache.

Agreed, but worth noting that this does not make it cryptographically secure. The output can still be predicted. For cryptographic security, you need crypto/rand: https://pkg.go.dev/crypto/rand

In general, my advice is to use the cryptographically secure RNG unless you specifically know you need reproducibility (e.g. scientific simulation, or map generation in video games). With non-secure RNGs, it's very easy to accidentally expose yourself to problems without realizing it.
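For illustration, a minimal sketch of pulling secure random bytes and a bounded integer out of crypto/rand:

    package main

    import (
        "crypto/rand"
        "fmt"
        "math/big"
    )

    func main() {
        var buf [16]byte
        if _, err := rand.Read(buf[:]); err != nil {
            panic(err) // a failed entropy read is not recoverable
        }
        n, err := rand.Int(rand.Reader, big.NewInt(100)) // uniform in [0, 100)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%x %v\n", buf, n)
    }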

(Also, FYI, the default seed was 1, not 0).


The whole point of math/rand is that it's not cryptographically secure.


> The whole point of math/rand is that it's not cryptographically secure.

Sure, and that's even been in the documentation since before Go 1.0, yet people still make mistakes with it in practice. I've found it worth making the point explicit. Particularly in a case like this, where a casual reader might not notice the distinction between "non-reproducible" and "secure".


OK, but a good sign that you're not using a cryptographic RNG is that you're somehow "seeding" it.


I think I have a better way to look at this:

If you, as a developer who doesn't know much about random numbers or cryptography, think you need a random value, and you don't know if it needs to be cryptographically secure or not, you may as well just use a cryptographic RNG interface (unless you're using so much that performance or entropy becomes an issue.)

I think in most cases, it's pretty benign if you use cryptographic randomness even when it's not necessary. But, if you use math/rand when you wanted cryptographic randomness, to generate IDs or some such, that would be a much worse outcome.

Maybe it's bad for someone to use an RNG without understanding this well enough, though, and they should instead use a higher level abstraction if at all possible. But I can get behind the general idea.


> If you, as a developer who doesn't know much about random numbers or cryptography, think you need a random value, and you don't know if it needs to be cryptographically secure or not, you may as well just use a cryptographic RNG interface (unless you're using so much that performance or entropy becomes an issue.) I think in most cases, it's pretty benign if you use cryptographic randomness even when it's not necessary. But, if you use math/rand when you wanted cryptographic randomness, to generate IDs or some such, that would be a much worse outcome.

This is more or less what I was getting at. The main two downsides to using crypto/rand are:

- ergonomics (crypto/rand is a less user-friendly interface than math/rand)

- concurrent performance

The first one can be easily solved with a wrapper[0]. The second is particularly relevant here, because the main distinguishing feature of the global RNG in math/rand is that it is safe for concurrent use, whereas user-instantiated RNGs in math/rand are not. The big downside to this is that it's very easy to end up with performance issues due to mutex contention when multiple packages all use the global RNG (which is common in practice).
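To sketch the first point: such a wrapper is roughly an implementation of math/rand's Source64 backed by crypto/rand. This is just the general shape (package name mine), not the linked package:

    package csrand

    import (
        crand "crypto/rand"
        "encoding/binary"
        mrand "math/rand"
    )

    type cryptoSource struct{}

    func (cryptoSource) Seed(int64) {} // no-op: a CSPRNG takes no caller-provided seed

    func (s cryptoSource) Int63() int64 { return int64(s.Uint64() >> 1) }

    func (cryptoSource) Uint64() uint64 {
        var b [8]byte
        if _, err := crand.Read(b[:]); err != nil {
            panic(err) // failure to read system entropy is unrecoverable
        }
        return binary.LittleEndian.Uint64(b[:])
    }

    // New returns the familiar math/rand API, backed by crypto/rand.
    func New() *mrand.Rand { return mrand.New(cryptoSource{}) }

Note that a *rand.Rand built this way isn't documented as safe for concurrent use, so treat it like any other non-global Rand.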

I actually submitted a CL (patch) to fix the mutex contention issue in the global RNG about five years ago, but it was rejected on the grounds that callers might depend on the specific sequence of numbers with the default seed, which would arguably break the compatibility promise. That apparently is no longer a concern (this change breaks the same thing, and the notes in the CL justify it), so I might resubmit it now.

crypto/rand is a little less performant in the single-threaded case, but not much - I think it'd be rare for that to be the bottleneck in real-life workloads at scale. The mutex, on the other hand, is a common bottleneck - I've run into this multiple times in multiple different codebases, one of which is what motivated the aforementioned CL.

So I generally advise people to use crypto/rand unless they are certain they need reproducibility, because the potential downside of accidentally using a non-secure RNG when you actually need one is quite high[1], but the downside of using a threadsafe cryptographically-secure one when you needed a threadsafe non-secure one is quite low: you're already taking much of the performance hit because of the mandated mutex, so the number of use cases that actually require the global RNG is quite small.

[0] e.g. https://pkg.go.dev/github.com/andrew-d/csmrand

[1] there are a number of places where RNGs end up being used that don't obviously result in exploits but nevertheless result in exploits in practice. For the average developer, it's easiest just to avoid that quagmire altogether, rather than try to reason about the potential adversaries (and potentially get that wrong).


> OK, but a good sign that you're not using a cryptographic RNG is that you're somehow "seeding" it.

The change here is specific to the global RNG, which users often used without explicitly seeding - e.g. calling rand.Int() without first calling the now-deprecated rand.Seed(int64).

The distinction is obvious to people who have domain expertise here, but I've found many people make mistakes with it in practice, because it's easy to do.


That's not quite right. There is such a thing as a CSPRNG (https://en.wikipedia.org/wiki/Cryptographically_secure_pseud...). But you still have to seed it with another source of randomness (ideally non-computational) to actually get the "cryptographically secure" bit.


What? Cryptographic RNGs can be seeded, this is done all the time. Being able to seed a random number generator has no bearing on its cryptographic security.

As examples of secure cryptographic RNGs that can be seeded:

- Fortuna (has a seedable generator without the entropy pool parts)

- HMAC-SHA256 DRBG

- ChaCha20 and Salsa20

- AES-CTR

- Keccak


The system feeds unpredictable bits into its kernel random number generator and then expands it, through /dev/urandom and getrandom. You, as a developer, shouldn't be messing with any of this. (Obviously: Keccak, AES, and ChaCha are not themselves CSPRNGs at all, but rather primitives cryptography engineers can use to create them).

If you're seeding an RNG, you're almost certainly working with a userland RNG, which is, almost always, a mistake.


I think this is more about interfaces than algorithms. A good cryptographic RNG interface will generally not expose the ability to explicitly set the seed for a given algorithm. Instead it would either abstract this entirely away, or provide some kind of interface for adding entropy to an entropy pool. The PRNGs themselves obviously do need to have seeds...


Hear me out, I think in its deprecated-but-not-removed state it is actually more dangerous.

Projects that have been seeding the random generator like they should will suddenly think “oh, I don’t need to do that anymore” and get rid of their manual seeding.

Then a compromised or rogue library decides to seed the global generator itself to a hard-coded value in an `init()`, meaning that merely importing the library pins the seed to a static value again.

It would look pretty innocuous and non-obvious in code AND be potentially pretty difficult to notice happening in a lot of use cases. For bonus points (and to make it slightly harder to detect), they could even pick from a small set of seeds.
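To sketch how small the malicious bit could be (package name hypothetical):

    // buried somewhere deep in your dependency tree
    package leftpad

    import "math/rand"

    func init() {
        rand.Seed(1) // deprecated but still effective: re-fixes the global seed
    }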

The right answer, probably just generally anyway, is to never use the global generator, and always create your own instance. Global state is a danger once again


I think that's an interesting case, but any program where the random numbers have security implications should already be using crypto/rand, and not math/rand anyway.


Do we know if the Seed function is now a no-op, or does it still set the seed value like before?

I don't think people would use the "global" random functions for deterministic randomness. At least I hope they didn't...


It does set the seed value like before. The docs say calling Seed(1) forces the old behavior. https://pkg.go.dev/math/rand#Seed


I agree that would be a rough situation, but why would a library ever do that in the first place?


E.g. a supply chain attack.


I remember when I started programming, this was one of the first quirks that really surprised me. I can just hear myself exclaiming: "This is meant to be random! Why do I keep getting the same result?!"

I feel strangely vindicated seeing this change.


Only if you started programming in Go. I've never in my life heard of another project that by default seeds with 0. That's insane.

EDIT: Okay, C and a bunch of other languages I don't use. Keep the roasts coming, guys. :D


You've never heard of the C programming language?


This was actually one of the tripups for me when I learned C originally to do OpenGL.

POSIX (or at least ISO C) defines rand as being seeded with 1 by default.


For anyone currently having a blast with ChatGPT/copilot, these are the sorts of unexpected behaviors you need to keep in mind when using it.

Stay safe everyone.


If you know how PRNGs work and it comes back pre-seeded, that's some magic that you may not be expecting

But of course, rather than make people study up, let's just dumb shit down


How old is your life?


MATLAB had a constant random seed around 2005; no idea whether they've changed it.


Every single BASIC variant I used in the 80s/90s right up to and including QBasic?


I'm quite sure I encountered this in VB.NET like 15 years ago or so.


I thought C did that?


Why is that insane?


Because almost everyone who asks for a random number expects a random number. Reusing a predictable seed _by default_ violates that totally reasonable expectation.

(BTW, I'm deliberately avoiding making any distinction between so-called "true random" and PRNG above, because the difference isn't actually meaningful here!)


I'd say the functions are commonly misnamed. The "secure random" function should be the "random" function, and the "PRNG based on a user-provided seed" should be called something else. Maybe "MonteCarlo", since that's one of the common uses.


I always thought it was perfectly intuitive that "random" wasn't really random on a computer.


Insane seems to me obviously an intentionally dramatic word choice by GP, but it seems clear to me that the answer is “because subsequent runs of the same program would produce the same sequence of random numbers.” This violates my personal intuition and assumptions about a programming language’s standard-library RNG behavior.

Not a Go programmer though but that was my understanding.


In the Go std lib, there's an explicit difference between a cryptographically secure number generator and a pseudo-random number generator. Their names alone reflect the difference very well: "crypto/rand" vs "math/rand", and the difference and properties of each are well documented.

Personally, I've written a whole bunch of unit tests in the past that rely on that repeatability property "math/rand" has over "crypto/rand".


There's many uses for random numbers where cryptographic random is unnecessary, but where different runs of the program should use different pseudo random sequences.


Of course, but in those cases, you should make sure that your initial seeds are random. It seems like in the older Go cases, the initial seed was fixed. But, if you wanted random numbers, you could have used your own (truly random) seed.

I don't really see an issue with either default, so long as it's documented and you can use a fixed seed if you'd like. I personally like needing to set a random seed explicitly. But then again, I learned about using random number generators a long time ago when this was always required, so what I think is obvious is probably a surprise to many (as shown by this stdlib change).

The only downside of this new change is that the old way forced the choice to be deliberate: if you needed a cryptographically secure random number, you had to explicitly reach for those functions in the stdlib. The choice should still be deliberate, but people don't like to read documentation...


IIRC this is how QBasic worked back in the early 90s.

I wish I had found this little throwback (whether accidental or not) before it got fixed - though I definitely agree with the change.

F


Would you be able to expand on why using a seed of 0 is an issue?


If you randomly generate numbers using `math/rand`, then every invocation of your program is going to see the same sequence of "random" numbers. If you happen to persist those random numbers and expect a uniform distribution, you're going to be sadly surprised in the future.

It's the kind of "bug" that doesn't manifest until your data is thoroughly boned.


So, if I use `math/rand` the RNG will always output the same sequence of random numbers?

How do I make sure I'm passing a random seed to my RNG?

(I must be missing something here)


> So, if I use `math/rand` the RNG will always output the same sequence of random numbers?

math/rand provides global functions using a global RNG, as well as the ability to instantiate your own RNG and call functions (methods) on that RNG.

Previously, the global functions all used a seed of 1, which made them generate identical sequences. Now, they use a random seed, which makes them less easily predictable (though still predictable).

There is no change to the self-instantiated RNGs.

> How do I make sure I'm passing a random seed to my RNG?

With the change, using the global functions in Go 1.20+ is the same as instantiating your own RNG with a random seed.
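Concretely, a minimal sketch:

    package main

    import (
        "fmt"
        "math/rand"
    )

    func main() {
        fmt.Println(rand.Int()) // global RNG: randomly seeded in Go 1.20+, differs per run

        r := rand.New(rand.NewSource(42)) // your own RNG: explicit seed, reproducible,
        fmt.Println(r.Int())              // but not safe for concurrent use
    }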


The example in previous versions of the Go math/rand package suggested using time.Now().UnixNano() to seed the RNG source since that's essentially a random int64.
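That is, the pre-1.20 idiom (now deprecated along with Seed itself):

    rand.Seed(time.Now().UnixNano())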


To get cryptographically secure randomness you can use one of these C functions:

*BSD, newer macOS, and GNU libc: int getentropy(void *buffer, size_t length);

Linux 3.19+: ssize_t getrandom(void *buf, size_t buflen, unsigned int flags);

iOS and macOS 10.7+: int SecRandomCopyBytes(SecRandomRef, size_t, uint8_t *);

Fuchsia: void zx_cprng_draw(void* buffer, size_t buffer_size);

Any *nix: Read from "/dev/random" (or "/dev/urandom" under the assumption that your system has already enough entropy).

Windows: There are different functions/libraries depending on the Windows version and some of them are a complicated multi-step mess.

Some time ago I wrote a C library that abstracts that away just for fun: https://github.com/panzi/portable_get_random
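For comparison, Go's crypto/rand hides this same per-OS dispatch (getrandom, getentropy, and friends, as far as I know) behind a single call:

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    func main() {
        buf := make([]byte, 32)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        fmt.Printf("%x\n", buf)
    }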


What a fascinating insight into the naming conventions of the different OSes. Mac: long and descriptive. BSD: short and accurate. Linux: short, details are for the specs. Fuchsia: the functional HW reference? What is that?


Fuchsia is an open-source capability-based operating system developed by Google. —Wikipedia


It's a reasonable choice, and will be unique for most program runs, but it's not exactly random


Philosophically, there is no such thing as a random number


There are ways to get truly random numbers into a computer. Certain phenomena are theorized to be fundamentally random, and can be used as inputs to hardware.

https://en.m.wikipedia.org/wiki/Hardware_random_number_gener...


The sound of conviction in your first sentence does not match the "theorized to be" in the second sentence. I recommend that you don't bring a "for all intents and purposes" to a "philosophically" fight. ;)


However, the universe is based on randomness at its quantum core.


Or perhaps the universe is a simulation whose prng was seeded with 1 just like golang does

Maybe we're in the first batch of simulations, and the tester came along and asks why they're all identical. The cosmic coder then realises that they forgot to call the function to seed the prng.


Is it though? I’m merely a layperson here so I might be grossly misunderstanding, but I didn’t know determinism had been ruled out by quantum physics. I was under the impression that quantum phenomena are best described using probability. That means there might be an element of true randomness going on, but also that these systems are so chaotic that an observation is going to have some noise, regardless of any randomness at the core. The latter says nothing about how random things are, merely that they appear with some probability; they could be completely deterministic for all we know.


The generator returns the same sequence every time. For instance, Kubernetes CSI drivers have a supposedly unique csiProvisionerIdentity, but they use math/rand without seeding, so they're all using the integer 8081.


Having the seed value set to the same value at run time will cause the pseudorandom number generator to produce the same sequence of values. In essence just creating a new random number generator in go w/o setting the seed will not actually generate random values.


> In essence just creating a new random number generator in go w/o setting the seed will not actually generate random values.

This change only affects the global RNG. Creating a new RNG has always required setting a seed


To be clear, if setting a global seed was required it would not be so bad - but silently seeding with 1 is an edge that's pretty easy to trip on.


Which is also significant because the global RNG is safe for concurrent use. Instantiated RNGs are not.


> Which is also significant because the global RNG is safe for concurrent use. Instantiated RNGs are not.

Amusingly, I submitted a patch for this some years ago, and it was rejected on the grounds that callers might depend on the specific sequence of numbers outputted by the global RNG using the default seed, which could break the compatibility promise.

Now that that's been deemed a non-issue, I might resubmit.


It looks like an API design issue at the intersection of high-level application development and computer science.

Given that randomness requires a seed, and ISO C defines the default seed as 1, it's rational from a comp-sci perspective to expect to supply a seed to produce a random number.

However, because of how ergonomic and safe high-level languages are today, you don't need to be a computer scientist to make a highly available, beautiful and robust application. As such, the inhabitants of the application-development world are considerably less concerned with such details; they want a call to a thing to give an intuitive result.

It's important that tool makers consider their users when creating APIs. Supplying a seed to a function called "getRandomNumber" might be obvious to a C programmer, but not obvious to a JavaScript programmer looking to use Go to improve the performance of their back end service.


How does the seed value get created randomly? I presume it doesn't use the same global random number generator, since that would still make it deterministic?



I’m surprised they don’t just seed using /dev/urandom (or the platform specific analogue).


Not in 1.20 yet but x/exp/maps was just accepted for inclusion in the standard library so presumably it will come in a future version.

The library contains generics based helper functions for working with maps.

https://github.com/golang/go/issues/57436#issuecomment-14125...

https://pkg.go.dev/golang.org/x/exp/maps

https://cs.opensource.google/go/x/exp/+/master:maps/maps.go
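For a taste of the API as it currently stands in x/exp/maps (import golang.org/x/exp/maps), a small sketch:

    m := map[string]int{"a": 1, "b": 2}
    ks := maps.Keys(m)     // []string, in unspecified order
    vs := maps.Values(m)   // []int
    c := maps.Clone(m)     // shallow copy
    eq := maps.Equal(m, c) // true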


Glad to see this make it into the core.

I've been using this library for ages now...

https://github.com/cornelk/hashmap

Always entertains me to see developers write all the lock/unlock code when they could just use that.


We likely need full coverage of the "tricks" documented on https://github.com/golang/go/wiki/SliceTricks into a generics package as well.

Yes, https://pkg.go.dev/golang.org/x/exp/slices gives some of that, but likely the rest should be added as well.


Hopefully they accept the slice version as well. Both of them contain very helpful functions that would be nice to have in the std library instead of rewriting them everywhere.


You don't rewrite them; you just import exp/slices. The only thing bringing them into std really changes in practice is the import path.


Very useful. I am always writing a little Clone() function for maps.


> Go 1.20 supports collecting code coverage profiles for programs (applications and integration tests), as opposed to just unit tests.

> To collect coverage data for a program, build it with go build's -cover flag,

This is absolutely awesome! I'm looking forward to trying this on some server binaries!
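If I'm reading the release notes right, the new flow is roughly this (the covdata directory name is my choice):

    go build -cover -o server .
    GOCOVERDIR=./covdata ./server          # the binary writes coverage data here on exit
    go tool covdata percent -i=./covdata   # summarize it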


Note that it was already possible before with a command like:

    go test -coverpkg=./... -c . -o main.test
This command builds a test binary with a coverage config. So if you run it, launch your tests, and stop it gracefully, it will spit out a coverage report.

I’m not exactly sure what the change in Go 1.20 is about, maybe it’s easier somehow, I’ll have to try it.


That's not the same. `go test -c` builds a binary that runs your tests, whereas (if I understand correctly) the new thing builds a binary that runs your main(). If it works as I understand, that would be fantastic because then I could get rid of this monstrosity: https://github.com/holocm/holo/blob/master/main_test.go


Yay, now we can combine and balance MC/DC coverage testing with fuzzing. I find SQLite's verification inspiring in that regard: https://www.sqlite.org/testing.html#tension_between_fuzz_tes...


I'm struggling with the previous behavior all the time. It will be much better with this.


> New diagnostic for incorrect time formats

> The vet tool now reports use of the time format 2006-02-01 (yyyy-dd-mm) with Time.Format and time.Parse. This format does not appear in common date standards, but is frequently used by mistake when attempting to use the ISO 8601 date format (yyyy-mm-dd).

It's frequently used by mistake because Go doesn't allow datetime layouts to use the standard YYYY, MM, DD, HH, MM, etc. placeholders, which they ironically used for clarity in their release notes.

I don't understand why Go still forces datetime formats to be specified using "magic numbers" from some time in 2006.


From the docs, the magic reference date:

01/02 03:04:05PM '06 -0700

So that's why it's in 2006, since you asked.

https://pkg.go.dev/time#pkg-constants
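In other words, you spell the layout using that reference date's components:

    now := time.Now()
    now.Format("2006-01-02") // yyyy-mm-dd: the ISO 8601 date people usually want
    now.Format("2006-02-01") // yyyy-dd-mm: the transposed layout vet now flags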


It's still, IMHO, one of the worst decisions Go made; it's definitely not easier to read or reason about.


Yeah...

Just have to remember that the year comes before the time zone but after the seconds :clown:

No idea where that came from. Probably the time format from the punchcard machine at bell labs.

If it was iso 8601 based, it would almost make sense.


Been looking forward to the macOS DNS resolver changes for a LONG TIME... https://danp.net/posts/macos-dns-change-in-go-1-20/


Oh, wow, thanks for pointing that out. I have to build HashiCorp Vault from source with CGO enabled because of the broken DNS resolver. Otherwise, it's completely unusable with a split-tunnel VPN.


> Go 1.20 is the last release that will run on any release of Windows 7, 8, Server 2008 and Server 2012. Go 1.21 will require at least Windows 10 or Server 2016.

This is interesting. I wonder what Go 1.21 will depend on that requires at least Windows 10?


Nothing specifically AFAIK; it's just fewer platforms to test and support, making development overall easier. Microsoft ended extended support for Windows 7 in 2020, and special enterprise security updates this month. Windows 8 will end extended support July this year (before Go 1.21 is released); I can't find anything about any volume security updates; I think few people care, as Windows 8 is used less than Windows 7 today.

https://github.com/golang/go/issues/57003

https://github.com/golang/go/issues/57004


I used to work at an organization that shipped a Go binary that supported Linux 2.4 kernels, which afaik Golang never supported. There was nothing in the toolchain stopping us from building on an old kernel, but we were on our own for testing and worked around some libc issues in the runtime and stdlib if I recall correctly.

I imagine there's some enterprise customer somewhere on an old version of Windows that will throw money at vendors to make a similar effort.


I wouldn't be surprised if it's more a "we're not going to bother to keep hooking up new things or doing fixes in a way that works and is tested on old operating systems" than "there isn't a way to..." type thing. Some security stuff may break the mold on that though.


https://github.com/golang/go/issues/57004

Nothing concrete, it seems. It means that new releases are no longer tested with the old versions of Windows on their builders, and if you open a bug report about a problem with an unsupported version of Windows, nobody will care.


If they wanted to, they can now use the `LOAD_LIBRARY_REQUIRE_SIGNED_TARGET` flag in LoadLibraryEx.

Aside from that, there are a broad swath of flags to LoadLibraryEx that are only supported on earlier platforms with a KB [1] from over a decade ago installed. My suspicion is that Go has decided that requiring a security KB (while good hygiene) isn't a supportable situation.

[1] https://support.microsoft.com/en-us/topic/microsoft-security...


I would assume it's problems with Microsoft not maintaining those older Windows SDKs in favor of their current monolithic Windows SDK, which only seems to target 10 and 11.


The Go toolchain doesn't use any Microsoft SDK.


> The specification now defines that struct values are compared one field at a time, considering fields in the order they appear in the struct type definition, and stopping at the first mismatch.

This is interesting because in certain cases it can be a performance hit when comparing structs which have been declared in alignment order to save memory. Simple example:

    type t struct {
      a int64
      b int32
    }

    t1 := t{a: 1, b: 2}
    t2 := t{a: 1, b: 3}
    same := t1 == t2
When comparing `t1` and `t2`, the runtime will first compare the `a` fields, which are the same, then the `b` fields, which will differ. Only after doing both comparisons will it figure out that they're different. But the `a` fields are int64, so it has to traverse these large data types before finally getting the answer.

Of course this is a trivial example, in real-world cases structs can have many more fields with much larger contents. The point is that the optimal ordering for alignment, and the optimal ordering for comparison, seem to be different.


That is just the language definition. It is fine for an implementation to actually compare both at the same time, as long as you cannot observe this from within the language. If we can't tell whether the read of b happened together with a or before a (hello, Spectre), then it's fine for the implementation to have done the comparison.

This is more of a constraint if the struct contains a comparison that can panic. The panic must happen in order, or not at all, depending on how the fields are listed.

  type t struct {
        a int64
        b any
  }
This should not panic on b if the a values already differ.


In the few cases where it comes up, maybe you could write out the code to compare the fields in the order you choose?
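E.g., a sketch for the struct above:

    // compare the cheaper, more-likely-to-differ field first
    func equal(x, y t) bool {
        return x.b == y.b && x.a == y.a
    }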


It is great that Go1.20 improves compile times after generics were added in Go1.18 [1]!

Overall, I think adding generics to Go was a big mistake. It brings the following drawbacks:

- Slower compile times, even if generics aren't used. This slows down development pace in Go.

- The reduced code readability if generics are used. This slows down development pace in Go.

- The increased complexity of Go compiler. This slows down Go compiler development and increases chances for bugs.

- Very low adoption of generics in practice, since they aren't useful in most Go code bases. A year after their release in Go 1.18, generics are actively used only in some freaky packages.

The only useful thing to come from Go generics is the syntactic sugar that allows replacing `interface{}` with `any`.

[1] https://go.dev/doc/go1.20#compiler


You forgot to list the most useful feature of adding generics: people on the internet can no longer say "lol no generics", drastically reducing the amount of garbage comments about Go.


Those comments have now been replaced with "adding generics to Go was a big mistake".


Maybe they will, but they haven't, since this is the first time I've seen that comment. The "lol no generics" was endemic.


It’s hard to be objective because of filter bubbles, but I’ve seen it a lot on Reddit last.


*lately

It pops up on /r/golang sometimes. I don’t think it gets taken super seriously but there’s usually at least someone bringing it up.


They'll never go away, it just morphed into "Go was wrong and finally learnt the lesson that Java 5 did 19 years ago by adding in generics".

Go showed that useful software could be written without user-level generics. I don't think any other language today would dare to do that. In fact most languages seem to be converging into the same thing.


We already knew how to write useful software without user-level generics, we have been doing it for decades since FORTRAN came to be in 1957, no need for Go to prove anything beyond the stubbornness of its designers.


Useful software can also be written in asm and we have the entire early software industry to demonstrate that.

That's not the same as it being a good idea


Go is about productivity. It allows writing, extending and maintaining big codebases with less effort compared to assembly or some other programming languages out there. This is because of its simple "what you read is what you get" syntax without implicit code execution. Generics break this feature :(

Of course, there are other brilliant features in Go ecosystem, which simplify writing and maintaining non-trivial codebases in Go - tooling, standard library, fast compile times, statically linked binaries, etc.


Except that these people now pollute the Internet with freaky packages, which use generics in esoteric ways :)


But if adoption is “very low”, then it’s not much pollution, is it?


At first I was on the fence too. I don't use it all the time, but when I need it, it works as expected, and it is much less of a hassle than what I remember of writing C++ templates.

I have not seen a lot of comments that complained about the slower compile times. In my own experience it didn't really have an impact. But I agree the compiler should not become slower over time, so I appreciate the effort of the Go team to bring the compiler speed back.

I don't think that code readability is so much impacted. The square brackets work well. I find the angle brackets from C++ harder to read and there is the problem that >> is a token and cannot be used for two closing template angle brackets.

The increased complexity of the compiler is an issue, but cannot be avoided if you want to support Generics. But they took the time to make it right and as I stated it works for me.

I don't think that there is low adoption. Using type parameters visibly in a public API breaks the API, which is the reason there are not a lot of uses in the standard library and in popular packages now. But this will change when maps and slices are integrated into the standard library, which will provide completely new APIs. Yesterday I found a library for writing and reading Parquet files which used them quite extensively. But since I was simply checking what libraries existed to assess how well the file format is supported, I cannot say much about whether the library's use of type parameters is useful.


> there is the problem that >> is a token and cannot be used for two closing template angle brackets.

This was fixed years ago in C++11.


They still have other issues, though. You ever had to put `template` before a templated member function called from an object of a `typename` type? Not fun, yet I have to do it all the time in my C++ code. MSVC and Comeau make this problem optional, but GCC and Clang don't (as per the standard).

The chevrons also confuse all my editors, because they can't figure out whether `<` is an opening bracket or just a less-than operator. I'm a bit miffed that Cpp2 doesn't try to replace the chevrons with something else, like D did. I'm also miffed that Rust uses the chevrons too.


> - The reduced code readability if generics are used. This slows down development pace in Go.

Speak for yourself: I prefer generics to the mess of copy-pasta or generated stuff that one had to use before.


Could you provide a few links to popular open source projects, which switched to generics and got better code because of this?


Suppose you're the maintainer of a popular open source project, and you have a million users running a working, stable version using code generation.

What priority would you assign a bug that migrates the existing, working code that other people are doing god knows what with downstream? What would the theoretical benefits "better code" be to your downstream, and how do they weigh against the cost of "I ran go mod tidy, and our build broke?"

I can think of a use where it would lead to better code, and that's in k8s custom resources, where each resource type also has an associated list type that people create with code generation. It'd be much neater for k8s lists to be

    type List[T runtime.Object] struct {
        ...
        Items  []T
    }
Than the way it is now: https://www.google.com/search?q=site%3Agithub.com+zz_generat...



I doubt this library is useful. Some of the provided functions, such as Clone(), Clear(), Copy() and Equal(), can be replaced by simple, readable three-line loops. Others are plain harmful. For example, Keys() and Values() may lead to huge memory allocations when working over big maps, while DeleteFunc() worsens code readability and maintainability compared to a simple three-line loop. Additionally, it may slow down the resulting code.
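For reference, here's the three-line loop that substitutes for maps.DeleteFunc(m, del) (deleting during range is safe in Go):

    for k, v := range m {
        if del(k, v) {
            delete(m, k)
        }
    }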


Useful enough to be included in the standard library. https://github.com/golang/go/issues/57436#issuecomment-14125...


That would be a mistake :(


FUD


> I doubt this library is useful

It is.

> Additionally, it may slow down the resulting code.

FUD.


Agreed - it's my fault that I didn't explain why maps.DeleteFunc() can be slower than a simple loop with an "if" statement inside. The reasons for the slowdown are:

- The Go compiler may fail to inline the callback passed to maps.DeleteFunc(). This results in additional overhead for a callback call per item in the map.

- The Go compiler may move some variables inside the callback from the stack to the heap. This results in additional memory allocations compared to a simple loop, leading to additional load on the garbage collector.


By that logic we should remove bytes.Clone, strings.Split, bytes.Equal, strings.TrimLeftFunc, etc. from the standard library.


The bytes.Clone() uses a non-trivial trick under the hood - `append([]byte{}, b...)`. Compare it to the trivial loop behind maps.Clone().

The strings.Split() implementation is non-trivial because of performance optimizations.

The bytes.Equal() is actually written in highly tuned and optimized assembly in order to achieve high performance for inputs of various lengths.

Now compare this to the trivial implementations behind the generics-based functions for maps. And do not forget that these implementations may hurt performance because of excess memory allocations in the Keys() and Values() functions, or because the compiler may fail to inline the callback passed to DeleteFunc().


Those are good points about some of them being nontrivial.

Though part of why they’re optimized and in the stdlib in the first place is because they’re such common patterns. So without them people would end up writing trivial, unperformant, custom versions. So now that more routines can be moved into the stdlib, they can benefit from optimization later.

(I’m not sure how much the maps routines specifically can be optimized, but stdlib routines can generally be more aggressive with unsafe or asm or being coupled to the runtime and its quirks, like bytes.Clone, strings.Builder, etc.)

And there are still plenty of ubiquitous patterns that have been worth including in the stdlib even if they’re usually just simple loops that aren’t very optimizable. Like strings.Index is an easy loop to write, but it comes up so often. Or strings.Cut is basically just an if-statement. But it makes code clearer about its intentions; and optimizations to these down the road benefit everyone.

It’s also true that maps.Keys and maps.Values allocate slices, and that you could avoid this with a loop, but strings.Split, bytes.Split, regexp.FindAll, os.ReadDir return slices and are still worthwhile as opposed to specialized iterators for each one. As with any code, you’re conscious of memory allocations where it counts, and optimize as needed.

In fact, now that generics make it possible, the Go team has discussed using iterators (https://github.com/golang/go/discussions/54245), which would benefit strings.Split even further in addition to all the other slice-returning functions.

So generally you have a few options for those slice-returning functions:

- Custom inline loop for some of them. More verbose, will probably be naive and not benefit from stdlib optimizations.

- Return a slice and iterate over it with a for loop. Creates allocations that could probably be avoided.

- Create a customized iterator for that type. Unfortunately, you can't really use an ordinary for loop, and you need an extra custom iterator for each type.

- Use generic iterators to benefit from the optimized functions and also avoid allocation overhead.

So part of the motivation is that now with generics there’s a variety of further optimizations available even to old functions like strings.Split and regexp.FindAll, in addition to opening up common patterns and optimizations for maps/slices/etc. to be included in the stdlib.


Agreed with most arguments.

A few remarks:

> Like strings.Index is an easy loop to write, but it comes up so often

Actually, strings.Index() is a very non-trivial function, partially written in assembly in order to achieve high performance [1]. This function is used in Go projects *much more frequently* than functions from the golang.org/x/exp/maps package.

> strings.Cut is basically just an if-statement

No, strings.Cut() has non-trivial code compared to a trivial loop for map copy or map delete [2].

> It’s also true that maps.Keys and maps.Values allocate slices, and that you could avoid this with a loop, but strings.Split, bytes.Split, regexp.FindAll, os.ReadDir return slices and are still worthwhile as opposed to specialized iterators for each one.

The *key* difference between maps.{Keys,Values} and the mentioned functions from the standard library is that it is trivial to write `for k, v := range m` instead of maps.{Keys,Values} and avoid the memory allocations, while it isn't trivial to write allocation-free code that substitutes for strings.Split() or the other mentioned functions from the standard library.

[1] https://github.com/golang/go/blob/86c4b0a6ec70b07ab49d3813a5...

[2] https://github.com/golang/go/blob/86c4b0a6ec70b07ab49d3813a5...



That's a good example of using generics-based sorting! I think it would be more beneficial to work on compiler optimizations, which could optimize the standard interface-based sort.Slice and sort.Sort to the same level. This shouldn't be too hard to do for the most widespread case when the compiler knows the actual type hidden behind the sort.Interface, so it could inline methods of this interface.

This optimization then could speed up other similar interface-based algorithms.


Can you point to one that had the opposite effect?


No. I suppose maintainers of popular open source projects couldn't justify introducing generics into their code. This is exactly my story - I was actively seeking places in the VictoriaMetrics codebase which could benefit from switching to generics. Unfortunately, not a single place has been found yet. There was a promising place for generics [1], but it turned out that it is better, from a readability and maintainability PoV, to use plain old interfaces instead.

[1] https://github.com/VictoriaMetrics/VictoriaMetrics/blob/mast...


Tacking on generics on a working code base is difficult.

I effectively used it in this little experimental CLI library: https://github.com/cpuguy83/go-cli/blob/main/command.go

It's pretty simple, but the nice thing is it can use any flag library you want (stdlib flag package, pflag, whatever).


Try substituting the F generic type with a FlagSet interface, and the code will become clearer without the need for generics.


That totally changes it as it requires type assertions to use. With generics `Flags()` returns a concrete type and not an interface.


The question is whether Flags() should return the concrete type instead of a FlagSet interface. Another question is whether the additional complexity of generics is worth the ability to get the concrete type from Flags().

In fact, the generics-free implementation based on the FlagSet interface is more flexible, since it allows storing multiple different FlagSet implementations in the same Cmd.


I mean the generic type def there could be changed such that different Cmd's can have different flag implementations.

The main point is that generics come in handy when building libraries, so that you aren't forcing callers of your library into specific types or losing some type safety.


Yeah, and I don’t tend to keep this around quantitatively, but I’ve certainly run into bugs in Go programs that would’ve been categorically prevented with generics.

Of course I want sync.Map to use generics instead of interface{}. How could I not? And it’s less complex-looking than type-asserting everywhere.
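A sketch of the kind of wrapper generics let you write today (my own type and package name, not a stdlib API):

    package syncmapx

    import "sync"

    // SyncMap is a type-safe wrapper; the single assertion lives in Load.
    type SyncMap[K comparable, V any] struct{ m sync.Map }

    func (s *SyncMap[K, V]) Store(k K, v V) { s.m.Store(k, v) }

    func (s *SyncMap[K, V]) Load(k K) (V, bool) {
        v, ok := s.m.Load(k)
        if !ok {
            var zero V
            return zero, false
        }
        return v.(V), true
    }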


sync.Map is a good example which could benefit from switching to generics. The problem is that this is the only useful case for generics. This case could have been implemented in a way similar to the built-in map[k]v generic type, which has been available in Go since the first public release. And that could have prevented opening the Pandora's box of generics.


It’s not the only one. Some other packages that used workarounds like interface{} or other things to work around the lack of generics were container/{heap,list,ring}, sort, golang.org/x/sync/singleflight, /x/exp/{maps,slices}, etc. And people will want to write their own patterns at other times of course too. It wouldn’t be reasonable for these all to become builtin types like map. These standard library packages that already exist will also become more efficient and potentially reduce allocations (when using primitives) as well.


container/list and container/ring are among the least useful packages in the standard Go library, since they aren't used widely in Go programs. It is better, from a performance and readability PoV, to use ordinary slices instead of these packages in most cases.

The container/heap is more useful, but it could benefit more from adding an optimization for inlining interface method calls when the Go compiler knows the underlying implementation behind the interface.

The golang.org/x/exp/maps is useless and may be harmful [1].

The golang.org/x/exp/slices is mostly useless, except for the Sort*() functions. But it would be better to apply the optimization mentioned above to the standard sort.* functions instead of forcing users to switch to different Sort*() implementations in other packages.

[1] https://news.ycombinator.com/item?id=34622393


> The container/heap is more useful, but it could benefit more from adding an optimization for inlining interface method calls when the Go compiler knows the underlying implementation behind the interface.

This is exactly what generics do. With e.g. a heap.Heap[uint32] the compiler knows the implementation and there’s no interface method call overhead.

In order for the compiler to do this optimization, it has to know that you don’t e.g. pass a *heap.Heap[uint32] to a function expecting *heap.Heap[uint64], so the type system is what allows it to optimize.

And on top of that, now the user also gets assurance at compile time that heap.Heap[uint32].Pop returns a uint32, preventing bugs from type confusion and also so you don’t have to add type assertions everywhere you use the heap.

So now heap, sort, etc. can benefit from this improved performance; users don’t have to write wrapper types and interface implementations just so their type can be sorted; and bugs are prevented at compile time.

For [1] I posted a reply. It's true that there are overheads with some slice-returning routines, but I explained in the reply how I viewed the tradeoffs.


In theory the compiler can inline interface method calls without the need to introduce generics. For example, it can detect that the customStruct is passed to the sort.Sort() in the code below, and then instantiate the sort.Sort() code for the given inlined Less(), Swap() and Len() interface calls:

    type customStruct struct { ... }
    func (cs *customStruct) Less(i, j int) bool { ... }
    func (cs *customStruct) Swap(i, j int) { ... }
    func (cs *customStruct) Len() int { ... }

    func sortMyCustomStruct(cs *customStruct) {
      sort.Sort(cs)
    }

The tricky part here is that the compiler should be careful when instantiating such calls for different interface implementations, in order to avoid generated-code bloat. For example, if sort.Sort() is used with a thousand different sort.Interface implementations, then it may not be a great decision to create a thousand distinct sort.Sort() instances, one per implementation. But this should work OK for a dozen distinct implementations.


> since they aren't used widely in Go programs

They aren't widely used because the ergonomics suck, because they aren't generic yet.


You acknowledge in the first sentence that compile times are back in line with Go 1.17 (i.e. pre generics), yet you claim that generics mean slower compile times.


Go 1.18 and Go 1.19 had slower compile times compared to Go 1.17, even for codebases which do not use generics. For example, VictoriaMetrics [1] - the project I work on, which is written in Go without generics.

[1] https://github.com/VictoriaMetrics/VictoriaMetrics/


Yes but these are release notes for Go 1.20.

(1) Make it work, (2) make it right, (3) make it fast

They did 1 and 2 in the last two releases, and in Go 1.20 they did (3). So what's left to complain about?


Why spend time and effort adding useless generics in the first place? Maybe it would have been better to spend this time on performance optimizations, compile-time optimizations and binary-size optimizations instead.


Because I would rather write one generic container library than n slightly different ones for each data type, or deal with screwing around with casting interfaces?


This sounds good in theory. What about practice? Could you provide a few links to popular Go packages, which benefit from using generics?


Because it's extremely useful to have adaptable data structures, without byzantine code generation?


Could you provide links to a few Go packages, which implement useful generic-based data structures?


Try google.


Because they’re not useless.


Thanks for very strong argument :)


No weaker than yours :)


I use generics all the time in my Go code, in particular with the exp/slices library and lo. I do not find it less readable. I think readability is subjective based on people's programming experience and familiarity with type systems.


I’m gonna be that guy, but do you have sources for any of this? That link shows that compiler performance is the same as before generics, for instance.

Are there more bugs in the compiler? Is readability reduced, and having an effect on pace? Especially if adoption is so low to begin with? Is adoption actually so low, or just rising?


That link admits that compiler performance was lower than it should have been in Go 1.18 and Go 1.19 because of generics, even when compiling Go code without generics. I can confirm this based on my own open source projects written in Go [1].

[1] https://github.com/valyala/


I understand, but it shows that compiler performance is the same as before generics now, so any performance hit is gone.


But the time spent on designing, implementing and then optimizing the generics is lost. This time could be spent on more valuable things for Go ecosystem.


Generics consistently showed up as one of the most desired features (if not the most desired) by working Go developers in the previous developer surveys, so I think it makes sense that the Go team felt the ecosystem saw much value in it relative to other features and worth the time.


Unfortunately, the Go team was misguided by a vocal minority who were using the "missing generics" argument as an excuse for not switching to Go from their favorite programming languages. The majority of working Go developers were happy with Go, so they didn't take part in the debates and surveys about generics.

The irony is that the vocal minority still don't use Go, since they have other excuses now - "bad error handling", "missing functional features", etc. And if these harmful features are implemented in Go, the vocal minority will find yet another reason not to use it.


I mean, here's the Go 1.18 release notes and what they have to say about the dev team's level of faith in the stability of their implementation: https://go.dev/doc/go1.18

Is it a surprise that the low uptake is there with a discouragement like that?


I doubt that was very widely read. I think it would be a mistake to attribute much of the adoption to that statement.


I'm sorry, but I don't get this comment. Go is a great programming language! It is great because it is easy to write, read and maintain programs written in Go. This is because of the following features:

- Simple syntax

- Fast compile times

- Great tooling (go fmt, go vet, go tool pprof, go tool cover, race detector, etc.)

- Useful standard library

Generics do not improve any of these features :( They complicate the syntax, they slow down compile times, and they complicate the internals of the Go compiler and tools.


BTW, it is great to see that Go1.20 and the upcoming Go1.21 [1] return to good things instead of generics:

- Performance optimizations (profile-guided optimization, GC optimizations, etc.)

- Reducing compile times

- Reducing binary sizes (improved linker, which can throw away unused code more aggressively)

[1] https://groups.google.com/g/golang-dev/c/V8ez4YunkeE


You missed the highlight - the best part of go generics is it shut up all the people whining that go didn't have generics! =)


Worse program readability is not a true con. Any codebase can have positive or negative code quality; generics do not make this worse.

In fact, it can improve readability and maintainability in some cases instead of having multiple structs copy pasted everywhere.


>Worse program readability is not a true con. Any codebase can have positive or negative code quality; generics do not make this worse.

While true that generics are not necessarily going to make programs harder to read, it's also not very interesting to talk about theory.

The question which is answerable is: "on average, were Go programs easier to read before the introduction of generics?" The interesting point is what actually happens, not what could.


You are correct that we must observe this in practice. But the original poster of this thread purports that generics are making Go worse in multiple ways: 1) slower compile times (without evidence) and 2) worse code (without evidence).


Sure, they haven't proven their claim. This is all the realm of speculation.


Exactly this! It's such a shame generics were added.


Really glad to see the ability to convert seamlessly from slices to arrays. Not that I use it often, but it seemed like such a 'natural thing' to convert between.
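For anyone who missed it in the notes:

    s := []int{1, 2, 3, 4}
    a := [4]int(s)    // new in Go 1.20; copies, panics if len(s) < 4
    p := (*[4]int)(s) // the pointer form, allowed since Go 1.17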


Off topic: I didn't realize Google Groups was still around.

Does anyone still actively use Usenet?


Google Groups, if I understand correctly, is used internally at Google extensively.

Last time I checked Usenet, it seemed like many of the users were pirates. The pirate releases would then get reuploaded to bittorrent sites.


We still use Google Groups at our journal at my university, although we mainly use it as a mailing list.


I might be wrong there, and there are different kinds of groups, but these mailing-list groups are a very common way of setting permissions if you are using Google Workspace, so it's not some abandoned Usenet leftover.


>Comparable types (such as ordinary interfaces) may now satisfy comparable constraints, even if the type arguments are not strictly comparable (comparison may panic at runtime). This makes it possible to instantiate a type parameter constrained by comparable (e.g., a type parameter for a user-defined generic map key) with a non-strictly comparable type argument such as an interface type, or a composite type containing an interface type.

Wait, isn't that the whole point of the constraint in the first place, to keep you from using it with things that aren't comparable? Wouldn't it make more sense to have the constraint be a requirement in the interface itself, so that you can't create an interface value from a type that isn't comparable?
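For context, a sketch of what 1.20 newly accepts:

    func index[T comparable](s []T, target T) int {
        for i, v := range s {
            if v == target { // may panic at runtime if the dynamic type isn't comparable
                return i
            }
        }
        return -1
    }

    // index([]any{1, "two", 3.0}, "two") compiles in Go 1.20;
    // earlier compilers rejected instantiating T with an interface type.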


It is really a pity Go didn't support the comparable interface from 1.0. The Go 1.20 change is just a compromise between theoretical perfection and that pity.


I mean, from my limited experience of Go, it's not exactly a language that tries to help you avoid unexpected runtime errors or anything, so maybe they figure Go programmers are already used to gotchas like that.


Why did you reply to yourself?


Probably didn't know about the "edit" button. I used to reply to myself before I learned about it. (And I still do, when the two-hour edit window has expired.)


Well actually, I think I did consider editing it, but it felt like my first comment stood fine on its own, and I'd leave the follow-up comment to be downvoted separately.


Yeah, that edit window is way too short.


I for one, welcome the error wrapping change.


math/rand: The original behavior never bothered me and actually motivated deterministic tests. File this under "won't be bothered to read the docs", I guess.


> File this under "won't be bothered to read the docs"

Do you read the docs of every basic feature you use? When you call a rand function, you usually expect it to be seeded. In any language, I'll only read the docs of a rand function to know how it handles bounds. You still can seed it manually in your tests if you want a deterministic behavior.

It's like if you used `time.Now()` only to discover that you first have to call `time.StartClock()` for it to work as expected, it can make sense but is not the commonly expected behavior.


Yes, I do. All languages have weird corner cases in the standard libraries, and third-party dependencies have even worse edge cases. "Plug and play" rarely works for me, and it's better to know how something you plan to rely on in a production environment works than to just assume it will work.


>Do you read the docs of every basic feature you use?

Yes? You don't?


No.


Package size dropped by 200MB on Arch. What happened?


https://go.dev/doc/go1.20#go-command

"The directory $GOROOT/pkg no longer stores pre-compiled package archives for the standard library: go install no longer writes them, the go build no longer checks for them, and the Go distribution no longer ships them. Instead, packages in the standard library are built as needed and cached in the build cache, just like packages outside GOROOT. This change reduces the size of the Go distribution and also avoids C toolchain skew for packages that use cgo."


They stopped including precompiled package archives for the standard library


Someone blogged about it in a bit more detail:

https://tomaszs2.medium.com/%EF%B8%8F-go-1-20-released-its-l...


That's less detail, not more.


They changed the URL from a rather sparse Google Groups post - that might be why there's now relatively less content.


The comment might predate the submission URL change

https://news.ycombinator.com/item?id=34617088


Release notes here: https://go.dev/doc/go1.20





