Optimized Go code will generally perform on par with optimized Rust code. Exceptions exist and Go is a much less expressive language but that affects developers more than users. Go's error handling with wrapping is very similar to the anyhow crate which also has runtime overhead. The happy path should occur much more often than the error path and so the overhead is usually worth it for developer convenience in debugging.
The quality of code relying on reflection varies wildly (even within the standard library) so I don't think anything about the reflection system itself is the problem. I've written reflection-based solutions to problems sparingly but with an eye to giving good output on errors so debugging is easy. You can write some awful and hard-to-debug macros in Rust too.
> The quality of code relying on reflection varies wildly (even within the standard library) so I don't think anything about the reflection system itself is the problem. I've written reflection-based solutions to problems sparingly but with an eye to giving good output on errors so debugging is easy. You can write some awful and hard-to-debug macros in Rust too.
Maybe? I have yet to see easy-to-read, easy-to-debug Go reflection. I have not encountered hard-to-debug macros in Rust (probably because I have almost never needed to use a debugger in Rust in the first place). That doesn't mean these don't exist, of course.
I don't really use an interactive debugger with Go that often; I prefer error wrapping and logging. That said, I've found it is often best to disentangle reflection and business logic by converting to an intermediate representation. For example, I wrote a library that takes a struct pointer and populates the struct's fields with values taken from environment variables, using field tags like `env:"FOO"`. The first version did at least have useful error messages, e.g.
reading environment into struct: struct field Foo: as int32: strconv.ParseInt: parsing "a": invalid syntax
However, it wasn't very extensible, because you had to read through a lot of boilerplate to figure out where to put new features. I'd also bet debugging it through an interactive debugger would have been quite a headache, though I never had to do that. So I rewrote it and split it into two parts: a reflection-based part which returned details about the extracted fields and their tags, and a (nearly) reflection-free part which took those details, parsed the corresponding environment variables, and injected the values into the fields. The intermediate representation is usually hidden behind a simple interface, but it can be examined when needed and prints nicely for debugging.
> Optimized Go code will generally perform on par with optimized Rust code.
This is an incredibly dangerous assumption. Go and Rust are in a completely different weight class of compiler capability.
Before making an assumption like this, I strongly suggest building a sample application (that is more complex than hello world) in Go and Rust, then compiling both with optimizations and taking a peek at what they compile to with Ghidra/IDA Pro/any other disassembler.
C# + .NET 8 can sometimes trade blows with Rust + LLVM, particularly with struct generics, but Go definitely can't: it is far behind in compiler features and doesn't leverage the techniques for reducing the cost of non-free abstractions that .NET and OpenJDK employ.
I should clarify that by "optimized" I meant hand optimized, not just compiler optimized. You can easily code yourself into surprisingly poor performance with Go, though usually for the same/similar reasons as C#/Java. Rust makes it a little harder to get into the same situations though it's by no means impossible.
As to zero/low-cost abstractions, Go has very few abstractions at the language level, and that is what I meant by "much less expressive". If you try too hard to write Go as though it were another language, you will shoot yourself in the foot. However, the Go compiler, standard library, and broader ecosystem have come a long way and enable performance profiles very similar to most other compiled languages. In my experience, at least in micro-benchmarks, I haven't seen any major performance differences between Go and e.g. C on things like encryption, hashing, RNG, SIMD, etc. You do need to avoid allocations in hot code paths, but that's true of any garbage-collected language.
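The allocation-avoidance point usually comes down to preallocating and reusing buffers. A small illustrative sketch (function names invented):

```go
package main

import (
	"fmt"
	"strconv"
)

// formatIDs is the naive version: the slice regrows repeatedly and
// strconv.Itoa allocates a fresh string per call.
func formatIDs(ids []int) []string {
	var out []string
	for _, id := range ids {
		out = append(out, strconv.Itoa(id))
	}
	return out
}

// formatIDsPrealloc sizes the slice once up front and reuses a scratch
// buffer via strconv.AppendInt, so per-item heap work is limited to the
// final string copy.
func formatIDsPrealloc(ids []int) []string {
	out := make([]string, 0, len(ids)) // single allocation, known capacity
	buf := make([]byte, 0, 20)
	for _, id := range ids {
		buf = strconv.AppendInt(buf[:0], int64(id), 10)
		out = append(out, string(buf))
	}
	return out
}

func main() {
	fmt.Println(formatIDsPrealloc([]int{1, 22, 333})) // [1 22 333]
}
```

Running both under `go test -bench . -benchmem` is the easy way to see the difference in allocations per operation.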
The outstanding performance issues I'm aware of are around deferring, calling methods through an interface, and calling out to cgo. These are pretty minor in most situations but can eat up a lot of CPU cycles if done often in tight loops. They've also been getting better over time and can sometimes be optimized away with PGO.
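If you want to measure the interface-dispatch cost yourself, a rough sketch (type names invented; whether the compiler devirtualizes the interface call varies by Go version and, with PGO, by profile):

```go
package main

import (
	"fmt"
	"testing"
)

type adder interface{ add(int) int }

type intAdder struct{ n int }

func (a intAdder) add(x int) int { return a.n + x }

func main() {
	concrete := intAdder{n: 1}
	var iface adder = concrete

	// Direct call: the compiler can inline intAdder.add.
	direct := testing.Benchmark(func(b *testing.B) {
		sum := 0
		for i := 0; i < b.N; i++ {
			sum = concrete.add(sum)
		}
		_ = sum
	})

	// Interface call: dynamic dispatch through the itab, unless the
	// compiler (sometimes with PGO's help) can devirtualize it.
	indirect := testing.Benchmark(func(b *testing.B) {
		sum := 0
		for i := 0; i < b.N; i++ {
			sum = iface.add(sum)
		}
		_ = sum
	})

	fmt.Println("direct:   ", direct)
	fmt.Println("interface:", indirect)
}
```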
This is fair, but performant data processing in Go simply has a much lower performance ceiling than in Rust, C#, Swift or C++, which is why all the standard library routines that do it (text search, hashing, encryption, etc.) are written in Go's custom """portable""" ASM dialect. It is unwieldy and has API omissions because it takes the same half-assed approach that many things in the Golang ecosystem do.
The closer, and better, example is hand-optimized C#, where you still (mostly) write portable, generic code for SIMD routines, and it gets compiled to almost the same codegen as hand-intrinsified C++.
I understand where this belief comes from, but the industry at large does itself a huge disservice by tunnel visioning at a handful of (mostly subpar) technologies.
Unfortunately, you are right that SIMD support in Go isn't great and basically requires writing platform-specific code in its quirky assembly language, or using a third-party library which does that for you.
I don't think the existing use of assembly under the hood to accelerate performance when possible is something to be frowned upon though. It's an implementation detail that almost never matters.
One issue with the performance of that assembly code, though, is that it passes arguments and return values on the stack. I saw something about allowing assembly code to use registers instead, but I don't think it's available yet.
I think you overhype C# and Swift. We benchmarked a gRPC API server in C# (.NET Core) and Go, with the same implementation in both languages, and Go, while faster, also used far less memory.
Yes, the default hasher in Rust's standard collections (SipHash) is designed to resist hash-flooding (HashDoS) attacks at the expense of speed. You can get a large speedup by replacing it, e.g. https://nnethercote.github.io/perf-book/hashing.html .
I believe this is because the code `counts[word]++` basically works by:
1. Hash word, retrieve the value from the counts map
2. Increment the value
3. Hash word, store the value into the counts map
By changing counts to a map from strings to int pointers, you only need to perform step 3 once per unique word. Once you've looked up the int pointer, you need only write through the pointer, rather than re-hashing the word to store the value back into the map, which can be expensive.
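The pointer-map trick can be sketched like this (a minimal illustration; `countWords` is an invented name):

```go
package main

import "fmt"

// countWords counts with a pointer-valued map: each word is
// hashed-and-stored only once, on first sight; every later increment
// writes through the saved pointer without touching the map again.
func countWords(words []string) map[string]*int {
	counts := map[string]*int{}
	for _, w := range words {
		p, ok := counts[w]
		if !ok {
			p = new(int)
			counts[w] = p
		}
		*p++
	}
	return counts
}

func main() {
	// The naive version: counts[w]++ is a map lookup followed by a map
	// store, so the key is hashed twice per increment.
	naive := map[string]int{}
	for _, w := range []string{"a", "b", "a", "a", "b"} {
		naive[w]++
	}

	counts := countWords([]string{"a", "b", "a", "a", "b"})
	fmt.Println(naive["a"], *counts["a"]) // 3 3
}
```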
I am surprised this doesn't get optimized by the compiler. I assume this is necessary in the general case (map could be modified concurrently, causing the bucket for the key to move) but it obviously isn't here.
Look at the antecedents of some of those comments and you'll see the bar for successfully introducing an unasked-for language comparison on a language thread is very high.
A reasonable rule of thumb specific to this thread: unless the article is about a comparison between Rust and Go, you probably cannot safely discuss Go in a Rust thread, or vice versa. It's a lit match thrown into dry brush.